
Apple rumored to stop using Intel chips by 2020

Discussion in 'BYTE ME: Technology discussion' started by glasspusher, Apr 3, 2018.

  1. glasspusher

    glasspusher Member SoSH Member

    Messages:
    9,262
     Thought this would be an appropriate place to post this. Intel has not been able to penetrate the mobile market - their chips have high peak power requirements and as such would need bigger circuitry and other components to take advantage of their peak speeds. They've been able to get away with it in laptops, not so much in phones. ARM architectures are taking over...

    https://www.bloomberg.com/news/arti...an-move-from-intel-to-own-mac-chips-from-2020
     
  2. NortheasternPJ

    NortheasternPJ Member SoSH Member

    Messages:
    11,815
     I know I'm not the majority, but I really don't care anymore. I was so thankful years ago for Intel and x86 support and Boot Camp and everything else. In 2018, I don't really need Windows at all anymore at work. Office 2016 on Mac is actually quite good. Other things, like VMware vCenter, I can use on my Mac with the web GUI. Really the only thing I ever use my Windows image for is Visio, which I can get rid of completely with LucidChart.

     I used to be in my Windows VM constantly, but I really don't need it at all anymore. Intel has been a thorn in Apple's side on the Macs for a while now.
     
  3. jimv

    jimv Member SoSH Member

    Messages:
    972
     Intel's execution in processors has been abysmal over the past few years. The 14nm rollout was delayed and piecemeal. The 10nm rollout is so far behind that they had to produce additional 14nm designs to have some new product to market/sell. I'm not sure they have a roadmap beyond that (iow, 7nm etc). Meanwhile, Samsung and TSMC have maintained their fabrication cadence.

     Given that performance, Apple should, very publicly, consider alternatives. But I have questions:
     • Can ARM processors, with the Apple tweaks, deliver the requisite user experience? It worked for the iPhone; will it work for iMacs?
     • The switchover to iOS is interesting; can they pull off a new operating system and new hardware at the same time?
     • Who will fabricate the (iirc) 15-20 million chips?
     
  4. gtmtnbiker

    gtmtnbiker Member SoSH Member

    Messages:
    202
     I tried to use the Web UI exclusively, but it's not reliable, so I find myself going back to the VMware thick client.
     
  5. Blacken

    Blacken Robespierre in a Cape SoSH Member

    Messages:
    11,983
     Most of the markets where an ARM Mac makes sense are markets where a less shitty iPad makes just about as much sense. It's very likely that we'll see downmarket MacBooks or iPad/MacBook convergence where battery life is a priority. But x86 is used because x86 has performance that, outside of fairly contrived benchmarks, not even Apple's ARM stuff is able to match. I am not going to be encoding video on an ARM Mac in 2020.

    But it might be a Threadripper, depending on where they go with the new Mac Pro.
     
  6. NortheasternPJ

    NortheasternPJ Member SoSH Member

    Messages:
    11,815
     I wouldn't be surprised by a move to ARM at the lower end and AMD at the higher end, with the higher-end machines having both an ARM chip and an AMD processor in them.

     I'd like to see how much more performance Apple can get out of an ARM chip in something with a fan and more space for cooling, though.
     
    #6 NortheasternPJ, Apr 4, 2018
    Last edited: Apr 4, 2018
  7. wade boggs chicken dinner

    wade boggs chicken dinner Member SoSH Member

    Messages:
    13,945
     I'm not a tech guy, but I'm confused. Isn't the only reason you can do all of this on macOS that your Mac is running an Intel chip?

     I'm interested in this thread because I may be the only person in this forum trying to keep up with MS's attempt to combine Windows 10 with Snapdragon processors. The first batch of machines is running the Snapdragon 835, which is enough to power Windows 10 Mobile but not enough to run any 64-bit x86 apps, among other limitations (here's an article discussing them: http://www.zdnet.com/article/window...more-limited-and-heres-how-reveals-Microsoft/). Bottom line: most reviews say to skip the Snapdragon 835 machines and see how much better the Snapdragon 845 machines will be.
     
  8. shaggydog2000

    shaggydog2000 Member SoSH Member

    Messages:
    3,935
     It seems like another step, along with starting to make their own displays, to control more of their supply chain and make more things in-house. Eventually they would aim to buy all the commodity parts on the open market and make all the differentiating, value-add parts themselves.
     
  9. jimv

    jimv Member SoSH Member

    Messages:
    972
  10. nighthob

    nighthob Member SoSH Member

    Messages:
    6,014
     Yeah, I can see them moving their MacBook Air and iMac lines to ARM; the recent intro of the iMac Pro line seems to point in that direction. Put cheaper ARM-based chips in the consumer-end PCs and more powerful processors in the niche prosumer machines, helping to catch more consumer sales by lowering hardware prices.
     
  11. Blacken

    Blacken Robespierre in a Cape SoSH Member

    Messages:
    11,983
    They can probably do pretty well. Even very well. They can address ARM's biggest performance problems (limited execution resources, relatively short OOE depth). These are linear problems to solve and the transistor budget goes way up when you can use a chip that doesn't have to fit in an iPhone. However, this necessarily implies splitting the silicon team and tasking large parts of the team with building what is almost unexplored territory in ARM.

     We aren't talking modular components here. We're talking about significantly changing the overall guts of the chip. Apple's hardware design teams are, obviously, very good. But that's a difficult step to make. They design the A-series and S-series chips in-house, but, despite the "it's all custom" claims out there, the way they behave suggests that they're relatively incremental work on standard ARM designs (A11 is a pretty standard big.LITTLE design with some Apple secret sauce, S1 is an ARMv7 chip notable mostly for its downscaling). Even small permutations require a lot of work to do and to get right, as we've seen from the size of Apple's team. For PC-scale changes, you are looking at drastic reworks of the chip, to the point where you almost might as well not use an ARM chip. I'm sure Apple is planning to try, even if they haven't so far. But they aren't guaranteed success--the fuckup potential is high, because all that software written for x64, even if it's ported fat-binary style to OS X ARM, has implicit expectations around performance and behavior that may not hold if they see a significant perf regression.

     Apple will be competing with Sapphire Rapids at that point, with Intel probably being in 7nm territory and having all that accumulated experience from now 'til then. I wouldn't let Intel's complacency over the last few years cast much doubt on the fact that Intel is the best in the world at what they do. (I am excited to see what they do now that AMD has figured out which is their ass and which is their elbow.)
     
    #11 Blacken, Apr 4, 2018
    Last edited: Apr 4, 2018
  12. cgori

    cgori Well-Known Member Silver Supporter SoSH Member

    Messages:
    2,102
    I think you guys are misreading this a little bit. It's a business decision primarily, and a technical one secondarily, but I love the technical part so mostly I'll focus on that.

     Apple bought PA Semi (Dan Dobberpuhl's company) many years ago - this is the team that is implementing their ARM cores. Apple has an ARM architectural license, dating from the earliest days (I believe going back to the Newton), so they have free rein to do a lot of things. They are trying to balance the need for x86 ABI compatibility against the cost of being attached to Intel for a key component - this is the basic business decision. Apple has staffed that ex-PA team to the high heavens - I believe they had 150-200 people at the beginning and have heard rumors of 1000+ now. So they can probably get quite a lot done, but they are still small compared to the teams at Intel doing this stuff.

    Intel (with x86) is biased micro-architecturally to higher performance, and ARM to lower power, but there is nothing at the ISA level that forces that, it's just what they make. In fact, in energy (not power) terms, you can see (fig 10) that energy for the x86 is not way out of line with ARM's A15 - depends on workload but in some cases x86 is more efficient. You can also see in that link at fig 11 that the power-MIPS (BIPS) tradeoffs are really just a line that different micro-architectures sit on at different points. Put another way, I think you can make a big high-power ARM core, but because the ARM licensees are micro-architecturally biased to low-energy, no one does it yet - Apple might be able to, but there's a lot to figure out here. Since that link is an ARM-centric blog, I need to think a little more about where they (ARM) might have inserted bias, but as far as I know it is derived from a pretty fundamental paper from Wisconsin published ~5 years ago. It would also be nice to see those results on A57 or newer, since A15 is rather long in the tooth, but I bet it holds up.
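
     A toy sketch of that power-vs-energy distinction (made-up numbers, purely illustrative, not taken from the figures linked above): energy is power integrated over time, so a high-power core that finishes a fixed workload quickly can land at energy comparable to a low-power core that takes much longer.

     ```c
     /* Hypothetical numbers only -- the point is E = P * t, not any real chip. */
     #include <stdio.h>

     int main(void) {
         double work = 1e9;            /* instructions in the fixed workload */

         double big_power = 4.0;       /* watts, hypothetical "big" core     */
         double big_ips   = 4e9;       /* instructions per second            */

         double little_power = 0.6;    /* watts, hypothetical "little" core  */
         double little_ips   = 0.8e9;  /* instructions per second            */

         double big_energy    = big_power    * (work / big_ips);     /* 1.00 J */
         double little_energy = little_power * (work / little_ips);  /* 0.75 J */

         printf("big core:    %.2f J\n", big_energy);
         printf("little core: %.2f J\n", little_energy);
         return 0;
     }
     ```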

    I liked the comments from @jimv - but what I have read is that Intel's 10nm might be competitive with GF (i.e. Samsung's) 7nm in terms of actual achieved density - and maybe better on some process performance metrics. So even if INTC has bungled some things, they are pretty damn good at this process stuff, maybe good enough. I need to dig more on TSMC's equivalent in that node.

     Last comment, because I have to as a security dude - I think that, barring some major miracle, the out-of-order, superscalar architectures that have dominated the last 20-25 years are well and truly fucked in the face of Spectre - the Meltdown stuff can basically be patched, but I have serious doubts about Spectre. The memory hierarchy we are used to might have to change substantially (you can't have shared L2/L3 caches anymore), or we may have to migrate to massive-core-count, superscalar but non-speculative, ultra-high-clock-rate devices, which poses its own issues in terms of power/energy, or even feasibility.
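
     For anyone who hasn't looked at the papers, here's a rough sketch of the Spectre variant 1 (bounds-check bypass) pattern I'm describing - illustrative only, with the branch-predictor training and the cache-timing measurement left out:

     ```c
     /* The canonical bounds-check-bypass gadget (Spectre variant 1), simplified. */
     #include <stdint.h>
     #include <stddef.h>

     uint8_t array1[16];
     size_t  array1_size = 16;
     uint8_t array2[256 * 4096];   /* probe array: one cache line per possible byte value */

     void victim(size_t x) {
         if (x < array1_size) {
             /* Architecturally this body only runs for in-bounds x. Speculatively,
              * a mistrained predictor can run it with an out-of-bounds x, pulling a
              * line of array2 indexed by the out-of-bounds (secret) byte into the
              * cache. The attacker then recovers the byte by timing accesses to
              * array2 -- hence the concern about speculation and shared caches. */
             volatile uint8_t tmp = array2[array1[x] * 4096];
             (void)tmp;
         }
     }
     ```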

    I need to find a Sons of Yale Patt board to get more gossip on this stuff - it's fascinating times right now.
     
    #12 cgori, Apr 4, 2018
    Last edited: Apr 5, 2018
  13. jimv

    jimv Member SoSH Member

    Messages:
    972
    did we just become friends?

     Intel's 10nm is roughly equivalent to Samsung/TSMC 7nm in transistor density. Can't remember where I read it, but TSMC claims their 7nm rollout is on schedule for 2018. Meanwhile, Intel's 10nm rollout has been pushed back into 2018. Given recent track records, first to market is unclear.

     For years Intel tick-tocked their way into a process advantage. It seems to be gone now, and they need to get back up to speed or it will start impacting the bottom line.
     
  14. glasspusher

    glasspusher Member SoSH Member

    Messages:
    9,262
    @cgori - I love it when you talk dirty. Reminds me of when I had the time during the PPC/x86 wars.
     
  15. glasspusher

    glasspusher Member SoSH Member

    Messages:
    9,262
  16. Blacken

    Blacken Robespierre in a Cape SoSH Member

    Messages:
    11,983
    Sure, sure, @jimv gets a shout-out and I get NOTHIN'. :(

     Only insofar as execution resources (uarch points of contention) and pipeline length are concerned, as it has been explained to me. And both are pretty well-understood chip facilities--the hard part is that the interconnects, at practical and usable speeds, are more than just rectangle slinging.

    The ISA doesn't really matter so much; my intuition is that turning ARM into a desktop-scale pipelined processor with x86 levels of OOE will turn it into an x86-class power hog.
     
  17. cgori

    cgori Well-Known Member Silver Supporter SoSH Member

    Messages:
    2,102
    Fair enough @Blacken - your second post came in while I was typing my screed into the post editor and was pretty spot-on - I probably should have refreshed the thread before posting. I didn't really have much to say about your first post though.

     As far as what you say - you probably know this, but the pipeline depth is what drives clock frequency (roughly), with some scaling factors that are different between ARM and x86 due to the relative difficulty/ease of implementing the instruction decode. The additional execution units allow more parallelism (ILP = instruction-level parallelism, that's the term in the classic literature). However, Spectre basically puts all that at risk - the more speculation and OOE you do, the more difficult it is to protect sensitive data, so what happens next is going to be quite interesting to me.
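
     A quick illustration of the ILP point (a hypothetical example, not anyone's real code): the first loop is one serial dependency chain, so extra execution units buy nothing; the second keeps four independent chains in flight, which a wide out-of-order core can actually use.

     ```c
     /* chained_sum: each add must wait for the previous one (no ILP to exploit).
      * unrolled_sum: four independent accumulators give the core parallel work. */
     double chained_sum(const double *v, int n) {
         double s = 0.0;
         for (int i = 0; i < n; i++)
             s += v[i];
         return s;
     }

     double unrolled_sum(const double *v, int n) {
         double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
         int i;
         for (i = 0; i + 4 <= n; i += 4) {
             s0 += v[i];
             s1 += v[i + 1];
             s2 += v[i + 2];
             s3 += v[i + 3];
         }
         for (; i < n; i++)    /* remainder */
             s0 += v[i];
         return (s0 + s1) + (s2 + s3);
     }
     ```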

     And your intuition is correct, insofar as the research predicts (which actually would not have been the common wisdom ~15 years ago - that's why that Wisconsin paper was considered interesting). There is nothing inherent about ARM at the ISA level that prevents you from turning it into an OOE monster. Theoretically, an ultra-low-power x86 is possible too - I'm not sure anyone has really thought about the scaling factors and what happens when you detune an x86 design to such a degree; maybe the performance is non-linear, or it requires some minimal amount of microarchitectural investment to achieve decent results.

    I'm not sure which interconnects you are referring to - the on-die stuff is pretty well-understood now (crosstalk and routing challenges can be largely mitigated), and everyone has the same off-die problems for memory interconnect since the memory is a standard - unless you are talking about L3 which is more a packaging / pins problem. Intel does have some serious package-design whiz-kids in house for that stuff.

    I just read that gizmodo article - the linked twitter thread is a good read and roughly aligns with my intuition. Some of the other scenarios in the gizmodo column seem... dubious to me.
     
  18. glasspusher

    glasspusher Member SoSH Member

    Messages:
    9,262
     Stupid question for those of us who haven't been paying attention for the last 10 years or so: is Intel's x86 still considered a RISC chip with a CISC front end, or am I way off?
     
  19. cgori

    cgori Well-Known Member Silver Supporter SoSH Member

    Messages:
    2,102
    Yes. Everything “normal” is reduced to micro-ops (u-ops) and handled more or less like classical RISC. There is special case handling in microcode for some of the more esoteric x86 instructions.

    When stored, the x86 code is actually denser than ARM (or most RISC), so it needs less I-fetch bandwidth, but it needs more area spent on decoders because the instructions are harder to interpret (or even align, in some cases).
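
     A small illustration of that density/decoder tradeoff (the exact micro-op counts vary by microarchitecture, and the assembly shown is just typical compiler output, not anything specific to Apple or Intel):

     ```c
     /* A memory-operand read-modify-write. On x86 this is one (dense) instruction
      * that the front end cracks into separate load/add/store micro-ops; a classic
      * RISC/AArch64 encoding spells those out as three instructions: denser code,
      * harder decode.
      *
      *   C source:   *p += x;
      *   x86-64:     add  QWORD PTR [rdi], rsi          (one instruction)
      *   AArch64:    ldr  x2, [x0]
      *               add  x2, x2, x1
      *               str  x2, [x0]                      (three instructions)
      */
     void bump(long *p, long x) {
         *p += x;
     }
     ```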
     
  20. cgori

    cgori Well-Known Member Silver Supporter SoSH Member

    Messages:
    2,102
    I sorta hate Quora for most things but this answer I saw today roughly matches what you say (in terms of GF and Intel's marketing terminology, maybe not covering TSMC but I suspect things are similar there too):

    https://www.quora.com/Why-is-Intel-...h-GloFlo-is-readying-7nm-technology-for-Ryzen
     
  21. jimv

    jimv Member SoSH Member

    Messages:
    972
    Thanks for bumping the thread!

     Obscured by their fantastic 1st-quarter results, Intel announced that their 10nm process will be delayed until 2019. Yields are apparently still unacceptably low.

     Meanwhile, TSMC has started high-volume manufacturing (HVM) of its 7nm chips.
     
