
ELI5: What does iOS do differently to Android for iPhones to only need 1-2 GB of RAM?

Edit: Should have specified; only need 1-2 GB compared to flagship Android models, which usually have around 6 GB.

20.8k points · 6 months ago · Gilded ×4 · edited 6 months ago

Eyy I actually know the answer to this one (game & app developer with low-level expertise in power and memory management - lots of iOS and Android experience and knowledge).

Android was built to run Java applications across any processor - x86, ARM, MIPS - due to decisions made in the early days of Android's development. Android first did this via a virtual machine (Dalvik), which is like a virtual computer layer between the actual hardware and the software (Java software, in Android's case).

Lots of memory was needed to manage this virtual machine: it had to hold the Java byte-code, the translated processor machine-code, and the machinery for translating one into the other. These days Android uses a runtime called ART for interpreting (and compiling!) apps - which still needs to sit in a chunk of memory, but doesn't consume nearly as much RAM as the old Dalvik VM did.
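A toy model of what a runtime interpreter does (made-up opcodes, nothing like real Dalvik bytecode) shows why the byte-code, the interpreter loop, and any translated results all have to sit in memory at once:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal stack-based bytecode interpreter, in the spirit of Dalvik/ART.
// The opcodes are invented for illustration: PUSH <n>, ADD, MUL.
public class ToyInterpreter {
    static final int PUSH = 0, ADD = 1, MUL = 2;

    // The bytecode array must stay resident alongside the interpreter itself.
    static int run(int[] bytecode) {
        Deque<Integer> stack = new ArrayDeque<>();
        int pc = 0; // program counter into the bytecode
        while (pc < bytecode.length) {
            switch (bytecode[pc++]) {
                case PUSH: stack.push(bytecode[pc++]); break;
                case ADD:  stack.push(stack.pop() + stack.pop()); break;
                case MUL:  stack.push(stack.pop() * stack.pop()); break;
            }
        }
        return stack.pop();
    }

    public static void main(String[] args) {
        // (2 + 3) * 4
        int[] program = {PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL};
        System.out.println(run(program)); // prints 20
    }
}
```

A native app skips all of this: the program *is* machine code, so there is no second copy of the logic and no interpreter to keep resident.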

Android was also designed to be a multi-tasking platform with background services, so in the early days extra memory was needed for this (but it's less relevant now with iOS having background-tasks).

Android is also big on the garbage-collected memory model - where apps use all the RAM they want and the OS will later free unused memory at a convenient time (when the user isn't looking at the screen is the best time to do this!).
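A minimal sketch of that model, assuming a toy mark-and-sweep collector (real ART garbage collection is generational and concurrent - far more sophisticated than this):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Toy mark-and-sweep collector illustrating "allocate freely now,
// clean up later at a convenient time".
public class ToyGc {
    static class Obj {
        final String name;
        final List<Obj> refs = new ArrayList<>();
        boolean marked;
        Obj(String name) { this.name = name; }
    }

    final List<Obj> heap = new ArrayList<>();   // every allocated object
    final List<Obj> roots = new ArrayList<>();  // what the app can still reach

    Obj alloc(String name) { Obj o = new Obj(name); heap.add(o); return o; }

    // Mark everything reachable from the roots, then sweep the rest.
    int collect() {
        heap.forEach(o -> o.marked = false);
        Deque<Obj> work = new ArrayDeque<>(roots);
        while (!work.isEmpty()) {
            Obj o = work.pop();
            if (!o.marked) { o.marked = true; work.addAll(o.refs); }
        }
        int before = heap.size();
        heap.removeIf(o -> !o.marked);
        return before - heap.size();            // number of objects freed
    }

    public static void main(String[] args) {
        ToyGc gc = new ToyGc();
        Obj screen = gc.alloc("screen");
        Obj bitmap = gc.alloc("bitmap");
        screen.refs.add(bitmap);
        gc.roots.add(screen);
        gc.alloc("orphan");               // allocated, never referenced again
        System.out.println(gc.collect()); // frees only "orphan" -> prints 1
    }
}
```

The catch is visible even in the toy: "orphan" sits in the heap taking up memory until `collect()` runs, which is why a GC'd platform wants spare RAM headroom.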

iOS was designed to run Objective-C applications on known hardware, which is an ARM processor. Because Apple has full control of the hardware, they could make the decision to have native machine code (No virtual machine) run directly on the processor. Everything in iOS is lighter-weight in general due to this, so the memory requirements are much lower.

iOS originally didn't have background-tasks as we know them today, so in the early days it could get away with far less RAM than what Android needed. RAM is expensive, so Android devices struggled with not-enough-memory for quite a few years in the early days, with iOS devices happily using 256MB and Android devices struggling with 512MB.

In iOS the memory is managed by the app, rather than a garbage collector. In the old days developers would have to use alloc and dealloc to manage their memory themselves - but now we have automatic reference counting, so there is a mini garbage collection system happening for iOS apps, but it's on an app basis and it's very lightweight and only uses memory for as long as it is actually needed (and with Swift this is even more optimised).
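The reference-counting idea can be sketched like this - a hand-rolled toy in Java purely for illustration, since in real Objective-C/Swift the compiler inserts the retain/release calls for you:

```java
// Toy reference counting in the spirit of Objective-C/Swift ARC.
// In real ARC the compiler emits retain/release automatically;
// here we call them by hand to show the mechanics.
public class RefCounted {
    private int refCount = 1;          // creation implies one owner
    private boolean deallocated = false;

    void retain()  { refCount++; }

    void release() {
        if (--refCount == 0) dealloc();
    }

    private void dealloc() { deallocated = true; } // memory returned immediately

    boolean isDeallocated() { return deallocated; }

    public static void main(String[] args) {
        RefCounted image = new RefCounted();        // owner #1
        image.retain();                             // owner #2 (say, a cache)
        image.release();                            // cache is done with it
        System.out.println(image.isDeallocated());  // false: one owner left
        image.release();                            // last owner gone
        System.out.println(image.isDeallocated());  // true: freed right away
    }
}
```

The key contrast with a tracing GC: memory comes back the instant the last reference goes away, so the OS doesn't need to reserve headroom for garbage waiting on the next collection pass.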

EXTRA (for ages 5+): What does all this mean?

Android's original virtual machine, Dalvik, was built in an era when the industry did not know what CPU architecture would dominate the mobile world (or if one even would). Thus it was designed for X86, ARM and MIPS with room to add future architectures as needed.

The iPhone revolution resulted in the industry moving almost entirely to the ARM architecture, so Dalvik's compatibility benefits were somewhat lost. More so, Dalvik was quite battery-intensive - once upon a time Android devices had awful battery life (less than a day) while iOS devices could last a couple of days.

Android now uses a new Runtime called Android RunTime (ART). This new runtime is optimised to take advantage of the target processors as much as possible (X86, ARM, MIPS) - and it is a little harder to add new architectures.

ART does a lot differently to Dalvik. Under Dalvik, the system slowly translated an app's Java byte-code to machine code as you used it, so apps actually got faster the more you used them; eventually only the machine code needed to be kept in memory and the byte-code could be ignored, freeing up a lot of RAM. ART instead compiles the Java byte-code to raw machine code for your device during the app install (how could I forget this? Google made such a huge deal about it too!) - though these days it also uses a JIT interpreter similar to Dalvik's to avoid lengthy install/optimisation times.

In recent times, Android itself has become far more power-aware, and because it runs managed code on its runtime, Android can make power-efficiency decisions across all apps that iOS cannot (as easily). This has resulted in the bizarre situation most developers thought they'd never see: Android devices now tend to have longer battery life (a few days) than iOS devices - which now last less than a day.

The garbage-collected memory of Android and its heavy multi-tasking still consume a fair amount of memory, but these days both iOS and Android are very well optimised for their general usage. Both OSes tend to use as much memory as they can to make the device run as smoothly and as power-efficiently as possible.

Remember task managers on Android? They pretty much aren't needed any more, as the OS does a fantastic job on its own. Task killing in general is probably worse for your phone now, as it undoes a lot of the spin-up optimisation that is done on specific apps when they are sent to the background. iOS gained task killing for some unknown reason (probably iOS users demanding one be added because Android has one) - but both operating systems can do without this feature now. The feature is kept around because users would complain if these familiar features disappeared. I expect in future OS versions the task-killers won't actually do anything and will become a placebo - or they will only reset the app's navigation stack, rather than kill the task entirely.

Original Poster · 3.4k points · 6 months ago

Ayy, awesome response.

2.3k points · 6 months ago · Gilded ×1

Oyy, you got the vowel wrong ;)

  • Eyy I actually know the answer

  • Ayy, awesome response.

Original Poster · 2.8k points · 6 months ago · Gilded ×1 · edited 6 months ago

Uyy my bad

Edit: **(G)**ayy

Iyy forgive you

Ayy lmao

Yyy are we still doing this?

Oh that was brilliant


Only kindergarten graduates will get this reference!


23 points · 6 months ago

Ööööhh I don't know


Shh bby is ok


This was a really interesting read, thanks! Just a question about the "automated task killing" - does this mean I don't really need to swipe apps off the Android Overview* when I'm done with them? (*third button that isn't back or home)

497 points · 6 months ago · edited 6 months ago

does this mean I don't really need to swipe apps off the Android Overview

Exactly this, same deal for iOS as well.

Here's the problem; the operating system made swiping away apps incredibly fun and satisfying, so people do it quite mindlessly when they're just holding their phones. It took me a couple of months to get used to not swiping away apps, and I still do it every now and then, but I've got much better at managing this habit.

It's also become a weird cultural thing. Traditionally, seeing hundreds of web-browser tabs open horrifies people at the amount of inefficient processing and memory consumption that's going on for something no-one is using. People carried this belief over to smart-phone apps, which was kind of true in the old days of Android (but not any more). Even strangers in public seeing my phone screen comment at how many apps I have open - it's that much of a cultural aspect that strangers are willing to comment on someone's private phone-screen out in the open.

iOS does this also; but Android actually clears the tasks itself in the background, so the huge list of apps cleans itself up as the OS needs to (it is making these decisions based on what's best for your phone's battery and performance!).

291 points · 6 months ago

For me it's to control the work space. I rarely swipe away apps unless I know I won't use them in a long time. Helps me switch to other apps quicker

112 points · 6 months ago

This is why I do it. Also, I'm an uber driver so I don't need to be fumbling with anything other than Uber and Google Maps.

Here's a trick most Android users don't use: tap the 'show open apps' button twice to switch to the most recent app. Super helpful when switching between two apps.

Fantastic tip. Long-time Android user, had no idea of this. Would have been SUPER helpful this morning.

They only added this feature in Android 7.0, so you haven't been missing out for too terribly long.

On mine, you can also hold that button to enable 2 app split screen.

That's a native Android 7.0 feature, unlike back in the bad old days where some manufacturers had this and some didn't, and they all used different methods, meaning the extreme majority of apps couldn't be used in split screen, as devs needed to account for several systems.


Omggggggg... The alt-tab revolution... Please tell me this is a new feature since 7, as I will be very very sad if this existed for years...

Introduced it in 7

Ah phew.. being a bit of geek I now feel ok. Love the switch .. woohoo

Well depending on your phone you might have had it longer. Samsung has had it since the S5 I believe, but it wasn’t put into stock Android until 7.0


This has to be one of the most intuitive and useful shortcuts ever added to Android.

I've also been wishing for a while they would do the same thing with Chrome's tab button. I know you can swipe the URL bar, but double tap just feels better.


Great tip! I just played with it for a bit and figured out it's not just a double-click but you can click the square once and take as much time as you need then click a second time and your other app pops up. Nice.

This is why I love reddit. Following a thread can lead to unexpected discoveries!


My problem is an app occasionally glitches out, so I intentionally kill it and relaunch it. And this is across both iOS and Android.

Underrated reason, but totally why I do it too.

It’s also often a quick way to log out of an app (not all of them, but surprisingly many) if it buried the log out function behind too many menus.

6 points · 6 months ago

My Android phone is pretty old now, in Smartphone terms. I actually have to kill apps semi-regularly due to junking my memory and not thinking about it. The fact my 4yo phone still runs despite a cracked screen and numerous dings means I'll probably only replace it later this year, at which point any performance issues should vanish. Until then though, I'll probably need that kill function.


Yeah I'm like you except I do swipe away a lot more liberally. I treat each time I close out of everything as the end of a "session" and don't tend to navigate from there anyway

This is the main reason I do it, too. It's a pain in the arse having to scroll through 20 apps to get back to the one you need to open and god forbid if I actually have to re-click the app icon on my home screen!

On Android, if you double-click the multitask button (or whatever it's called - it's the button that's not home or back that shows all open apps) it immediately switches to your last used app. So if you're in Google Maps, then switch to Uber, double-clicking/tapping would switch back and forth between GMaps and Uber without even showing the list.

13 points · 6 months ago · edited 6 months ago

afaik this is only present on android 7.1 or 7.0 and up


It may be, I honestly have no idea. I just accidentally did it one day lol. But it has been very useful!


78 points · 6 months ago · edited 6 months ago

So when my galaxy S8 is stuttering and I clear all overview and it's not stuttering anymore is just... placebo and confirmation bias? Or does Samsung do something different (worse) in memory management?

The first few apps in the list may be running in the background, swiping away would speed things up in that case. Waiting a few minutes should work as well though

Oh, of course, it takes some time! That slipped my mind... I thought it would be instant, as in the system going "oh, stutters, better free some memory now"

Those stutters could be garbage collection in which case I'd file it under memory management.


I am wondering the same!

It usually clears everything when the phone isn't in use. If you notice the changes when clearing all, but you were using your phone the entire time, that's why the change is so drastic. But say you locked your phone and set it down for 5 minutes, it would likely free up some RAM automatically.

You more than likely have more apps than you realize running in the background taking up processor time. This isn't really an OS problem so much as an app problem. All the OS knows is that the app still says you are "using" it, maybe even specifically telling it to override anything that would "close" the app, and the OS will keep running it until told otherwise.

This is usually the result of poorly coded apps, or of an app keeping something running that it considers critical but that you, the user, may not even realize is important (and may have the option to disable), such as maintaining internet sessions (staying "logged on").

Not at all. Android differentiates between "foreground" apps and "running" apps, and apps aren't allowed to fool Android into believing they are actually in the foreground.

Likely is, but I can't say for sure because it could very well be something the S8 operating system is doing.

It could be a misbehaving app allocating memory too frequently and causing the garbage collector to fire constantly, which causes Android phones to chug, slow down and kills battery life - so perhaps you are indeed killing a misbehaving app.

But aside from badly made apps, you shouldn't kill tasks.

Kill Snapchat though - it's a huge performance hog and slows everything down on Android.

This. Snapchat is the only app I have to continually swipe away because it locks itself or my phone up.

I think in the early days of Snapchat on Android, it didn't use the camera directly, it took a screen capture of what was in the live viewfinder. Super inefficient, and made videos look like shit.


3 points · 6 months ago

The OS can automatically handle a shit ton of well-behaved apps.

However, a single manufacturer-blessed app with hooks deep into the customized OS can still fuck up the performance of the whole system. Killing it can thus cure stuttering.


Doesn't swiping away the app on iOS kill its background tasks? I have a Waze-like app which will give warnings when it's minimized, but not when it's closed.

It’s my understanding GPS, music, and a few other apps operate differently in this scenario. They have an additional level of integration for running in the background.

Yes - but if you swipe the app away while it's playing audio or tracking your location, background or not, the app is dead. Music stops playing, location tracking stops, no more pushes received.

This applies even to Apple's apps. If you have Apple Music playing in the background and then decide to kill the app, your stream stops playing.

8 points · 6 months ago

Push notifications still work when the app isn’t running

Push notifications on iOS are done by the app developer going through Apple's servers, not sending something to an app running in the background.


I’ve had to kill frozen apps on iOS fairly frequently (once a month) so aside from the navigation aspect I hope they don’t neuter the task manager anytime soon.


One legitimate use case for traditional task managers is troubleshooting. I recently had an issue where something was eating up all of my CPU resources (it even made my phone uncomfortably hot to the touch). I thought it would be no problem to use a task manager and see what was using the CPU...boy was I wrong. Because of recent changes to Android you basically cannot do this anymore.

After messing around with like 3 or 4 different apps and thinking something was seriously wrong or I had malware, I ended up having to enable all the debugging stuff and use adb so I could use the top command. Then I could finally, clearly see the CPU usage of different processes. Ridiculous lol.

Anyway, this app was doing this completely in the background. I never launched it and it would do this. If task managers could still be used, this would have been a very quick and easy thing to resolve. It isn't good for the UX to assume things like "users don't need to manage processes or see CPU usage anymore because the system does it on its own!"...because sometimes it doesn't work as intended and you just need to kill a problematic process.

By the way, it ended up being ES file explorer...I would recommend not using that app anymore if anyone sees this and still has it installed haha

7 points · 6 months ago

Hogwash. It is a perfect system. /s

(Yes, I have done the same steps as you to fix it when it breaks. We need rid of this concept that less control by owners/buyers/users is inherently better.)

I used to really like ES File Explorer, and then it started doing all kinds of shady things.


It probably depends on the launcher or the OEM implementation, but my Android phone never clears up the task list, so the reason I swipe them away or tap on clear all (added back in Android 8) is because there are hundreds of pages of apps and it's just useless to scroll through them.

18 points · 6 months ago · edited 6 months ago

Just to be clear: Is it more of a "You don't need to swipe away apps" or "It's better to not swipe away apps"?

I've been on Android since the early days of task killers, so I oftentimes swipe away apps. Hell, right now I've only got Relay for Reddit open, and I'll most likely swipe this away once I switch to another app, since it starts up fast anyway and I don't need it to be on any specific screen. Is it better for current app performance to keep every other app in the background, does it just not matter whether I do or don't, or is it better for it to be the only app running?

It is better to not swipe away apps. By swiping them away, you're undoing the memory and state optimisations that were applied to the app, so when you launch it the app needs to do a cold-boot and rebuild all that memory and reload all the stuff it previously had cached.

That's more processor, RAM and flash usage (in some cases with rendering, more GPU usage) - that's more battery usage.

If you don't need to kill it, don't kill it. Definitely don't use a 3rd-party task killer, the OS has a task killer, the 3rd-party one just amplifies the problem even further.

A lot of viruses on Android are distributed via 3rd party "task killer" apps so I tell most people to never install them. It's amazing just how many regular folk™ install 3rd party task killers on their Android devices!


3 points · 6 months ago

Android actually clears the tasks itself in the background, so the huge list of apps cleans itself up as the OS needs to (it is making these decisions based on what's best for your phone's battery and performance!).

Yeah, I've noticed that... and I'd wish it'd stop. I mean okay, maybe kill the app if you think that's best, but don't remove it from the list without my consent! If I'm done using it, I'll remove it myself, thank you very much. I hate having to go through my home screen to reopen something I'm using regularly just because my phone thinks I took a bit too long since I last used that app. :/


This is interesting. I just got an iPhone X and to close apps you swipe up half way to get the app switcher, but to actually close the apps you have to long press for 1-2 seconds then you can swipe up to close them.

I wonder if Apple added this purposely to remove the 'fun' of closing apps since it isn't needed.

Oho I did not know this! Yes I very much expect this to be the case! If you absolutely need to kill a misbehaving app then you'll purposefully do it without trashing all the OS optimisation that had been going on. Bravo Apple.


3 points · 6 months ago · edited 6 months ago

What version of Android started this? I'm stuck on 6.0.1 for now, and I don't think it's doing this - I have the same apps I opened 2 days ago still in the queue, still sucking up battery, too.

Edit: Also - you mention the task list is going to be just placebo some day - already is for some apps like Spotify.



155 points · 6 months ago

You need to kill a task if an app freezes or bugs in some cases. It’s very useful :)

63 points · 6 months ago

Like when Netflix forgets that there's a Chromecast on the network O_o



Is that an issue with Netflix or the Chromecast?



I do admit, they are still useful for these circumstances. Personally I see these situations happen less and less frequently on my devices, but I imagine there's people out there that depend on buggy apps that cause these problems.

The issue is that everyone is in the habit of killing every single app on their device whenever they feel like it. I wrote about this here;

I see the issue, and I don't have the habit of killing apps, but we definitely need the ability to kill an app. Having to restart my phone whenever an app freezes or hits some weird layout bug would be way too inconvenient. This also happens to mainstream apps on iOS, and it will keep happening with new versions of iOS.

Yeah idk - I have to kill frozen apps several times a day and I wouldn’t say that I’m relying on obscure apps or anything. Being able to force quit an app seems pretty critical to me even if it’s irrelevant for memory and power management. Even if I only had to do it once a month though, when you need it, it’s a hell of a lot more convenient than having to restart your entire device.

That said, it's super interesting to know that doing so will harm memory management and battery life. Your explanation of how this stuff works was really eye-opening - thanks!!

Side question: do you foresee Android apps starting to pull ahead of iOS in terms of performance? My experience is that for a long time, apps that ran great on iOS were typically sluggish and buggy on Android. Will that change over time with ART, or is that not related to the stuff you're talking about?


but I imagine there's people out there that depend on buggy apps that cause these problems.

The YouTube app on iOS used to bug out if, while you were writing a comment, you Home-buttoned out of the app and opened a video from Safari. You couldn't post your comment because it was a different video now, and you couldn't close the "comment dialog box" properly.

When big developers are making such mistakes ...


On Android it doesn't seem like apps are killed any more... If something freezes or bugs out I often have to go into the Apps section of Settings and force-stop the app.


This is the number one reason the feature should stay.

Or if something is using a lot of cpu in the background. I used to wake up in the middle of the night to a blistering hot phone because some stupid app wouldn't close up, thankfully that was the snapdragon 800 days and now the thermals are a lot better even with intense apps.


27 points · 6 months ago

I would guess that the task killer is still there in case the app stops running properly. You can just kill it and restart the app without restarting the whole phone.

Yes, task killers are still 100% needed. Many times an app will malfunction without alerting the OS because it isn't using too much CPU or RAM. Apps glitch all the time in other app-specific ways that will never trigger the OS to kill it.


iOS gained task killing for some unknown reason (probably iOS users demanding one be added because Android has one) - but both operating systems can do without this feature now.

I'm not so sure about that. I'm an iOS user and I regularly have to kill Pandora and sometimes Spotify for them to work properly. Something about the way musical apps integrate with whatever is left of the iPod functionality I guess, where the music app fails to "grab ahold" of the master play/pause controls (since only one app at a time gets to use them) and none of the play/pause/ff buttons in the app work at all.

If I force close the app and restart it, it works fine. So as annoying as that kind of thing is, if I didn't have that functionality I'd really be screwed.

I'm an iOS user and I regularly have to kill Pandora and sometimes Spotify for them to work properly.

I have an S8 and still have to do this.

Damn this happens to me all the time, lock Screen controls not working etc.


ART does a lot differently to Dalvik; it stores the translated Java byte-code as raw machine-code binary for your device. This means apps actually get faster the more you use them as the system slowly translates the app to machine-code.

It was Dalvik that did that, tracing just-in-time compilation. ART compiles entire apps into native machine code on installation.

They've gone full circle. Android 7.0 added JIT to complement the ahead of time compilation.


Also a mobile game dev here. Another interesting point is that iOS doesn't page memory out to storage (there's no swap). The moment an application asks for more memory than the device can provide, the application is immediately killed (similar to what I believe happens on gaming consoles), and if this happens during app review, your application will be rejected. This forces developers to be very careful and efficient with their memory use.
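That policy can be modelled as a hard memory budget with no swap fallback (a toy sketch, not Apple's actual low-memory mechanism, which is commonly known as Jetsam):

```java
// Toy model of iOS's "no swap" policy: exceed the budget and the process
// is killed rather than paged out to storage. (Illustration only; the real
// iOS mechanism, Jetsam, is far more nuanced.)
public class MemoryBudget {
    private final long limitBytes;
    private long usedBytes = 0;

    MemoryBudget(long limitBytes) { this.limitBytes = limitBytes; }

    // Returns true if the allocation fits; false means the app would be
    // killed outright - there is no "page it to disk" escape hatch.
    boolean request(long bytes) {
        if (usedBytes + bytes > limitBytes) return false;
        usedBytes += bytes;
        return true;
    }

    public static void main(String[] args) {
        MemoryBudget app = new MemoryBudget(100);
        System.out.println(app.request(80)); // true: within budget
        System.out.println(app.request(40)); // false: over budget -> killed
    }
}
```

On a desktop OS the second request would just spill to swap and slow things down; the all-or-nothing rule is what forces iOS developers to stay frugal.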

57 points · 6 months ago

Nice writeup. I still find task killing in iOS useful. Sometimes you will end up in a place in an app where getting back to the starting point is cumbersome, or the app will reach a state where it's not usable and starting it anew is the best thing you can do. This especially goes for poorly written apps that get stuck on network calls and won't proceed or go back until the server responds.

Another thing I'd add is the more stringent way in which iOS handles background services. It is (to the best of my knowledge, and after doing this for the past 4 years I should know) still impossible to create background services in iOS that run full-time, unless they are media-playing or Newsstand apps. So basically your ordinary iOS app will get some slice of time to perform its background work and then be put to sleep, while with Android one can (and many apps do) create background services that do something all the time and you end up with more strain on both memory and CPU.

while with Android one can (and many apps do) create background services that do something all the time and you end up with more strain on both memory and CPU

Right now Google is at war with this on Android, and they're undoing some of these decisions to pull it in line with iOS behaviour. This is a tiny part of the larger "the virtual machine lets Android optimise across all apps easily" point that I mentioned.

Originally with Android you could make a task run every N seconds (and N could be 1), but then it became every N minutes, and now it's "you can request it every N minutes, but we can't guarantee it will run that often - we'll just tell you when it runs".
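That policy shift can be sketched as the OS enforcing a floor on requested intervals (a toy model; for reference, Android's real JobScheduler/WorkManager APIs enforce a 15-minute minimum interval for periodic work):

```java
// Toy model of the scheduling policy change: apps *request* an interval,
// and the OS grants the requested value or its floor, whichever is larger.
// (Real Android exposes this via JobScheduler/WorkManager, where periodic
// jobs have a 15-minute minimum; exact run times are still the OS's call.)
public class ToyScheduler {
    static final long MIN_INTERVAL_MINUTES = 15;

    // The app asks; the OS decides.
    static long grantInterval(long requestedMinutes) {
        return Math.max(requestedMinutes, MIN_INTERVAL_MINUTES);
    }

    public static void main(String[] args) {
        System.out.println(grantInterval(1));  // app asks for 1 min -> gets 15
        System.out.println(grantInterval(60)); // 60 min is allowed -> gets 60
    }
}
```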

This is a nightmare for people trying to develop unique, bleeding-edge apps with unusual background behaviour, but it's been very beneficial for the devices themselves. I've seen loads of complaints from developers about losing this freedom; my studio actually had to cancel an entire project due to this Android behaviour.

It would be awesome if background tasks would become a permission, like access to files or camera.

For advanced users, yes. For the least savvy 40% or so it would be a disaster.

I'd say it's at least 85% of people, the general population (including your mother), do not know how to make decisions regarding electronics.


Don't know why we aren't there yet tbh.


4 points · 6 months ago

Then people agree to let all the random shitty games they download have full background permission (probably to track them), the phone runs like shit, and they blame google, the phone manufacturer, or even the carrier.


Can't you still have the same functionality with a foreground service, just that you have to notify the user with a notification while it is running?

This is indeed the case, but when you have a foreground notification users tend to look at your app angrily and decide it's using too much battery doing apparently nothing, even if you have written the most power efficient, highly optimised code known to mankind that is better than the alternative system.


19 points · 6 months ago

*Woosh* Hear that? That was the sound of all of that going over my head.

It's ok. Seems like the short version is "Android makes its operating system work on any computer, while Apple only needs it to work on their own special computer. This means Android needs to be prepared for anything, and that needs more space."

Can someone ELI5 this response to me?

236 points · 6 months ago · edited 6 months ago

Apple can only drive a Prius, while Android can drive anything with wheels. That means Apple gets really good at driving its Prius. It gets better mileage out of a Prius than Android ever could and almost never gets in an accident.

Android isn't particularly good at driving any specific vehicle. But Android could drive an 18-wheeler or a moped or an Abrams, depending on what it needs to do. And since it doesn't specialise in any one of those vehicles, it's more likely to get into an accident.

However, everyone decided they wanted Android to drive a Prius. It's been driving the Prius a lot. It's almost as good at driving a Prius as Apple. But, it also keeps in mind that it might want to drive something else. So, even while it's almost exclusively driving its Prius, in the back of its mind, it remembers how to drive an 18 wheeler and a moped.

edit: it's/its.

Amazing analogy

Holy shit thank you so much. Now I get it.

The real ELI5 is always on the comments.


It's magic.

iOS is a star shaped object meant for a star-shaped hole so it's small and exact. Android is designed to fit as many holes as possible so it's a big lump of Play-Doh. They used to be anyway. Back in the beginnings.


Huh, any idea why the original VM was named Dalvik? Dalvik is a little town on the north coast of Iceland. I went whale watching out of it a couple months ago. Crazy to see it pop up in a discussion of phone memory usage.

Dalvik is open-source software, originally written by Dan Bornstein, who named it after the fishing village of Dalvík in Eyjafjörður, Iceland.

So it was definitely named after the town, interesting. Unfortunately the dev doesn't have his own wikipedia page so I'm not sure if he was born there or something? Name doesn't sound Icelandic. Just DM'ed him on twitter I hope he responds.

3 points · 6 months ago

No, it was a prank.


This was the droid we were looking for!

8 points · 6 months ago

iOS gained task killing for some unknown reason (probably iOS users demanding one be added because Android has one)

Every so often an app will shit itself, and the only way to get it unfucked is to kill it and reopen it.

Today's the day I realize I'm not as smart as a five year old.

This was like an ELI-have-degree-in-software-engineering

It's funny, in 2006/7 it made a lot of sense for Apple to go with a close-to-the-metal approach, and it gave them a significant advantage for years in device fluidity, battery life, and resource allocation. Now, Google's decision to go with a JVM-style runtime looks like the better call. It's easier to develop for (though there are a gazillion badass Objective-C guys out there at this point), the price of hardware has come down so much that, aside from battery life, you don't have to watch system resources to the same degree you did back then, and processors have gotten so fast that a modern Android phone (at least the Pixels) is every bit as responsive in normal use as an iPhone.

It's been really interesting watching two fundamentally different technology approaches evolve into near feature and performance parity over the years.

It definitely surprised me to see things play out this way. I always thought "to the metal" was the best thing in any situation, but when it comes to an entire platform you have no idea what app developers are doing; they're likely making the easy choices, not the smart choices. So being able to rein them all in and whip their performance into shape with an incredibly well-designed virtual machine does appear to be the best choice, and I don't think anyone would have been able to predict this.

I still won't call it the "better call" by my own personal metrics, but "appears to be the best choice" is an alright description. There are still lots of issues Android needs to resolve, same deal with iOS, so the tables could turn once again.

Yeah "better call" probably isn't the best way to put it. More so that if I were inventing a mobile platform from scratch in 2018, it'd definitely be using a managed language. The hardware has just gotten so dang powerful that our limiting reactant really is battery life. That'd be especially so if you lived in a world that didn't have legions of Objective-C devs out there like 2007. Anywho, they're both marvels. It'll be interesting to see where they're at in five years.

Awesome ELI5 btw.

If I were designing from scratch in 2018, I don't think I would use a managed language. An operating system can expose the same power-management calls that the Android VM does; they just need to be added early on, or it becomes difficult to retrofit them later (the situation iOS is currently in).

The OS APIs could be designed from day 1 to be power efficient, and the OS could still treat a compiled language as if it were in a strict "don't use too much power" sandbox. However, this could still prove to be the wrong choice 4 years down the line.

4 more replies

Comment deleted · 6 months ago (1 child)

1 more reply

5 more replies

14 points · 6 months ago

though there are a gazillion badass Objective-C guys out there at this point


  • the new-hotness of Swift is a big selling point to the so-called "kids" who like being early

  • iOS is still the platform you work on first if your goal is to make money

have gotten so fast a modern Android phone (at least the pixels) is every bit as responsive in normal use as an iPhone

This is absolutely true. I've been an iPhone user (and Apple fan for much longer) because smooth UX has (historically) been priority #1 for Apple, and that spoke to me as a developer too.

I've been an iOS and Android dev for 5+ years, and the S8 and Note 8 I've used for testing were the first Android devices to make me consider switching platforms. They are very nice. But then I download some popular Android apps and get kinda grossed out.

The Android dev community unfortunately does not value UX and smoothness quite as much as I'd like it to.

I'm with you on all points. Android is still lagging behind a bit when it comes to general app quality and the ease of creating a pretty UI. Material was a nice step forward, and I'd expect them to focus on improving this in the next couple years.

I'm rocking an iPad and a Pixel 2, and I probably slightly prefer Android to iOS, but they're both great. The big selling point to me with Android is that you can tinker with the OS. It's usually temporary, and there's lots of janky shit out there, but I have fun throwing random roms on my phone and seeing how they run.

4 more replies

11 more replies

Thank you so much, awesome answer!

In the old days developers would have to use alloc and dealloc to manage their memory themselves


3 more replies

Can someone (this sub's name) this reply?

I remember toggling ART on when it was a developer option. It was instant gratification.

414 more replies

785 points · 6 months ago

There are several reasons relating to the varying use cases as others have described, but the main reason is this: Android uses a form of automatic memory management that uses garbage collection, while iOS uses a more manual form of memory management. Garbage collection works better if there is always a good chunk of memory free, so the garbage collector doesn't have to run so often.

The reason to use garbage collection is that it saves the programmer from having to manage memory manually. Memory management is tricky, and if you make a mistake, you might begin to leak memory (memory consumption goes up slowly) or create a security hole. Recent versions of iOS use something called automatic reference counting (ARC), which means that the compiler (technically the pre-processor) will figure out the correct memory management automatically. This means that the workload of managing memory moves from the phone to the computer of the developer that compiles the software.

The reason for this difference is historical. Android uses the Dalvik runtime, which borrows from Java, while iOS uses Objective-C (and now Swift), which had a simple manual memory-management system (manual reference counting). Apple used Objective-C because that is what they use in their own OS; Google used a Java analogue because it was a modern, safe language that was widely known by the time they launched Android, and so was easy for developers to learn.

Android uses the Dalvik runtime,

I thought they switched to ART. Or is that basically the same thing?

They did indeed switch to ART back in 5.0 IIRC

1 more reply

Yes, but ART is also basically a Java VM, and so it handles garbage collection in a similar way. The vast majority of Android apps did not need to be recompiled to work on ART.

41 points · 6 months ago

IIRC ART was much more developed, stable and optimized compared to Dalvik

It's the official "performance boosting thing" they developed to close this issue on the AOSP issue tracker:

1 more reply

I know you said "basically", but I want to clarify that ART is not a VM; it compiles apps into native machine code on installation. It still has garbage collection, but it's much better than Dalvik's.

Yes, they are all Java-based.

1 more reply

26 points · 6 months ago · edited 6 months ago

ART is the same in the sense that it's a virtual machine with garbage collection, however it's far better than Dalvik as it's more optimised for mobile devices.

ART slowly compiles the Java byte-code into processor machine-code as features of an app are used (this is Dalvik, my bad). ART compiles the Java byte-code and stores the translated binary, so the next time you run the app the high-performance, memory-optimised, power-optimised machine-code version will be run rather than the original Java byte-code. This makes ART a bit more difficult to port to future architectures compared to Dalvik, but the mobile world has settled on ARM for the time being so it's of little concern.

Dalvik collects garbage when an app is using too much memory (hits a ceiling, garbage is collected and the ceiling could be raised). ART has a smarter garbage collector, which will garbage collect memory when a convenient time arises. What is that convenient time? Maybe when your phone screen turns off, or when you navigate away from the app and are unlikely to return to it for a few minutes, maybe it's before VSYNC when there's still time to do processing, or perhaps it's never because the app keeps on re-allocating similar objects so ART can reuse blocks of memory.

The ideal time to garbage collect is when the user isn't looking at the device - so in the future the "convenient time" could be whenever you blink your eyes!

ART slowly compiles the Java byte-code in processor machine-code as features of an app are used

That was Dalvik, it's called tracing just-in-time compilation (JIT), ART uses ahead-of-time compilation (AOT) to compile entire apps to native machine code.

Oh yes, thank you for the correction you're completely right. I'll update the post.

Dex2Oat lol

1 more reply

1 more reply


270 points · 6 months ago · edited 6 months ago

let's say you're a chef and you put stuff you need to cook in the fridge.
In android, you hired someone to clear out the fridge for you. But because the hired person doesn't know exactly when you might still need the food in the fridge, they clean it less often. To accommodate for this, you want a bigger fridge.
In iOS, you clean the fridge yourself, every time you finish cooking. Since your fridge really only ever has the amount of food for 1 cooked meal, it can be pretty small.
Now replace you with the CPU / OS, fridge with RAM and food / meal as memory needed by the software
edit: for those who are interested in an even deeper understanding, the above is not 100% correct. If you don't care / can't understand beyond ^, stop here. The hired person is garbage collection, but that's actually part of the OS. The programmers are the ones that have to tell the OS to clear out the fridge on iOS, whereas the OS takes care of it for you on Android, but you hit that problem where it isn't the most efficient. On garbage collectors I've worked with, they usually clean things out when all references to something are gone. Basically, when it's no longer used by any program, it's removed.

43 points · 6 months ago

Wow this is an awesome analogy.

27 points · 6 months ago

Ok now Explain like I’m a programmer with decades of experience.

Java is being Java as usual.

Or as it's normally written, $(*%ing JAVA!!!!!

2 more replies

1 more reply

Ram no need no more? Garbage man free ram!

9 points · 6 months ago

Why say lot word when few word do trick?

4 more replies

*slap* You should know this stuff already.

8 points · 6 months ago

Android uses the JVM, iOS uses languages with small runtimes and reference counting. Neither have the balls for manual memory management.

Edit: I guess Android doesn't use the JVM but their own virtual machine.

No human being should write high-end apps in a language that requires manual memory management.

2 more replies

1 more reply

That was beautiful.

6 points · 6 months ago · edited 6 months ago

If you want to keep going with the fridge analogy (which is great, btw), we can explain a few different types of garbage collection:

Stop-and-copy: you have two refrigerators (left and right). Every so often, your cleaning person has all the cooks stop what they're doing, looks to see what they still need, then puts that in the same spot in the other fridge, tells everyone where the new stuff is, then cleans out the old fridge while the cooks get back to work. Smarter cleaners can do this when the head chef stops the kitchen between shifts.

Ref counting: you have one refrigerator. Every time a cook starts using something, they put a sticky note on the batch in the fridge. When they're done they pull the sticky note. The cleaner watches for stuff that doesn't have a sticky note anymore. This seems simple, but it means every cook is spending a little time on a lot of sticky notes when they could be cooking. Sure does make the cleaner's job easy, though.

Generational: you have six refrigerators. Cooks put new stuff in the rightmost fridge. When a fridge starts getting full, the cleaner has everyone stop what they're doing and checks to see if anyone is using stuff in that fridge. Everything that's being used from that fridge gets moved one fridge to the left, and the fridge is cleaned out very fast. Once stuff gets to the leftmost fridge, it is permanent and is probably never checked again. The good news here is that since most stuff that's put in the first fridge isn't used for very long, and anything that makes it to at least the third fridge is very unlikely to be garbage, you actually don't spend much time checking to see if anyone is still using stuff.

There are fancier versions of this, like a cleaner that may go get you a bigger fridge if it notices you're running out of room or if you're having to collect garbage frequently. There are really fancy versions of this that don't require you to stop the kitchen.

3 points · 6 months ago

The programmers are the ones that have to tell the OS to clear out the fridge on iOS, whereas the OS takes care of it for you on Android,

Ah. The chef-team (application) vs the generic kitchen cleaning staff (operating system).

Nice analogy. Very well done...

4 more replies

So, if used memory is garbage, then the Android way of handling used memory that is no longer needed is just like the garbage man. Periodically, the garbage man comes around and collects all the data stored in memory that's no longer needed. You need a large enough dumpster to hold the data until the garbage man comes.

Apple's way of handling this is more like if you took your own garbage to the dump when needed. You can use a smaller dumpster since you don't have to wait for the garbage man to come, but it's more work and planning required so the dumpster doesn't overflow.

Sure, it's not 100% accurate, but hopefully that helps.

4 more replies

1 more reply

56 points · 6 months ago

Objective-C added automatic reference counting long ago. With it, you don't need a garbage collector to run periodically. Instead, memory is released as soon as the last reference to it is deleted.

ARC is great. Basically as easy as garbage collection from a programmer’s perspective. And basically as efficient as manual malloc/free during runtime.

1 more reply

8 points · 6 months ago

Built-in shared_ptrs?

Yes, and the compiler automatically inserts them. Basically you write your code without worrying about lifetimes (for the most part) and the preprocessor/compiler analyzes the code and wraps any variables used by multiple entities in a shared_ptr-like wrapper. Most other things get wrapped in a unique_ptr-like wrapper if I understand correctly.

Can you end up in situations where you’re dereferencing a nullptr (or whatever the analogue is in iOS)? Or is the preprocessor good enough to avoid that class of issues entirely?

I've never encountered it but it's still possible; just unlikely. It's a combination of the preprocessor and Apple's APIs that work together to avoid it. A poorly-written function might be able to fool the preprocessor and allow for a nullptr dereference. In Objective-C at least a nullptr is the same as it is in C and C++; all three languages treat raw pointers the same, it's just that in Objective-C you're rarely working with raw pointers. I'm not sure how it works in Swift but it's probably similar.

In Objective-C at least a nullptr is the same as it is in C and C++;

Technically the same, though sending messages to nil is a no-op in Obj-C, versus undefined behavior (usually a crash) when you call through a null pointer in C or C++.

1 more reply

18 more replies

This is basically wrong. You’re talking about the garbage collector in the JVM versus Objective-C’s manual or automatic retain/release. Those are important when you are examining steady state and peak memory usages of individual apps and daemons on each system. But they do not really come into play when it comes to how the operating system manages resources at a macro level.

Both kernels are, for example, written in C. Many of the daemons in each operating system are written in C. The JVM and ObjC simply don't matter to those.

Android requires more memory for a few reasons:

  1. It has to bring the JVM into memory for apps. That is a very large runtime when compared to ObjC or Swift.

  2. Android runs on more hardware configurations, and so it can’t make assumptions about hardware invariants that iOS may be able to.

  3. Vendors may have their own Android forks that are loaded up with additional features or software, contributing to bloat over a baseline “pure” Android.

  4. iOS has a pretty aggressive amount of OS-level memory management features, including the ability to kill almost any daemon when it’s gone idle to reclaim resources, VM compression, complete management of third-party app lifecycle, etc. Also it doesn’t have anonymous memory swap, which is a forcing function for the OS to live within a certain budget. (Dunno if this is true of Android.) These contribute to iOS having a low steady state memory requirement relative to the functionality it implements.

5 more replies

33 more replies

743 points · 6 months agoGilded1 · edited 6 months ago

I believe the true answer to this question is fascinating, and that it's actually just one piece of a bigger story (playing out right now, one that started in 1993), and that all of us are about to witness a transformation in the PC space that a lot of people won't see coming.

First, let's focus on why the history of Apple as a company put them in the position they're in today, where they build everything in-house and it seems to work so well for them. Apple has the upper hand here when it comes to optimizing the software and hardware, in a way that Google never can, because Apple is calling all the shots on OS, CPU design, and device design. Google doesn't have that luxury.

Google builds one piece of the handset (the OS) and has to make it work in tandem with many other companies like Samsung, Qualcomm and Intel (for the radio). This is a very difficult task and is why OEMs like Samsung often have to contribute a lot on the software side when building something like the S8.

The reason Apple is in this position (where it can control the entire hardware/software creation of the device) is twofold. On the one hand Steve Jobs always wanted to control the software and hardware aspects of the Macintosh because he saw that it made it easier to provide users with better UX this way, and also the more control he could exert over the users the better.

The other fascinating and often overlooked but incredibly important reason why Apple can do what they do with the iPhone has to do with IBM, PowerPC and a little-known company called P.A. Semi. You see, up until around 2006 Apple used PowerPC CPUs (by IBM) instead of x86 (by Intel). It is believed by most that Apple switched to Intel because Intel made more powerful chips that consumed less power. This isn't actually completely true. IBM designed and made the PowerPC chips, and by the time 2006 rolled around IBM had sold off ThinkPad, OS/2 had failed, and they were almost fully out of the consumer space. IBM was completely focused on making large, power-hungry server-class CPUs, and here was Apple demanding small, power-efficient PowerPC CPUs. IBM had no incentive to make such a CPU, and it got so bad with Apple waiting on IBM that they ended up skipping an entire generation of PowerBooks (the G5).

Enter P.A. Semi. A "startup for CPU design" if there ever was one. This team seemingly came out of nowhere and created a series of chips called PWRficient. As IBM dragged its feet, this startup took the PowerPC specification and designed a beautifully fast, small and energy-efficient PowerPC chip. In many cases it was far better than what Intel had going, and it was successful to the point where the US military still uses these chips in some places today. Anyway, their PowerPC processor was exactly what Apple was looking for, arriving at a time when IBM had basically abandoned them, and Apple NEEDED this badly.

So what did Apple do? They bought P.A. Semi. They bought the company. At this point, if you're still reading my giant block of text, you're probably wondering: if Apple bought the company that could solve their PowerPC problem, why did they still switch to Intel? And that's where the story goes from just interesting to fascinating: Apple immediately put the team they had just bought in charge of creating the CPUs for the iPhone. See, people always ask when Apple is going to abandon the Mac. Well, the real answer is that they abandoned the Mac when they switched to Intel, because that was the exact moment they not only gave up but abandoned a perfect solution to the Mac's CPU problem, and instead re-purposed that solution to make sure they would never have a CPU problem with the iPhone.

So what lessons did Apple learn here? That if a critical component of your device (i.e. the CPU) is dependent on another company, it can throw your entire timeline off track and cost you millions in lost revenue (the PowerBook G5 that never happened). Apple was smart enough to know that if this was a problem for the Mac, it could also be a problem for the iPhone. When a solution arrived for the Mac, they applied it to the iPhone instead, to make sure there would never be a problem there.

And that team from P.A. Semi has designed Apple's ARM CPUs for the iPhone ever since, and they're at least two generations ahead of the chips Android devices generally use, because they were first to market with a 64-bit architecture and first to allow the use of "big" and "little" cores simultaneously.

And as for Mac users? Well, the switch to Intel allowed the Mac to keep living, but MacOS now comes second to iOS development, and new Mac hardware is quite rare. Apple has announced plans for app development that is cross-compatible with iOS and MacOS. Apple has started shipping new Macs with a second ARM CPU. The iPad Pro continues to gain MacOS-like features such as the dock, a file manager, and multi-window/split support. All signs point to MacOS being on life support. When Steve Jobs introduced MacOS he said it was the OS we would all be using for the next 20 years, and guess what? Time's almost up.

And the irony of it all is that history has now repeated itself: Apple has the same problem they had with IBM, but now with Intel. Intel is failing to produce chips that are small enough and run cool enough. Apple will have to redesign the internals of the MacBook to support 8th-gen chips due to changes Intel made. And then there's the Spectre/Meltdown bug. The Mac is yet again dependent on a CPU manufacturer in a way that harms Apple.

So yes, the iPhone is something to marvel at in terms of its performance. You might be thinking Android is the big loser here, but really it's the Mac and Intel. I believe we are at the cusp of an event that will make the IBM/PowerPC drama seem small. Five years from now we likely won't even recognize what MacOS and Windows are anymore, and Intel will either exit the portable consumer space, or they will have to go through an entire micro-architectural re-design and rescue themselves as they did in '93 with the Pentium.

In '93 Intel almost got destroyed because their CISC chips weren't as powerful as RISC chips such as PowerPC. Intel then released the Pentium, which is essentially a RISC chip (think PowerPC or ARM) with a heavy-duty translation layer bolted on top to support the CISC instructions that every Windows PC required. This rescued Intel up until right now, but the industry has evolved and Intel's '93 "fix" is now their biggest problem, for two reasons: 1) they physically can't compete on speed/heat/size with ARM, because they have to drag along this CISC translation layer that ARM doesn't need; and 2) Windows is about to introduce native ARM support with a software translation layer. Remember, Microsoft has the same CPU dependency problem that Apple has, and Microsoft's software solution allows them to throw away Intel for something better. Users won't notice the switch to ARM because it's transparent, but they will notice the 20 hours of battery life and thinner devices they get in the future once Intel is gone.

343 points · 6 months ago · edited 6 months ago

Your post has some good information, but it doesn't actually answer the OP's question, which long predates Apple using in-house designs for their CPUs. The answer to that is in various other posts, having to do with garbage collection versus reference counting.

However, you do have some revisionist history in your post. Firstly, PowerPC was partly owned by Apple, along with Motorola and IBM (as an ironic aside, ARM was co-founded by Apple, originally to design the processor for the much-maligned Newton). Apple wasn't dependent on anyone, but the simple reality is that Intel left PowerPC behind: Intel was being financed by the vast majority of the market, with a massive R&D budget, while PowerPC was scraping by with a tiny market. It couldn't compete. Apple brought processor design in-house for the iOS devices once they had so many billions in profits that they could eat all the R&D necessary (and no longer had to pool, effectively, with Samsung). They have done remarkable work, and have fantastic single-core efficiency, but it's disingenuous to say it's two generations ahead when so many other chips (e.g. Exynos) are edging it. As mobile exploded, and money started pouring into mobile designs, those long-derelict ARM cores got dramatically more competitive.

Intel, it is notable, is in a tough situation where their biggest worry is competing with themselves: their cash cow of x86 is in the office and the data center, so they've always crippled their mobile offerings. The former makes them hundreds of dollars per chip, while the latter is dollars per chip at best. This is the same reason Nvidia pulled back on some fantastic mobile chips, market-leading chips, because they make a shitload more money selling the X1 to Nintendo than to a mobile maker.

Secondly, Apple most certainly wasn't remotely first with big.LITTLE, and it's even iffy to say that they were first with 64 bits. ARMv8 was introduced as a reference design two years earlier (ergo, any ARM maker could have started fabbing chips if there was a market). Android as a project wasn't prioritizing 64-bit, so makers simply didn't move to hardware that the OS couldn't support.

Your ending bit on Intel versus ARM is just ridiculous, and reads like an article from the 1980s. It is wrong on every level in a modern context. The labels CISC and RISC don't even make any sense any more.

58 points · 6 months ago · edited 6 months ago

PowerPC was partly owned by Apple

Yes, but IBM was the one actively developing the architecture at the time, and they were going too slow for Apple's tastes. The promised 3 GHz G5s never happened, and IBM couldn't get the POWER series running cool enough to even consider continuing with it in the PowerBook. This was a big deal at the time, and IBM certainly did screw up Apple's timeline.

while PowerPC was scraping by with a tiny market. It couldn't compete.

This last part simply isn't true. The PA6T was incredibly promising and was even developed outside of AIM.

Apple brought processor design in house for the iOS devices when they had so many billions of profits they could eat all the R&D necessary.

Apple was talking to P.A. Semi several years before buying them, and the consensus was that Apple would ditch AIM but stay with PPC, going with the PWRficient series. It would have supported multiple cores and arguably ran cooler than competing Intel chips. Instead, Apple realized early on that their future was in the iPhone and not the Mac. They bought the company, axed R&D into PWRficient and moved the team to ARM.

Android as a project wasn't prioritizing 64-bit, so many makers simply didn't move to hardware that the OS couldn't support.

doesn't that further show how, when you control all the modules encompassing a product, you can coordinate the software and hardware together and make a big transition like 32-bit -> 64-bit easier and faster than your competition?

Your ending bit on Intel versus ARM is just ridiculous, and reads like an article from the 1980s. It is wrong on every level.

You really don't think Intel is worried at all that Microsoft has Windows 10 on ARM, in addition to a transparent Rosetta-like runtime transpiler? We're not talking about the NT kernel simply having support for ARM; this clearly goes far beyond that. You don't think they're worried that Apple is about to cancel all future contracts with Intel for the Mac altogether? Intel has enough trouble keeping the thermals in their desktop-class chips in line (go look at the thermal spikes people report with the 7700K, for example). You really think, in the portable direction Apple (and the industry) is headed, that Intel has a future without another massive re-design?

The labels CISC and RISC don't even make any sense any more.

How can you say that? Your Intel CPU is running a RISC-like core and translating x86 instructions down to uops that execute on that core. Those x86 instructions were created pre-P6 microarchitecture for true CISC chips. The legacy x86 instruction set intended for true CISC chips was kept for compatibility all these years, but now software has caught up and Intel is left holding the bag for something nobody wants anymore.

This last part simply isn't true.

It's completely true. PowerPC was totally eclipsed by Intel. PWRficient was singularly targeted at power efficiency, and had little market because that just wasn't enough of a draw.

doesn't that further show how

Yes, it absolutely does. I don't disagree with that at all. Apple has remarkable control, and as a developer who has targeted both Android and Apple, I vastly prefer Apple devices. I'm glad there's competition though.

You really don't think Intel is worried at all that Microsoft has Windows 10 on ARM in addition to a transparent rosetta like runtime transpiler?

Transcoding will always be somewhat second tier, and I doubt Intel is all that worried. Intel is likely worried about Apple, but compared to their data center cash cow Apple is small, small, tiny potatoes. Again, everything Intel does has to be balanced against competing with themselves.

ARM has been purported to be ready to take over the data center for decades now. When it actually gets to data center scale, though, the benefits and efficiency just dissolve.

Your Intel CPU is running a RISC like core and translating x86 instructions own to uops that execute on that core.

Instruction sets are largely interchangeable. On both ARM and Intel chips that instruction set is converted to microcode that can contain multiple microoperations, perfectly optimized for the processor. That whole debate hasn't been relevant for years.

PWRficient was singularly targeted at power efficiency, and had little market because that just wasn't enough of a draw.

No, it had little market because its customer was the US government, and then it lured Apple in, but instead of being their customer Apple bought them. See here just how much punch this startup had. Apple was smart to acquire them; you need only look at their current SoC performance and power consumption to see how well it worked out.

I'm glad there's competition though.

Same here. The mobile space was not as exciting when the high end market was dominated by Windows CE and PalmOS.

Transcoding will always be somewhat second tier

Performance in Rosetta was fantastic, and we're a decade later now. Just look at how well JIT compilation performs in V8, or you can even draw parallels to how the JVM works. Throw in a cache layer so compilation only has to happen once, and you have near-native performance. I actually trust Microsoft not to drop the ball here because, in a sense, they need this to work: it will enable Windows to compete with Android and iOS in a way that Windows Mobile, CE and RT never could.

but compared to their data center cash cow Apple is small, small, tiny potatoes

Which is exactly why I am comparing IBM and Intel. IBM also ended up in a position where they had more incentive to pursue enterprise development. IBM transitioned to enterprise and away from the consumer space in a strong way and I think that's the easy out for Intel here too because they're also positioned to do the same thing.

That whole debate hasn't been relevant for years.

And in my original comment I only bring it up when discussing what happened to Intel in 1993. I don't think we're in disagreement here.

Performance in rosetta was fantastic and we're a decade later now

Sure. This area isn't new. While seldom heralded, Intel built a transcoder for Android that allowed ARM-native code to run on x86, and in benchmarks it was very comparable with x86-native code. I mean, I ran these benchmarks myself - building standard benchmark code in LLVM targeting either ARM (run through the transcoder) or x86, with all optimizations - and the results were very comparable, because the instruction set is effectively high-level. I'm saying that Intel will make some noise, but ARM hasn't had a power-efficiency advantage for years. Microsoft is just trying part two of the Windows 8 thing.

No it had little market because it's customer was the US government

This is a strange comment. They were an open market builder, starting with the POWER core (which, like ARM, has reference designs). No one was interested. Apple bought them just to get the people.

And in my original comment I only bring it up when discussing what happened to Intel in 1993.

You were just saying why Intel is doomed based upon the 1980s CISC-vs-RISC argument.


14 points · 6 months ago

Performance in rosetta was fantastic

Uuuh...I detect a severe case of rose-colored glasses. I mean, rosetta was impressive for what it was, but...fantastic? Most rosetta software ran waaaay worse on (nominally much faster) Intel CPUs.

It certainly was far from near native back then, and even to this day there hasn't been a cross-arch (re)compiler that has come close.

If they truly reach near native performance with W10 on ARM, that would be a serious breakthrough for computing in general, but I believe it when I see it.

Consider how well it performed for its time, and now consider how much better Microsoft's solution will be now that we've all learned how to build better JIT compilers. Also, as we have faster machines now, we're able to dedicate more CPU time to analyzing the x86 instructions in order to find the best native translations. This process will only have to occur once thanks to caching.

4 points · 6 months ago · edited 6 months ago

Yes, I've considered all this. And you seem to severely underestimate the difficulty of the problem.

What makes it difficult is that you don't have high-level code to begin with that lends itself well to compilation. You have machine code fully optimized to run on a particular architecture, down to the choice and order of instructions, register and (L1) cache considerations, etc. All the things compilers can do to make that code run fast have already been done, and the original high-level information those optimizations were based on isn't there anymore.

It's a much harder - possibly even unsolvable - problem to reason backwards from that level to find an equivalent sequence of instructions for another arch that will have equivalent performance in general.
And most developments in "normal" compiler tech won't help you here - yes, V8 has become incredibly fast, but it expects JavaScript, not x86 machine code. There's a lot of work done in this area, especially in the enterprise/mainframe space. And even IBM (who acquired Transitive, the company behind Rosetta) settled on a solution where dedicated Xeon (x86) "proxy" blade servers transparently run x86 binaries alongside a z/OS (z/Architecture) environment instead of recompiling them.

(And again: "Rosetta performed well" is relative. Rosetta still was several times slower than native in CPU-bound situations.)

27 points · 6 months ago · edited 6 months ago

You really don't think Intel is worried at all that Microsoft has Windows 10 on ARM in addition to a transparent rosetta like runtime transpiler?

Absolutely not, because the ARM architecture is a teeny, tiny little toy for babies compared to a modern i7. Compare my first Core i7 920 to the Snapdragon in a modern Android phone:

Now look at the GeekBench scores. The 10 year old Intel design is still 3X+ faster than the ARM design.

Now compare that to modern i7:

It destroys it - it's 6x+ faster than the ARM design. ARM has far fewer execution units, so it simply can't compete, and never will without a complete redesign that would kill it as a mobile processor. See, that's what you are missing: it's only successful in the mobile space because it uses so little power, and it uses little power because it has few execution pipelines. RISC/CISC has nothing to do with it.

Those x86 instructions were created pre-P6 microarchitecture for true CISC chips.

A. RISC instructions are a subset of CISC instructions. Hence the whole "reduced" thing.

B. All modern AMD/Intel parts are x86-64 designs, which is an effectively modern hybrid architecture that blends the best (and worst!) of both CISC and RISC architectures.

C. The internals of the i7 are a RISC core with a transparent, Rosetta-like runtime transpiler that breaks down CISC instructions into RISC-like micro-ops.

So, basically, Intel already built a better RISC core than ARM did. And then built a hardware transpiler on top of it to allow it to run legacy code with no performance penalty!

It gets worse for Intel's competitors when you realize they can build an i7 for the mobile computing market and can effectively emulate a low-power competitor simply by clocking down and disabling features. And then you can plug it in at your buddy's place and game with him!

Indeed, Intel has given up on the smartphone market. Because of low margins. They will continue to build PC parts forever.

Anyways, I work at a STEM Uni. The kids show up these days with PCs, smartphones, consoles, tablets, etc. They didn't replace one with the other.


A small correction: Android actually had simultaneous use of big and little cores first, with the Exynos 5 Octa back in 2013, and global task scheduling has been standard since about 2014, whereas Apple's first globally scheduled big.LITTLE SoC was the A11, released in 2017. Otherwise a very interesting post!

that team from P.A. Semi has designed Apple's ARM CPUs for the iPhone ever since

It took them until 2012 to ship an actual custom CPU though, with the A6. They've been using ARM Cortex cores before.

first to allow the use of "big" and "little" cores simultaneously

naaaah. Samsung shipped a big.LITTLE Exynos in like 2013.

In five years from now we likely won't even recognize what MacOS and Windows are anymore

Software is extremely hard to kill once it gets even slightly popular. There are still mainframes running COBOL programs out there in the world, mostly in airports and old banks and such.

they physically can't compete speed/heat/size with ARM now

ARM is ahead on size, but really behind on speed. Where are the ARM chips with workstation-grade performance? Cavium makes 48 core ThunderX's but their single core performance is significantly behind x86. Apple indeed has better single core performance than most other ARM CPUs but it's still not close to desktops.

Sure mobile devices are getting more popular for web browsing, but the high performance market will NOT go away.

Side note, Intel is indeed starting to lose. To good old AMD, that is. Zen is an incredible success story already. Imagine what it will be when they get to the 7nm process! Intel is still struggling to get reasonable yields on their 10nm. AMD / Global Foundries will kick their ass hard.

There are still mainframes running COBOL programs out there in the world, mostly in airports and old banks and such.

Modern z/Architecture mainframes are pretty nice, and there is, of course, modern software being developed on it.

It also just so happens that IBM are the fucking undisputed Kings of backwards compatibility. Because the COBOL program written back in 1964 on the then so-brand-new-the-serial-number-is-in-the-single-digits System/360 Model 40 can still run, unmodified, on z/OS today.


naaaah. Samsung shipped a big.LITTLE Exynos in like 2013.

Did it allow simultaneous use of both sets of cores, as the other person emphasized? I can't remember.

Where are the ARM chips with workstation-grade performance?

Yeah, that's the big question when these conversations turn toward arch switches. It makes little sense to switch only part of the Intel lineup; so how do they switch the big stuff?

Truth is that Intel failed in the mobile space, but they jealously defend the workstation-and-up space, where absolute power levels are also far less of a concern. There's TDP (or "SDP") for total power, performance/watt, and total performance, and workstations care much more about the last two than the first; as long as it fits inside a healthy envelope, it's okay.

As far as consumers go, very few are interested in high end workstations. You or I might be the exception, but the majority of people probably already own and use machines with processors less powerful than the A10X.


Imagine what it will be when they get to the 7nm process! Intel is still struggling to get reasonable yields on their 10nm. AMD / Global Foundries will kick their ass hard.

Source on that? I admit I haven't been following the field, but my understanding is that Intel has been pretty good at maintaining their tech lead at the fab.

Their tick-tock clock is broken. The next process technology was due two years ago, and we will be lucky to see it this year. They literally hit a wall with EUV and 10nm.


32 points · 6 months ago

I don't know your background, nor how much stock to put into this, but this was a fascinating writeup. One of the longest posts I've completely read through. Thanks!

22 points · 6 months ago · edited 6 months ago

Lots of inaccuracy and assumption though. See the other responses to their comment.

Pentium is not a RISC chip - the Pentium of 1993, the P60 and P75, was quite the opposite of RISC: long, deep pipelines and massive complexity. You're mashing up history, conflating MMX (which came a lot later and did use RISC-style microcode) with a specific mobile Pentium that did use a RISC-style design.

In 93 there was nothing to compete with Pentium, just as there was nothing to compete with 486 DX in the period from 90-92. The market was focussed on raw maths performance and all of Intel's real competition had been making successful 486 clones in that period.


While x86 (Intel) chips are power-inefficient compared to ARM at low power, they are the best you can get in the domain of high clock speeds and per-core performance. Apple decided that their laptops should effectively be netbooks, and that high performance per core is not what a MacBook should be about. This is reinforced by the decision to put shitty cooling solutions on their Intel-powered laptops, which hold back the processors built into them. So it makes sense to go with a cheaper ARM CPU - but if you need a laptop with high performance, it doesn't.

5 points · 6 months ago

Apple should just create their own CPU for the Mac ... If only so they can call it the Apple Core...


38 points · 6 months ago

Has already been answered, but to simplify: during the early days of Android they wanted it to run on a wide, wide range of hardware, from ARM to x86 architectures.

iOS was designed for ARM, and ARM alone.

Therefore Android uses a virtual machine to maintain compatibility across platforms, whilst iOS doesn't - its apps run natively.

VMs need more memory than a native application. The very nature of Java is to run in a VM, so Java applications on PC and all other platforms are interpreted (or JIT-compiled) on the fly, while C-based and other compiled applications are not interpreted, and run "natively".
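The VM-vs-native distinction above can be sketched with a toy stack-machine interpreter (made-up opcodes, Python standing in for any host language). The point is that the interpreter loop itself must sit in memory alongside the bytecode it runs, whereas native code is just executed directly:

```python
# Minimal sketch of why a VM costs extra memory: besides the program
# (bytecode), the process must also keep the interpreter itself resident.
def interpret(bytecode):
    """A tiny stack-machine interpreter - this loop is the 'VM layer'."""
    stack = []
    for op, arg in bytecode:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack.pop()

# The Java-style path: bytecode plus interpreter both live in memory.
program = [("PUSH", 2), ("PUSH", 3), ("ADD", None)]
print(interpret(program))  # 5

# The 'native' path: the same computation with no interpreter in between.
print(2 + 3)  # 5
```

Both paths produce the same answer; the VM path simply carries an extra layer of machinery (and, on real systems, JIT caches and garbage-collector bookkeeping on top of that).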

7 points · 6 months ago

iOS is just a customized Darwin, the basis of OS X. If you get root access to a machine it's all laid out exactly like OS X is and is closer to how FreeBSD lays out its file structure than Linux.

Apple also has extensive history in porting their OS to new platforms with little to no interruption (for the most part).

They moved from 68k -> PPC. Then from PPC -> PPC64. Then PPC64 -> x86/x64.

They used to package apps as 'fat binaries' which meant the same App would run on 4 different platforms. They also made it headache free for the developers. Adding a new platform was just checking a box. As long as you didn't do anything too weird in XCode it would "just work".


209 points · 6 months ago

RAM on smartphones is mostly used for multitasking - keeping more apps open at the same time. If a Windows PC runs out of RAM, it takes the data of a process which isn't actively being used and writes it to the hard drive. The process keeps running, but if you try to use it again you have to wait a short amount of time until it is responsive again.

iOS and Android don't do this, because it would cause a lot of wear on the integrated flash storage. Instead, when they run out of memory, they terminate a background app, so that if you open it again it won't be where you left off, which is bad for the user experience. E.g. if you play some game on your smartphone, but you switch to WhatsApp to write a message and check something in your browser, when the smartphone runs out of RAM it will close the game, so if you switch back you have to load it up again and maybe lose some progress.

To avoid that, Android phones just have a ton of RAM, but iPhones have a very sophisticated compression technique to store more inactive apps in RAM. Candy Crush takes about 300-500 MB of RAM while active on both iOS and Android, but if you switch to another app iOS can compress it to about 40 MB, while on Android the size does not really change at all.
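The compression idea can be illustrated with a toy sketch using zlib. The "app memory" here is synthetic, and Apple's actual compressor and ratios aren't public, so the numbers are only illustrative - but the principle is the same: an idle app's working set is highly redundant (zeroed pages, repeated structures), so it compresses dramatically:

```python
import zlib

# Synthetic stand-in for an inactive app's memory: mostly zeroed pages
# plus repetitive in-memory state (~0.5 MB total).
app_memory = b"\x00" * 400_000 + b"tile_state:candy;" * 5000

# Compress the idle app instead of evicting or killing it.
compressed = zlib.compress(app_memory, level=6)

print(len(app_memory), len(compressed))
# The compressed copy is a tiny fraction of the original, so many more
# "backgrounded" apps fit into the same physical RAM.
```

Decompressing on app switch (`zlib.decompress`) is far faster than reloading the app from flash, which is why this trades a little CPU for a much better resume experience.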

Original Poster41 points · 6 months ago

Thanks, this is what I wanted to know.

26 points · 6 months ago

It's cool that, while the two phones aren't massively different from an outside perspective aside from the OS, both do things differently behind the scenes that most people don't know about. You end up at the same destination, but the route taken is different on both.

The different route is that Android phone manufacturers have been eating the cost of the higher-end hardware (more RAM, faster CPU) needed to maintain competitive performance. Before ART, this was far more the case. It's surprising to think how much unnecessary power tens of millions of phones running Dalvik were gobbling.


13 points · 6 months ago

And is there a reason why Android doesn't use this sophisticated technique as well?

And is there a reason why Android doesn't use this sophisticated technique as well?

They have another technique that solves the same problem. When an app is about to be closed, it is told that this will happen and gets a chance to save its state.

Say you have a notes app open. The entire note and other data is in memory - let's say 1 MB; it could very easily be much more. The app is now told that the memory will be cleared and asked what it wants to save. The note is already saved in storage; it's only in memory so it can be displayed on screen. So the app only saves the filename and the keyboard cursor position - say 1 kB of data.

When you switch back to the app, it opens as if it were the first time you ever used it - except it sees the saved state, opens the file, and moves the keyboard cursor to the last position.

To you, it will look like the app was never closed, except maybe you notice a slight delay while opening. Just like on iPhone.
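The save-a-tiny-state pattern described above can be sketched like this. The notes app is hypothetical and written in Python for brevity; on real Android the hooks are `Activity.onSaveInstanceState` / `onRestoreInstanceState`:

```python
import json

class NotesApp:
    """Hypothetical notes app illustrating minimal state save/restore."""

    def __init__(self):
        self.open_file = None
        self.cursor = 0
        self.contents = ""   # potentially megabytes while the app is live

    def save_state(self):
        """Called before the process is killed: persist only ~bytes of state."""
        return json.dumps({"file": self.open_file, "cursor": self.cursor})

    @classmethod
    def restore_state(cls, blob):
        """Cold-start the app, then replay the saved state so it looks resumed."""
        app = cls()
        state = json.loads(blob)
        app.open_file = state["file"]
        app.cursor = state["cursor"]
        app.contents = f"(reloaded from {app.open_file})"  # re-read from storage
        return app

app = NotesApp()
app.open_file, app.cursor, app.contents = "todo.txt", 42, "x" * 1_000_000
blob = app.save_state()   # tens of bytes saved, not the 1 MB working set

restored = NotesApp.restore_state(blob)
print(restored.open_file, restored.cursor)
```

The OS reclaims the full megabyte of RAM, yet the user comes back to the same file and cursor position - the "slight delay while opening" is the cost of re-reading the note from storage.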

iOS has those software hooks too: applicationWillTerminate and didReceiveMemoryWarning. You're supposed to handle cleanup there.

It restricts what those background apps are able to do.


Thanks for the explanation. I'm interested to know how BlackBerry OS works compared to iOS and Android when multitasking.

BlackBerry 10 was a super smooth and stable multitasker. I know it was based on QNX, but I'd also like to know more.

Flash lasts more than enough cycles to use as swap; that's not a reason not to use it.

It’s slower (save and fetch process, not access)

I think this is one thing people have a hard time wrapping their heads around - yeah SSDs are fast and can survive a ton of write cycles, but phone flash storage is not quite the same. Just look at phone storage speedtests and you'll realize that only the super top percentage of smartphones actually have decent storage speeds....and even then those decent storage speeds are paltry compared to desktop SSDs.


Citation needed. Especially if the page size on the phone is 4096 bytes but the flash block erase size is higher.

Is there any documentation on their compression technique, or is it more of a trade secret?


I think most comments are missing the biggest thing: what the operating system does with apps in memory that aren't active. In short, Android keeps them in memory where they can execute tasks in the background (though it is moving to restrict background services), while iOS allows only a few specific things apps can do in the background, and may use compression to reduce their RAM usage.


While we could get really detailed talking about memory management here, it's more about what was more important to each set of developers, as there are benefits to both approaches.

Simply put, most of this has to do with what each OS does with apps in the background. iOS puts the app into a kind of "sleep" state; due to this it uses less memory, but the trade-off is it can only perform certain tasks. Android really just leaves the app running in the background, meaning it can perform most tasks. Both will kill apps if they need to free memory for something else.

Some of the decisions for this are based around iOS being a much more closed-off system while Android is an open system. What I mean by this is that iOS comes with some things pre-installed that can't be deleted or replaced (keyboard, SMS viewer, etc.), while on Android you can.

It really comes to different approaches the operating systems take and what they prioritize as important to the user experience.

Apple just has control over the entire software and hardware aspects of their phones.

This allows them to standardize their code across a small set of devices. This standardization allows them to optimize their code to run on very specific hardware configurations.

Android (the Google flavor specifically) only barely controls the software, and doesn't control the hardware, given their open-source strategy.

Android has to work well on a myriad of hardware and, to some extent, a myriad of different software flavors. The carriers and vendors can make enhancements to the software. Because of this fragmentation of the hardware and software, it's not cost-effective to optimize 100% for every possible combination of software and hardware. Android's promise is that it will run almost awesome all the time. It does this by throwing more resources at the problem from a hardware perspective (more RAM, better processor, etc.). These hardware changes also allow the different vendors to differentiate themselves amongst each other, and to price their phones accordingly.

This was all more evident in the early days of smartphones. I'm an iOS guy myself, but even I'll acknowledge Android runs pretty solidly these days, and the issues are more subtle.


**Explain Like I'm Five is the best forum and archive on the internet for layman-friendly explanations.**   Don't Panic!
