I find this pretty wild.
Apple wants its phone camera to take photos like the ones you get from an SLR with a big lens and a big aperture: sharp subject, soft background. But optics like that can't physically fit in a phone body.
So instead, they put two cameras in the phone, each with a small aperture; do intensive computation to build a depth map from the offset between the two photos; then digitally apply Gaussian blur to the out-of-focus regions of the image.
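The second half of that pipeline, blurring keyed to a depth map, can be sketched in a few lines. This is a toy illustration, not Apple's actual algorithm: it assumes a depth map already exists, uses a hard in-focus/out-of-focus mask instead of a blur radius that varies continuously with depth, and all names are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_bokeh(image, depth, focus_depth, depth_tolerance=0.1, sigma=5.0):
    """Blend a sharp image with a blurred copy, keyed by a per-pixel depth map.

    image: HxW float array (grayscale, to keep the sketch simple)
    depth: HxW float array, normalized 0..1 (0 = near, 1 = far)
    Pixels whose depth is within depth_tolerance of focus_depth stay
    sharp; everything else gets the Gaussian blur (the fake bokeh).
    """
    blurred = gaussian_filter(image, sigma=sigma)
    # Per-pixel mask: 1.0 where in focus, 0.0 where out of focus.
    in_focus = (np.abs(depth - focus_depth) <= depth_tolerance).astype(float)
    return in_focus * image + (1.0 - in_focus) * blurred

# Tiny demo: a bright "subject" stripe at depth 0.2 on a background at 0.9.
img = np.zeros((32, 32))
img[:, 14:18] = 1.0
depth = np.full((32, 32), 0.9)
depth[:, 14:18] = 0.2
out = synthetic_bokeh(img, depth, focus_depth=0.2)
```

In the demo, the stripe keeps its original sharp values while the background picks up the smeared-out light from the blurred copy. A real implementation would scale the blur kernel with distance from the focal plane rather than using a binary mask, which is part of why the result can look slightly "off" compared to true optical bokeh.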
Same effect?