What metadata is inside your photos?
Every time you take a digital photo or video, your camera or smartphone records a pile of information about the image you just captured. This data about your data is called “metadata.”
Some of this metadata is fairly innocuous: the date and time you took the picture, say, or the ISO, shutter speed, aperture (f-stop), and white balance.
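If you’re curious, here’s a minimal sketch of reading those basic tags yourself, assuming a recent Pillow (9.4 or later) and a hypothetical photo.jpg:

```python
# Minimal sketch: dump basic EXIF tags with Pillow (pip install Pillow).
# "photo.jpg" is a hypothetical placeholder path.
from PIL import Image, ExifTags

with Image.open("photo.jpg") as img:
    exif = img.getexif()

# Top-level (IFD0) tags: Make, Model, DateTime, Orientation, ...
for tag_id, value in exif.items():
    print(ExifTags.TAGS.get(tag_id, tag_id), "=", value)

# Camera settings (ISOSpeedRatings, ExposureTime, FNumber,
# WhiteBalance, ...) live in the Exif sub-IFD:
for tag_id, value in exif.get_ifd(ExifTags.IFD.Exif).items():
    print(ExifTags.TAGS.get(tag_id, tag_id), "=", value)
```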
Smartphones, however, are bristling with sensors and are almost absurdly aware of their surroundings. A typical image carries several hundred of these little nuggets of information:
- They’ve got GPS latitude, longitude, and altitude, describing where you were when you took the shot, frequently accurate to within a couple of feet (see the sketch after this list).
- They know how you held your phone, which way it was tilted, and the compass direction it was pointing when you clicked the shutter.
- They can contain your phone’s serial number, the current temperature, the current barometric pressure, how many seconds it’s been since you turned the phone on, your current battery level, and how long it’s been since you last charged your battery.
- They estimate how far away the subject you focused on was, and in the case of two-lens cameras, they may contain a full 3D depth map of the entire image.
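To show how trivially the location bits can be read out, here’s a hedged sketch, again with Pillow (9.4 or later) and a hypothetical photo.jpg, that converts the EXIF GPS tags into plain decimal coordinates:

```python
# Sketch: pull GPS coordinates out of EXIF with Pillow.
# "photo.jpg" is a hypothetical placeholder; any geotagged JPEG works.
from PIL import Image, ExifTags

def gps_coords(path):
    """Return (latitude, longitude) in decimal degrees, or None."""
    with Image.open(path) as img:
        gps = img.getexif().get_ifd(ExifTags.IFD.GPSInfo)
    if ExifTags.GPS.GPSLatitude not in gps:
        return None

    def to_decimal(dms, ref):
        # EXIF stores degrees, minutes, and seconds as three rationals.
        degrees, minutes, seconds = (float(v) for v in dms)
        value = degrees + minutes / 60 + seconds / 3600
        # Southern and western hemispheres are negative.
        return -value if ref in ("S", "W") else value

    lat = to_decimal(gps[ExifTags.GPS.GPSLatitude],
                     gps[ExifTags.GPS.GPSLatitudeRef])
    lon = to_decimal(gps[ExifTags.GPS.GPSLongitude],
                     gps[ExifTags.GPS.GPSLongitudeRef])
    return lat, lon

print(gps_coords("photo.jpg"))
```

Altitude (GPSAltitude) and the compass heading (GPSImgDirection) live in the same GPS IFD and read out just as easily.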
All of this information travels along inside your images and videos as explicit metadata. Beyond that, recent machine learning techniques can implicitly extract who and what is in your images, now with accuracy that exceeds human performance.
The companies that make their money by monetizing you just love it when you give them your photos. You become a more valuable commodity to them because they can sell your attention, via advertising, for more money.
Why give them what they want?
Keep your data yours. Use PhotoStructure.