Digital Sensors: CCD vs CMOS
There are two types of sensors in use in modern consumer digital cameras – CCD and CMOS digital image sensors (Charge-Coupled Device and Complementary Metal-Oxide Semiconductor, respectively).
There are other kinds, but they're reserved for super high-end applications. We're gonna focus (hah) on the digital sensors found in cameras that we – and you – actually have access to.
There are clues in the names to help you keep them straight. "Coupled" suggests a joining or linking together – bringing to mind coupled rail cars – and a "semiconductor" has so much in common with a microprocessor that, for our purposes, the difference doesn't matter.
Understanding the similarities and differences in CCD and CMOS digital sensors and how they work can make a big difference to photographers. This involves three considerations:
- How they perform for photographers in different lighting situations.
- The kind of images possible – given optimum conditions.
- Your budget. How much do you want to invest? (We chose that word carefully.)
Let’s get busy clearing up some misconceptions! But before we get to that, where are our manners?
Welcome To The Lone Loon Song Deep Dive Into CCD And CMOS Image Sensors:
First of all, welcome. We’re mighty glad you’re here. Otherwise, we’d only be talking to ourselves and only crazy people or people with a lot of money in the bank do that. So without you, dear reader, there’s a legitimate concern that they might just pack us off to the noodle house. (Who are we kidding? They might yet, in any event.)
We don’t want to waste any of your time – and we’ve got a lot to talk about. You’re probably here from a Google search, and therefore looking for something specific.
Here is our Table of Contents: (You’re certainly welcome to read this article straight through from soup to nuts, as God Intended, but feel free to bounce around if you’re in a hurry and need to address or solve a problem with specific information. We prefer not to make our T.O.C. sticky because we find that annoying, awkward, and slow to load. YMMV. Use the right mouse button or the back button to get back here. We’ve also included links back to it.)
Hey! We’re In A Hurry! Does This CCD – CMOS Digital Sensor Thing Have a Table Of Contents?
Are kittens warm, fuzzy, and adorable? Of course they are – and damn right we do. (*No shade to puppies.)
Table of Contents: Digital Sensors For Cameras: CCD or CMOS?
- Digital Sensors – Both CCD and CMOS 101:(Read this first if you’re in a hurry.)
- The Basic Job Of A Digital Sensor
- CCD and CMOS Sensors – Similarities:
- There aren’t any pixels or megapixels (MPs) on a digital sensor.
- CCD and CMOS Sensors: How They Work:
- A Big – And Consequential – Difference – File Types RAW and JPEG
- Only As Much History As Necessary: (A Smidge more, perhaps.)
- Megapixels and Printing: A Perfunctory Look
- Conclusion
Digital Sensors – Both CCD and CMOS 101:
The most common sensor found in cameras widely available to the public is one of two types: CCD or CMOS (Charge-Coupled Device and Complementary Metal-Oxide Semiconductor, respectively). CCD sensors are the older technology, popular with consumers from roughly the mid-2000s. Although cameras with this kind of sensor are no longer being produced for the consumer market – CMOS sensors have replaced them – the second-hand market for CCD cameras is enjoying something of a renaissance, with users singing their praises. (A lot of these, paradoxically, are Gen-Z'ers who like the way the "point and shoot" CCD cameras look on TikTok. They're driving the second-hand prices up. We can't really recommend them. We do love the DSLRs from this era. Our Olympus is a lot of fun, but nobody ever accused it of being a "good camera". Really. We're not being mean, we own one! Just calling balls and strikes, here.)
Although prized for their low-light performance, large dynamic range, and low noise, they're not very energy-efficient or nimble to use. Nevertheless, many of these full-frame and APS-C DSLRs have thousands of loyal fans. We think they're an outstanding bargain. (So much so that we've decided to write an article on these incredible cameras. Look for it.)
In any event, both of these sensors work in largely the same way. Small cavities on the sensor called photosites capture the intensity of light and colour and turn it into an electrical charge. Each photosite passes its charge to the next one (a coupled device makes us think of a train; Canon uses the metaphor of a bucket brigade). At the edge of the sensor the charge is collected, amplified, digitized, and sent to the camera's image processor, which turns that data into a digital image. Because this design uses fewer amplifiers and analog-to-digital converters, "noise" is kept to a minimum.
Cameras with CMOS sensors were introduced to address the inefficiency of CCDs, and by around 2010 they had largely taken over the consumer market. These sensors moved processing down to the photosite level. Much cheaper to make – they're basically microprocessors and thus can be manufactured at scale – these energy-efficient sensors gave photographers the ability to take many more photos per charge. A pleasant surprise was that, with more processing done at the photosite level, they were much quicker, and this nimbleness allowed faster bursts of consecutive photographs.
Although far less of an issue today, in those early days the photosite-level processing that brought those benefits also brought a kind of distortion called "noise", caused by the higher number of amplifiers and analog-to-digital converters. Today, software innovations have largely solved those concerns.
Still, for applications where batteries are less of an issue, CCD sensors have their champions. It's interesting that they're still prized in areas where superior image quality is a priority – health sciences, microscopic imaging, photographs from Mars, and so on.
We’re going to restrict our remarks to how these sensors work, and how that affects their performance, but it helps to think a bit about what brought us here.
The vast majority of photos taken today are not going any further than a website: Facebook (1080px wide for most posts, 1200px wide for landscape – whatever your ratio), Instagram (1080px wide, whatever your ratio), TikTok, or YouTube. The most popular – that could be the wrong word – let's say the most commonly used camera in 2024 is the one in your phone. And why not? It's perfectly acceptable for nearly everything except printing.
You know that old photographers' saying? "The best camera is the one you have with you." We always have our phones, do we not?
The Basic Job Of A Digital Sensor
No matter which kind of digital sensor your camera has, it has one basic job. Like film before it – or even now, to lots of folks who still love film – that job is to catch the intensity of light and colour coming in through the lens. Unlike film, a sensor is an electronic device. It has millions of photosites which "catch" the photons of light that come through the lens. Photosites measure the intensity and colour of the light and produce an electrical charge. That charge is digitized and passed to the camera's image processor, which produces an image made of pixels.
Both of these types of digital camera sensors work in similar ways but with some important differences. Therefore, understanding how both types of sensors work is useful. We’re not saying one is better than the other. Each has strengths and weaknesses. Is a carpenter’s hammer better than a framer’s hammer? Yes, and no. They’re designed for different purposes. But before we get to that, you have to consider this:
Where are your images going to end up? It matters. Luckily, you can figure this out quite easily by answering a few simple qualifying questions, to wit:
- Are your images going to end up in print? Brochures, glossy magazines, that sort of thing? Even a calendar, or a print displayed on a wall? (Be advised: there is nothing inexpensive about this. Why, those folks are even snobby about the coffee. Printing is where the number of megapixels truly matters.)
- Are your images going to end up primarily on a screen? Online? Your Facebook, Instagram, TikTok, YouTube, email, or the like? (Far less expensive, but depending on the project, it's unlikely to be exactly cheap. Although, believe us, it's vastly cheaper than film – the scanners and whatnot. Kids today. We had to walk to school through the snow, carrying a log – uphill both ways.)
- There is a third option: Are you going to keep your options open – meaning you’re interested in print as well as digital distribution? (This will be just as expensive as print, but the argument could be made that it’s a better value given that you’re using the same gear for both printing and digital distribution. Warning: it’s a slippery slope, and nearly all avid photographers and/or professionals end up going this route.)
You don’t have to make up your mind right now. That’s why we put together this little labour of love.
Having said all of that, here we go:
We test and review outdoor stuff. This is all either stuff we like or not. We intend to be as honest with you as possible because we want to be useful and earn your trust. If you purchase something through our links, we'd be honoured, but please know that we will earn a commission, at no cost to you. "We'll tell you nuthin' but nuthin' but right, Donnie." – Al Pacino in Donnie Brasco.
CCD and CMOS Sensors – Similarities:
CCD sensors are an older technology. (That’s pretty funny. Yep, “old and in the way.” Not!) Paradoxically, they make use of the same MOS (Metal Oxide Semiconductor) technology that is used in the CMOS sensors, which are a “newer” technology. It’s kind of an interesting story – or not …
The CCD was invented in 1969 or 1970, but it wasn't until 1983 that Sony released the first mass-produced consumer CCD video camera, the CCD-G5.
Both kinds of sensors are covered with millions of tiny photosites, also called sensels (we like "photosites"; "sensel" is a contraction of "sensor element").
Whatever you call them, don't call them "pixels". Don't fan those flames. As noted earlier, photosites are little cavities that "catch" light. In either sensor type's case, they turn that light (photons) into an electrical charge. Both have a filter that assigns colour – usually called a Bayer array – through which the light passes on its way to the photosites on the sensor. (We like Bayer arrays, so we're gonna talk about them in depth later on. We should mention that, except in their medium format sensors – not discussed here – Fuji uses a different colour filter, called an X-Trans filter, to handle the colours. They say it's a 6 x 6 randomized filtration pattern, used in their APS-C crop sensors. Somewhat paradoxically, Fuji uses the Bayer array in their larger medium format sensors. They must have their reasons.)
Please pay attention to this: A photosite (or sensel) is a physical thing taking up space on a sensor. You can measure it. If your eyes are good enough, you can even measure it when there is no power to the camera. It's measured in "micrometers", sometimes called "microns". (The symbol for a micron is µ.) A pixel is a digital abstraction that exists in a digital image. It doesn't exist after the electricity is turned off.
There aren’t any pixels or megapixels (MPs) on a digital sensor.
A megapixel (MP) is 1 million pixels. Megapixels have a lot less to do with "image quality" than a lot of people would have you believe. There is currently an MP race among manufacturers, and quite a few people are under the impression that more is more – that a higher number of MPs will lead to higher image quality, or that a camera with more MPs is "better". That's true just often enough to make things quite confusing.
When an image is generated it has a resolution that is measured in pixels, or MPs. This is determined by the linear resolution. "Linear" means a straight line. Digital images are always square or rectangular, and are measured by width and height. (Two straight lines.) Here's a quick example: we take a digital photograph with our iPhone 13, pull it into our computer, and get its info. (On a Mac, select the file and press cmd+i, or just right-click it.)
We learn that it’s dimensions are 4032×3024 (pixels). To get the resolution, we need to multiply the width by the height.(4032×3024 = 12192768.) If we divide that by 1 million, we’ll get 12.192768MPs. That’s a little more than 12MPs. This is because there are generally more photosites than there are pixels. The ratio is never 1photosite to 1 pixel exactly.
Before the photons (light) strike an individual photosite, they pass through a colour filter array which assigns a colour to the photosite – one colour per photosite. The most common array is called a Bayer array. It's a grid covered with red, green, and blue filters.
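To make that grid concrete, here's a tiny Python sketch that prints which colour filter sits over each photosite, assuming the common RGGB tiling (actual sensors may use a different ordering of the same 2×2 tile):

```python
# One 2x2 Bayer tile (RGGB ordering), repeated across the whole sensor.
BAYER_TILE = [["R", "G"],
              ["G", "B"]]

def filter_colour(row: int, col: int) -> str:
    """Colour filter over the photosite at (row, col)."""
    return BAYER_TILE[row % 2][col % 2]

# Print a 4x8 corner of the array: note two greens for every red and blue.
for r in range(4):
    print(" ".join(filter_colour(r, c) for c in range(8)))
```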
Apple says our iPhone 13 camera has 12MP. Each pixel in our image may draw on the output of several different photosites, all contributing to the image's colour, saturation, or some other attribute. The camera's image processor turns the digital data into the pixels that make up the resulting digital image. It's like a mosaic – one of those old art techniques where different coloured stones make up a picture. You've seen them. Here's a website featuring Beautiful Mosaics From Around the World.
So that’s sort of how pixels work. (Except mosaics don’t disappear when you power them down.)
In fact, there’s an extremely cool process called demosaicing.
Demosaicing (which can apparently be spelled any way you feel like) – also known as colour reconstruction – is a digital image processing algorithm. When the image processor gets the data from the sensor, it's not complete: each photosite recorded only one colour. So the algorithm essentially makes educated guesses, filling in each pixel's two missing colours from its neighbours to finish the full-colour image.
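Here's a toy Python sketch of the idea – assuming an RGGB Bayer layout, simple neighbourhood averaging, and wrap-around at the edges for brevity. Real cameras use far cleverer algorithms; this just shows the "guessing" step:

```python
import numpy as np

def demosaic_bilinear(mosaic: np.ndarray) -> np.ndarray:
    """Toy demosaic: fill each pixel's two missing colours by averaging
    the nearest neighbours that did record that colour (RGGB tiling)."""
    h, w = mosaic.shape
    rgb = np.zeros((h, w, 3))
    rows, cols = np.indices((h, w))
    # Masks saying which photosites sit under R, G, and B filters.
    masks = {0: (rows % 2 == 0) & (cols % 2 == 0),   # red
             1: (rows % 2) != (cols % 2),            # green (two per tile)
             2: (rows % 2 == 1) & (cols % 2 == 1)}   # blue
    for ch, mask in masks.items():
        known = np.where(mask, mosaic, 0.0)
        weight = mask.astype(float)
        # Average each 3x3 neighbourhood's known samples (the "guess").
        total = sum(np.roll(np.roll(known, dr, 0), dc, 1)
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1))
        count = sum(np.roll(np.roll(weight, dr, 0), dc, 1)
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1))
        rgb[..., ch] = total / np.maximum(count, 1)
    return rgb

mosaic = np.random.rand(6, 8)              # stand-in for raw sensor data
print(demosaic_bilinear(mosaic).shape)     # (6, 8, 3): a full-colour image
```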
Resolution:
Resolution refers to the total number of pixels in the image. For example, if you right-click on the Kumoba Pond image above and select "open image in new window", you'll see the linear measurements of pixels in width and height: 500 wide × 667 high. That's an aspect ratio of 3:4 – but since we took it in portrait (holding the camera vertically), it's 4:3 flipped. Portrait is taller than wide; landscape is wider than tall. To learn more about aspect ratios, here is an aspect ratio calculator.
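Or skip the calculator: reducing width:height by their greatest common divisor is nearly a one-liner. (The numbers below are the Kumoba Pond image and its original frame.)

```python
from math import gcd

def aspect_ratio(width: int, height: int) -> str:
    """Reduce width:height to lowest terms."""
    d = gcd(width, height)
    return f"{width // d}:{height // d}"

print(aspect_ratio(500, 667))    # 500:667 -- doesn't reduce; it's *about* 3:4
print(aspect_ratio(3024, 4032))  # 3:4 -- the original portrait frame
```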
To get the resolution of the image, we multiply the width by the height: 500×667 = 333,500 pixels. To get megapixels (1MP is 1,000,000 pixels), you divide by 1 million: this image is 0.33MP. (MPs are always rounded off.) Most resolution is quoted in MP. The original image had a resolution of 12,192,768 pixels (3024×4032), or 12.2MP. Our iPhone's sensor is (incorrectly) listed at 12MP, so there you go. We mean the camera produces 12MP images. The sensor has photosites.
When an image is on a screen, it's measured in pixels per inch, or ppi. However, since pixels are an abstract digital construct which cannot exist off-screen, if you want to print the image, it will be measured in dots per inch, or dpi. Printing sprays dots of ink on paper and typically uses the CMYK colour model (screens use RGB colour spaces such as sRGB). CMYK keeps things reasonably simple: Cyan is a greenish-blue, Magenta is a reddish-purple, Yellow is yellow, and Key is black. A printed image is produced from these colours, and you have to choose this model, or mode, before you export from your image editor. Printing really needs an article of its own. Hope you like math.
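For the curious, here's the textbook naive RGB-to-CMYK formula in Python. Real print workflows use managed colour profiles, so treat this strictly as an illustration of the model:

```python
def rgb_to_cmyk(r: int, g: int, b: int) -> tuple:
    """Naive RGB (0-255) to CMYK (0-1) conversion -- the textbook formula,
    ignoring the colour-profile management a real print workflow does."""
    rp, gp, bp = r / 255, g / 255, b / 255
    k = 1 - max(rp, gp, bp)          # Key (black) covers the darkest part
    if k == 1.0:                     # pure black: no coloured ink needed
        return (0.0, 0.0, 0.0, 1.0)
    c = (1 - rp - k) / (1 - k)
    m = (1 - gp - k) / (1 - k)
    y = (1 - bp - k) / (1 - k)
    return (c, m, y, k)

print(rgb_to_cmyk(255, 0, 0))   # (0.0, 1.0, 1.0, 0.0): red = magenta + yellow
```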
Having said all of that, since photosites have an actual size, when someone says "this full-frame sensor in a Canon R5 has 44.7 *MPs*", they're really talking about photosites, and the number of photosites on the sensor and pixels on the screen are never the same number. What they generally mean is that the largest image the camera can produce, viewed onscreen, will be close to 44.7MP. The numbers are so large, however, that they're always rounded off. The Kumoba Pond image wasn't really 0.33MP – it was 0.3335MP.
Now, take another full-frame camera: the Canon R3 has 24 *MPs* (photosites) on a sensor of the same size.
The photosites must be larger on the R3, right?
Of course. We can slice your pizza into eight slices, but each slice will be smaller than it would be had we sliced it into six. But no less delicious. And no less filling. However, a smaller pizza sliced into eight slices is not going to feed as many hungry people, is it?
There’s a bit of a conundrum: sensors with a higher number of photosites are said to provide more detail. However, a smaller sensor with the same large number of photosites probably is incapable of producing images with the same amount of detail.
We humans like to use numbers to decide which is better. While it’s easy to assume that a higher megapixel count directly relates to better image quality, this isn’t always the case. As mentioned earlier, a larger sensor can capture more light and has a more significant impact on image quality than the pixel count alone. A high megapixel count on a small sensor can result in noisy, less precise images.
We have yet to mention other non-trivial factors such as lens quality and camera body.
While you shouldn’t just dismiss megapixel counts out-of-hand, it’s just one element to consider when evaluating cameras.
AAMOF, we’re prepared to go a little farther: even the term “Image quality” itself is a little misleading. What kind of image? Print image? Screen image? What size screen? We don’t mean to rain on the parade, but the size of the sensor, the lens, the skill of the photographer – and of course, the big Kahuna: luck – being Johnny-on-the-spot with a camera and a charged battery – likely have much more to do with image quality. (Caveat: “Unless you’re printing and printing cropped images”.) See Megapixels and Printing:
CCD and CMOS Sensors: How They Work: (Simplified as little as possible.)
- As with film, light passes through the camera’s lens and strikes the sensor. (Again, a necessary simplification – but one that does us no harm and is not inaccurate – the film is struck in the case of film. Otherwise, we’d be here ’til Christmas. Okay, we’re done with film.)
- Sensors contain millions of light receptors – little cavities – called photosites. (Photosites are sometimes called sensels – sensor elements. They are Not pixels. Jesus, that's annoying. Please don't call 'em pixels, or you are contributing to the confusion. Pixels don't come into the picture (hah!) until after the camera's image processor gets involved. To un-confuse yourself, read our little rant on why pixels aren't photosites. The back button will drop you right back here.) The photosites "catch" the available light, measure its intensity and colour, and turn it into an electrical charge. (By the way, no matter the size of the photosites or sensels, or how many there happen to be, it's the overall size of the sensor that matters – not the number of photosites. It's like a pizza: you can cut it into six pieces, or eight. The amount and quality of the pizza is unchanged. Weigh it. Same thing with the amount and intensity of light captured. Manufacturers are cramming more and more (smaller) photosites onto sensors, but we remain unconvinced that image quality increases along with the photosite count, which is often incorrectly quoted as MPs. Full disclosure: that's subjective – but we can't see it. We can see a difference in images when the sensor size is larger.)
- Each photosite has either a red, a green, or a blue colour filter through which the light passes. This enables the photosite to capture not just the intensity of the light, but the intensity of that colour of light (RGB). These colour filters are found in an array that overlays the photosites on the sensor, with each photosite getting its own colour. They look a little like an unsolved Rubik's Cube, if the boxes were only red, green, and blue. (The most common filter array is called a Bayer array. Fun fact: there are twice as many green-filtered photosites on the Bayer array as red or blue ones. It turns out that our eyes are more sensitive to greens than to reds or blues. We had no idea, either. Colour us surprised. Hah! See what we did there?) The point is, the photosites record the intensity of both light and colour, and turn that into an electrical charge. The more intense the light and colour, the greater the charge. (We should mention that some sensors make use of a microlens array as well. That sits over the Bayer array and directs the light straight into the photosite, like a funnel, so the available light is not "spilled", or wasted on the sides of the photosite.)
- But, an electrical charge on a photosite on a sensor is all you have. There’s no image, yet.
- Here is where there's an important difference between the two kinds of sensors. On a CCD, the charge is still analog; it won't become digital until it leaves the sensor. The charge gets passed from one photosite to another – hence the "coupled" in the name. (Canon uses the metaphor of a bucket brigade to describe this; we like to think of walking from one car of a train to the next.) All of the data from the photosites is collected at the edge of the sensor by a device called a serial shift register. From there, the electrical charge is amplified (the signal goes through an amplifier, which adds noise – a kind of visual distortion: bright little dots that just show up, apropos of nothing, usually in low light) and then converted to digital by an analog-to-digital converter (ADC), which adds a bit more noise. Up until that point, the CCD is all analog. So the path for CCD sensors is: photosite to photosite to serial shift register to amplifier to ADC to the camera's image processor, which takes the digital data and creates an image out of pixels. (That's where both the actual image and the pixels that make it up come into the picture. Hah!) The good news is that the CCD method has less processing – fewer amplifiers and ADCs produce lower noise and greater dynamic range – which many photogs believe results in a superior image.
- The CMOS sensors work in a slightly different way: they convert the electrical signal to digital much earlier in the process – often at the photosite stage. We said they're basically microprocessors, right? (Well, semiconductors, at any rate – but improved semiconductors that can do a lot more than just catch light and colour.) Each photosite has its own amplifier, and each column of photosites on the sensor's grid has an ADC. (Sometimes each individual photosite has its own ADC!) So there's a lot more processing earlier in the process – more amplifiers and ADCs at the photosite level – and, as we've seen, that adds noise. The good news is the path is shorter, and the processing of the photosites' charges happens more quickly and efficiently. (There's a toy sketch of this difference right after this list.)
- This is where the twain part. CCDs use more electricity, so you get fewer shots per battery charge, and the inefficiency is noticeable. On the other hand, because of less processing – fewer ADCs and amplifiers – you get greater dynamic range and better low-light performance with less noise.
- The CMOS sensors were largely designed to address this inefficiency. Instead of just catching the intensity of light and colour at the photosite level, they do the amplification there too. Some CMOS sensors digitize the signal at the end of the column, and some digitize right at the photosite level. This not only saves energy but speeds up the whole process, allowing the data to be passed much more quickly to the camera's image processor. Thus CMOS sensors are noticeably nimbler, allowing fast bursts of shots. The tradeoff is that more amplifiers and ADCs add more noise. Either way, the data gets to the camera's image processor and is turned into a digital image.
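Here's the toy sketch we promised: a cartoon in Python of why more amplifiers historically meant more visible noise. It is emphatically not electronics – just the statistics of one shared offset versus many mismatched ones:

```python
import random

def ccd_readout(charges):
    """CCD cartoon: every charge is shifted, bucket-brigade style, to ONE
    shared amplifier/ADC, so any offset error is the same for all pixels
    (and therefore easy to calibrate away)."""
    shared_offset = random.gauss(0, 0.01)       # the lone amplifier's offset
    return [round((q + shared_offset) * 255) for q in charges]

def cmos_readout(charges):
    """CMOS cartoon: each photosite has its own amplifier, and the tiny
    mismatches between them show up as pixel-to-pixel scatter -- the
    'fixed-pattern' noise early CMOS sensors were known for."""
    offsets = [random.gauss(0, 0.01) for _ in charges]  # per-pixel amplifiers
    return [round((q + o) * 255) for q, o in zip(charges, offsets)]

charges = [0.5] * 8            # a flat grey patch hitting eight photosites
print("CCD :", ccd_readout(charges))    # all values shift together
print("CMOS:", cmos_readout(charges))   # values scatter pixel to pixel
```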
Okay. We now have a digital image composed of pixels. Here, once again, the two separate sensor types converge and from this point on, the process is once again largely the same. But!
Now – a decision has to be made: What do we do with this digital image? Well, we save it. It’s a file, and it goes to the memory card. However, we have to decide what kind of file we want. (This is an absolutely necessary digression. We’re going to devote a lot more time and ink to this later because this is Big Stuff – but for now, we’re going to keep things as simple as possible without making things too simple. Goldilocks. “That’s what we do!”)
Image File Types
The image produced by the camera’s image processor can be saved in three basic types of image file:
- A JPEG file. (First developed in 1992, courtesy of the Joint Photographic Experts Group – that's why we have the mosaic image at the top of this article. This file type compresses files largely by tossing away data; it's often called "lossy". JPEGs are certainly smaller files, taking up less space on the memory card, your hard drive, and your website. They work straight out of the camera, too, with no processing required to serve them up just about anywhere you like.)
- A RAW file. (This saves all the data it’s possible to collect. They are much larger files, but as a result, the post processing possibilities are really only restricted by the skill of the person using the software. They must be processed into a different file to be seen outside of the camera. This is not available on all cameras.)
- Some combination of the two.
That’s how the two types of sensors work. Now we have to talk about File Types.
(We truly wish someone had told us this before – but for reasons that have never been made clear it takes some digging to find this:)
A Big – And Consequential – Difference – File Types RAW and JPEG
"Cheap cameras and good cameras are distinguished only incidentally by price." – We, the Loons.
What really separates them – and we found two important distinctions – is that good cameras allow the photographer:
- Access to the exposure triangle (often in a variety of modes, which may expose all or just some of its three elements: ISO – how sensitive the sensor is to light, on an internationally agreed numeric scale; Aperture – the size of the lens opening, which controls depth of field, or how much of the image is in focus; and Shutter Speed – how long the shutter stays open, measured in seconds or fractions thereof. There's a small worked example of the triangle's math a few lines below.) And …
- The ability to choose to save image files in either the JPEG or the RAW file format.
Cheap cameras – whatever their price point – do not. They may give you one or the other, but not both. (Correct us if we’re wrong. This is important.)
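And here's that worked example. Exposure value (EV) is the standard way to show that different triangle settings can admit the same amount of light: EV = log2(N²/t), where N is the f-number and t the shutter time in seconds (this sketch fixes ISO at 100):

```python
from math import log2

def exposure_value(aperture_f: float, shutter_s: float) -> float:
    """Standard exposure value at ISO 100: EV = log2(N^2 / t).
    Equal EV means equal light reaching the sensor."""
    return log2(aperture_f ** 2 / shutter_s)

# Two different settings that admit (nearly) the same light:
print(exposure_value(8.0, 1/125))   # f/8   at 1/125s -> ~12.97
print(exposure_value(5.6, 1/250))   # f/5.6 at 1/250s -> ~12.94
# One stop wider, one stop faster: same exposure, to rounding.
```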
There’s no way we’re gonna avoid getting all in the weeds, here. Sorry about that. We would if we could, but we can’t so we won’t.
JPEG files are easy, convenient, and small. Those are all good things. Plug your camera into your computer, pull your JPEG files off, and they're ready to serve damn near anywhere you want. Email? Great. Print off a lost-dog poster? Sure. Social network posts? You bet. And they're small – they don't take up much room on your camera's memory card or your hard drive. But you lose a lot of information. Fun fact: JPEGs discard more information than they keep, and you're not involved in the decision-making process. On the way to becoming JPEGs, the images are processed in-camera: saturation, sharpening, and noise reduction are applied, and the RGB colour value for each pixel is calculated in a (completely brilliant) process called "demosaicing".
But it also limits your ability to process the images in post-production. Once you're a french fry, you're never truly a potato again. You won't get all of that goodness back. Yeah, you can process the images in a limited way, but in a lot of ways the die is cast.
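If you'd like to watch the potato become a french fry, Pillow can re-save any photo at decreasing JPEG quality settings and show you the shrinking file. (The filenames here are placeholders; point it at anything you've shot.)

```python
import os
from PIL import Image

img = Image.open("potato.jpg").convert("RGB")   # placeholder: any photo
for quality in (95, 75, 30):
    out = f"fry_q{quality}.jpg"
    img.save(out, "JPEG", quality=quality)      # lower quality = more tossed data
    print(out, os.path.getsize(out), "bytes")
# The discarded data never comes back -- re-saving at quality 95 won't help.
```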
RAW files are none of those good things.
- Easy? Well, no. They are a little confusing – do a Google search for “how do I open a RAW”… and Google’s autofill will complete your entry for you.
- Convenient? Pick another adjective. The only way to view a RAW file is either in the camera or by opening it on your computer with post-processing software. There are also many competing file types. (Adobe created DNG – Digital Negative – as an openly documented RAW container; the camera makers mostly use their own proprietary formats. Sony saves 'em as ARW or SR2, Canon as CRW, CR2, or CR3 – we can do this all day.) But all of these contain all of the information generated by the camera. Nothing is discarded.
- Small? See where we're going with this? Nothing is discarded; as a result, they are much larger files, taking up more space on your memory card and hard drive. On the other hand, you have access to all of the data.
- Importantly: Unlike the JPEG, the RAW file cannot be viewed outside the camera without some kind of processing software. So there is a learning curve. So what? You’re here, aincha? Know how to use Google, doncha? Damn straight you do.
In addition to simply tossing data, part of this JPEG compression includes image processing inside the camera – adjustment of things like white balance, sharpening, noise reduction, and the like.
The resulting file is a complete colour digital image – ready to be served up just about anywhere you’d want to serve it up; Email, text, Instagram, Facebook, print it on your Christmas card or lost dog or cat poster – anywhere digital images end up. So, it’s super easy to use and convenient for most people. No shade: Plenty of professional photographers shoot in both file types – but they have a reason, and they can justify their choice. And – a key distinction – they are aware that they have a choice.
We want to be clear – a lot of great images were shot as JPEGs. And JPEGs can still be manipulated by post-production software; most can benefit from that. But the fact is there's just less data to work with. (In practice, that means a lot of potentially great images that can't be salvaged – a smidge too over- or under-exposed, or possessing some small flaw – could have been fixed had they been saved as RAW files in the first place. Not always, but more often than not we were astounded at the difference – especially how our low-light images magically re-appeared!)
Only As Much History As Necessary: (A Smidge more, perhaps.)
In 1992, as digitized images were going mainstream, the Joint Photographic Experts Group created a standard format to be used on all platforms all across the World Wide Web. Guess what they called it? You bet they did. The JPEG was born.
We think our first Macintosh – an LC III – had a 250 MB hard drive. (We had used a DOS-based Windows PC prior to that, but all we were doing was Lotus 1-2-3 spreadsheets and WordPerfect files. Media kind of sucked on Windows in the early '90s.)
We remember thinking that a 250 MB hard drive was enormous. We spent too much money that we didn't have on an external Bernoulli drive and two 50MB Bernoulli discs to back up our data, and we thought, "Man, we are set for Life!" Is that adorable, or what? "Future-proof?" <wheezing> "Stop! We gotta pee!"
In those days, file size was an issue. We told you about the pre-Mosaic days of completely doing without images, right? The browsers before Mosaic were text-only; Mosaic gave us inline images, but they took forever to load. Nobody complained. Shit, we waited for WS_FTP or Gopher sessions. Throughput was an issue. People used modems. (Ask your Grandpa.) Suffice it to say that smaller file size was better; image quality was largely an afterthought. To make image files as petite as possible, JPEGs were designed not only to compress the files, but to be lossy. "Lossy" is a euphemism meaning "throw out a lot of non-trivial data in order to minimize file size."
When we started working with digital images, there were no digital cameras. Above and on the left is a scanner. We used those devices to turn printed film (and sometimes slides, with a special adaptor) into digital image files – JPEGs. Our friend (and professional photographer) Dave L. showed us how to scan photographic prints, pull them into Photoshop, and save them as JPEGs. (We taught him to write HTML – rudimentary as it was in those days. It was a pretty fair exchange. We think it was harder to get him to use FTP and Telnet to put his photos on the server than to write the actual HTML – and to teach him that Unix treats upper- and lower-case characters as different things. We had some memorable exchanges of opinion about image file sizes. He came to see things our way.) Then, as now, there was a little slider with high quality (but large file size) on the right and small file size (but low quality) on the left, and the trick was to strike a balance: the highest image quality at the smallest acceptable file size. That was a lot of fun. It's less so, now. Be sure to check the image preview option.
A cheap camera, price notwithstanding, will only allow the production of JPEG files (or some proprietary version thereof), and it won't allow access to the exposure triangle, either. Those are both Big Deals. Our beloved old war horse, the Olympus 1050SW, doesn't talk about this. It's as if there's no such thing. It just doesn't come up. Not so much as a mention in the manual. RAW files might as well be unicorns.
We know more than a few photographers who started out just posting images on Facebook, but then decided to release a product. A calendar for a Christmas gift. Food photographs as a favour for a sibling's cookbook. It starts simply.
The World Wide Web and social media started for us in the early '90s but didn't truly explode until the 2000s; the early digital cameras were far too expensive and impractical for mainstream use. In the mid-to-late Oughts, digital cameras began a similar rapid expansion.
A Smidge of Math
MPs (megapixels – one megapixel is simply one million pixels; it's just easier to count and say) are often used as a metric to justify why this or that camera costs this or that amount. All of the images on this page came from a camera of either 10MP (our Olympus) or 12MP (our smartphone). (Well, that's not entirely true. The early digital Kodak image isn't ours.) If you can see a difference in our shots – and are able to legitimately trace that difference back to MPs – we'll eat our crazy camping hat.
Well, you might say, "Loons, that's only a difference of two MPs", and while we didn't do the math ourselves, we accept your figures. (Did you do that in your head? Respect.) Some of the most celebrated "professional" cameras of the early 2000s (for printed images) had only 5 or 6 megapixels. All modern digital cameras have nearly twice that. So it's not that big a deal in 2024, no matter what anyone tells you. We'll talk some more about MPs and printing later, but that's not our focus in this article.
One can only imagine your surprise when we share the important fact that there’s a lot of bad or just plain wrong information on the Internet. A lot of it is on “reputable” websites. We refuse to speculate on possible motivations, but it’s always a mistake to discount the profit motive.
Here’s another little secret about human nature: We really love it when we can reduce somewhat nebulous stuff to numbers, and then say, “More is more”. So it’s better, right? Well, sometimes. But…
Megapixels and Printing: A Perfunctory Look
Almost all cameras manufactured after 2005 have plenty of MPs for professional printing purposes. Printing is really beyond the scope of this article – mostly because we haven't done all that much of it. You need to know a lot about dots per inch, or pixels per inch (DPI, or PPI). In printing (or extreme cropping, which we avoid), megapixels matter much more than in on-screen stuff. The fact is, if it weren't to keep peace in the house, we wouldn't even own the little personal printer. There's a great Japanese word meaning "pain in the ass", but it's all in the delivery: "Mendock-sai". (Eye roll optional, but you might as well.)
In any event, you’re always better off going to a commercial printer. (You’ll thank us. It’s cheaper and far less aggravation in the long run. In the 90’s, we worked in a facility that had a large in-house commercial printer, and holy Mendock-sai, Batman!)
Here’s what we do know:
Printed images use pixel density. It's pretty much what you'd guess: the number of dots per linear inch. Resolution is expressed in either dpi (dots per inch, for printing) or ppi (pixels per inch, for screens). Consider an image at 300 ppi – this is the figure you're most likely to hear when folks talk about resolution and printed images – it means the image has 300 pixels per inch. That's the typical figure for snapshot-sized prints.
The final size of your print depends on the resolution you choose. If an image is 4500×3000 pixels (13.5MP – we needed a pencil for that), it will print at 15×10 inches if you set the resolution to 300 dpi (300 pixels × 15 = 4500; 300 × 10 = 3000; 4500 × 3000 = 13,500,000), but it will also yield an acceptable print at 62.5 × 41.7 inches at 72 dpi, because your audience will be standing farther away. (We have an article that contains the formula for figuring out the dpi of your image, the size it should be, and the viewing distance. Size of print, dots per inch, and viewing distance are relative. Obviously, nobody looks at a billboard from 2 meters. We'll likely post that today or tomorrow. Keep an eye out for it.)
Keep in mind that you are only changing the size of the print; you are not resizing your image file. The existing pixels (well, they're dots, now) are simply distributed differently across the physical space.
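That arithmetic, as a sketch:

```python
def print_size_inches(width_px: int, height_px: int, dpi: int) -> tuple:
    """Physical print size is just pixels divided by dots-per-inch."""
    return (width_px / dpi, height_px / dpi)

print(print_size_inches(4500, 3000, 300))  # (15.0, 10.0)   -- a crisp 15x10"
print(print_size_inches(4500, 3000, 72))   # (62.5, ~41.7)  -- billboard territory
```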
(Our 2009 Olympus 1050SW has a CCD sensor: 10MP, with a sensor size of 1/2.33″, or 6.08 × 4.56 mm. Our iPhone has a CMOS sensor – Apple is a little coy about the size; we think it's also 1/2.33″ (6.08 × 4.56 mm), though we could be off a bit – but it definitely has 12MP. Nearly all of the images here were taken with one or the other.)
The point is, if your camera was manufactured in this century, you probably don’t need to worry about MP count too much. The boys in Sales love it, but we reserve the right to our skepticism. Don’t take our word for it. Check the images. That’s all we have to say about that. (Apologies to Forrest Gump.) (We also have an article that talks about cameras from the second half of the 2000’s that have about 10MP and do everything you want, and they’re incredible cameras. That’s our new jam. Actually, our new jam is A Man Of Constant Sorrow By Colin D. Cochrane and The Whole Kerfuffle. Go ahead and leave a comment, like and subscribe – if in fact you do.)
Conclusion
We hope we have cleared up a few misconceptions about CCD and CMOS sensors.
Low-light, long-exposure shots with a tripod are vastly easier to get with a CCD sensor in your DSLR. You'll give up battery life and nimbleness when it comes to shooting bursts. (The one exception that springs to mind is the Nikon D700, which has a full-frame CMOS sensor and can be had for quite reasonable prices.)
Sharp, in-focus shots of sports, wildlife, birds in flight, and the like will be much more achievable with CMOS-equipped cameras. The autofocus on some of these newer cameras is unbelievable. As for low-light performance: blue and golden hours should be fine, but full-moon or northern-lights (aurora borealis) shots will challenge even the most skilled photographer.
While we’ve tried not to generalize, a certain amount is unavoidable.
Important Stuff.
There are certain key things to remember. CCD sensors are still used in situations where great images are prioritized. Photographs from satellites in space, medical applications, microscopic photography – and the like – still prize them.
The vast majority of new cameras produced in the 2020s contain CMOS sensors. They’re less expensive to produce, require less power, and can also read off electrical charges at a much faster rate – without which a lot of high-speed sequences would be impossible. What’s more, CMOS sensors share the same basic structure as computer microprocessors, which allows for additional computational functions such as noise reduction and image processing right on the sensor.
On the other hand, for outstanding images in low-light conditions – think astrophotography, night skies, and so on – a CCD camera is not to be dismissed out of hand. That's why they cost the big bucks. Or not. Look around, because now you know, and now is a great time to be looking. If you find the right camera – new or new-to-you – you'll be able to make a decision you can feel great about. Check the actuations (how many times the shutter has been used – most quality cameras have 200,000 clicks in them, so if one says 50,000 or so, you're golden. GIMP will show you: File > Properties > Advanced Metadata, then check the EXIF. Also ask if it's been dropped or repaired.)
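If you'd rather dig through metadata in code than in GIMP, Pillow will dump the standard EXIF tags. (One caveat: the shutter count itself usually lives in maker-specific notes rather than standard EXIF, so whether it shows up depends on the brand. The filename is a placeholder.)

```python
from PIL import Image, ExifTags

img = Image.open("candidate_camera_shot.jpg")   # placeholder: a seller's sample
exif = img.getexif()
for tag_id, value in exif.items():
    name = ExifTags.TAGS.get(tag_id, tag_id)    # translate numeric tag IDs
    print(f"{name}: {value}")
# Look for the maker's shutter-count field; many brands bury it in MakerNote.
```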
Thanks for reading, and if you found this helpful, please link to us, or tell a friend. Get Out There™ and we hope to see you.