This FAQ is the product of my research and individual experiences. All contents are copyright © 2001-2006 Ronald Parr. While this is a sincere effort to give you the best and most accurate information possible, some biases and inaccuracies may slip in. I make no apologies for this. Your experiences may differ, your mileage may vary, etc.
The FAQ is organized into 12 sections. Please keep in mind that the section topics are not completely disjoint, so if you are having trouble finding something, be sure to check related sections too.
I'm Ron Parr. You can email me at "ronparr - at - yahoo.com". (You'll need to convert that to a proper email address.) Feel free to email corrections or suggestions. I probably won't be able to answer individual questions though. That would be a full time job!
So, you want a resumé? If only people approached everything else they read on the Internet with such skepticism! I'm a computer science professor, but digital photography is just a hobby. I won't pretend to be an authority on most of these topics; they're just things I've picked up over the past few years.
My interest in photography began when I was a teenager, but I did not pursue it as intensely as I might have liked because of the time investment and slow learning cycle associated with traditional chemical photography. When digital photography began to mature, I immediately recognized that it would remove the main obstacles to my pursuit of photography when I was younger, and I followed the field with interest. In 1999, I concluded that the new generation of 2 MP digital cameras were at a sufficient level of maturity to yield high quality images and give the photographer enough control to make things interesting.
My first digital camera was a Nikon CoolPix 950. Since then, I have owned a Sony DSC-S85, Canon D30, Canon D60, Canon S400, Canon 20D, and Canon SD500.
As a regular visitor on dpreview and (digital) camera expert among my circle of friends and family, I found that the same questions kept coming up over and over again. I wasn't aware of a single resource that answered many of these questions, so I started creating this FAQ. It has taken a good bit of my spare time, but I find that it has served a useful purpose of helping me organize both my thoughts and my links. Hopefully, it will be useful to you too.
In the most general sense, almost all of the knowledge here comes from things I've learned from scouring the web for the past few years. The entire global community of photography aficionados deserves some credit and thanks for this. The smaller communities at dpreview played a huge role in this too, first the Nikon talk forum, then the printer forum and most recently the Sony talk forum.
Many individual members of the Sony talk forum have made specific points or suggestions that have found their way into this FAQ. Shay and Ulysses come to mind immediately. The others whom I am forgetting to mention also get my thanks - and apologies.
Part of what makes the FAQ interesting for me and useful for others is, I think, that it's a living document. It's constantly evolving and contains many links that people can follow to get more information. I'm afraid that it would lose some of this appeal in printed form.
I no longer own a Sony camera and many of the questions and answers in that section were becoming outdated. I removed a few very outdated parts, then generalized the remaining questions and transferred them to the more general parts of the FAQ.
Here are some general guidelines for taking good portraits:
OK, nobody asked this one, but I couldn't resist:
I'm not an attorney and would encourage you to consult one if this is a serious concern for you. The following site does attempt to address some of these questions (be sure to check the links at the bottom too): Travel Photography and The Law.
Some other references:
It sounds like what you want is a neutral density filter. This makes everything a little darker without affecting the colors or the polarization of the light.
Try a polarizing filter, also called a polarizer. If you have an SLR which uses phase detection for autofocus (most do), then you'll need to get a circular polarizer to avoid conflicts with your autofocus mechanism. I haven't tried it myself, but a circular polarizer probably is not necessary with non-SLR digital cameras.
Try a polarizing filter, which can also be used to reduce glare. See above.
A very large number of them (but not all) can be downloaded from Henry's. Henry's seems to have stopped updating this page. Your manufacturer's web page is another good resource.
SLRs typically use phase detection autofocus. This Scientific American article, Focusing in a Flash, makes an attempt to describe phase detection AF, but isn't all that clear. The basic idea is as follows: The AF system grabs strips of image from opposite sides of the lens that nevertheless project onto the same area in the focal plane. (This is typically done by using a half silvered reflex mirror and some optics behind the mirror.) When the overall image is out of focus, these two strips will be shifted in opposite directions, much like an old split prism viewfinder. When the overall image is in focus, the image captured by the two strips will be identical. The AF mechanism has two (or more) separate sensors corresponding to different parts of the lens. The lens is adjusted until these two images are the same, and the overall image is then presumed to be in focus.
Non SLR digital cameras typically use contrast detection AF. While typically slower and less accurate than phase detection, contrast detection is cheaper and simpler. It requires no additional lenses and uses the main sensor only. Contrast detection adjusts the lens until it finds the position that maximizes the contrast measured in a (weighted) region of the main sensor, under the assumption that maximum contrast implies sharpest focus. Contrast detection will hunt around until it finds the point that maximizes contrast.
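If it helps to see the idea in code, here is a toy Python simulation of contrast detection AF. The blur model and all of the numbers are made up for illustration; a real camera works on live sensor data and uses smarter search strategies than a brute-force sweep.

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.random((64, 64))                  # stand-in for the metered region

def capture(focus_pos, true_focus=37):
    """Simulate a capture: the farther from true focus, the more we blur."""
    blur = abs(focus_pos - true_focus)
    if blur == 0:
        return scene
    kernel = np.ones(2 * blur + 1) / (2 * blur + 1)
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, scene)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, rows)

def contrast(img):
    return float(np.var(img))                 # sharper image -> higher variance

# Contrast detection AF: step the lens through candidate positions, keep the best.
best = max(range(20, 55), key=lambda pos: contrast(capture(pos)))
print("focus position with maximum contrast:", best)   # lands on 37
```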
With adequate light and a well calibrated lens, a phase detection system can, in principle, compare the two images and estimate the direction and amount to move the lens mechanism to achieve accurate focus. In comparison, contrast detection has no way of knowing the maximum achievable contrast in the scene a priori, or of determining the direction in which this is achieved. This explains the greater potential of phase detection for fast focusing. (Note that I have described these different mechanisms in general terms. Different manufacturers will undoubtedly have embellishments and improvements on this.)

The exposure for a shot determines the amount of light that strikes the film or sensor. There are two variables that control this, the aperture and the shutter speed. These adjustments are required because no film or electronic sensor has yet been developed that can capture the full range of light intensities to which the eye responds. Of course, our eyes have help too. We have pupils which constrict in bright light and dilate in low light.
So, why don't our eyes ever expose things incorrectly? Our pupils tend to adjust to whatever we're focusing on, so we automatically compensate as our gaze moves. (Obviously, a camera can't do this since it must use a single exposure for the entire scene.) However, it is possible to get your eyes to expose things incorrectly: Have one of your friends stand with his back to a very brightly illuminated window in an otherwise dark room. Take a few steps back and try to concentrate on your friend's face. It should look dark to you and you may have trouble making out his or her facial expressions. The reason is that your eye is being tricked by the bright background.
Some experienced photographers can judge exposure accurately simply by looking at the scene. In fact, in the days before light meters, this was the only way to do it. Handheld light meters were the next step, allowing accurate measurements of the light levels for an entire scene or for individual subjects. The metered light level, measured in EV, could then be matched against an exposure table to find aperture and shutter speeds appropriate for the shot.
Modern cameras have light meters built in to the camera. They can automatically select both aperture and shutter speed for you, or you can pick one and let the camera pick the other. Most also offer some kind of fully manual mode, where the exposure meter can still be used to provide guidance on how the camera estimates the scene should be exposed.
Your camera has a built in light meter, which it uses to measure the amount of light in the scene at various places. The position of the light meter will vary from model to model. Some popular positions in SLRs are behind the (partially reflective) reflex mirror, or in the pentaprism. On some digital cameras, the main sensor may serve double duty as a light meter.
Your camera will determine the light level in the scene and then use an electronic version of an exposure table to pick the appropriate shutter speed and aperture. You may notice that there are multiple combinations that will be suitable for any EV. To the extent that it is possible, cameras will typically try to pick shutter speeds that are compatible with handheld shooting. The more advanced ones will even take the focal length of your lens into account and try to pick higher shutter speeds if needed to reduce the effects of camera movement.
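If you like to see the numbers, here is a tiny Python sketch of the relationship (EV = log2(N²/t) at a fixed ISO, for aperture N and shutter time t in seconds), showing several combinations that land on essentially the same exposure value. The specific combinations are just examples.

```python
import math

def exposure_value(f_number, shutter_time):
    # Standard exposure value formula at a fixed ISO.
    return math.log2(f_number ** 2 / shutter_time)

combos = [(2.8, 1/500), (4.0, 1/250), (5.6, 1/125), (8.0, 1/60)]
for n, t in combos:
    print(f"f/{n} at 1/{round(1/t)} s -> EV {exposure_value(n, t):.1f}")
# Each line prints roughly EV 12: the camera is free to pick any of them.
```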
One thing you should realize now is that with multiple possible combinations of shutter speed and aperture for any exposure level, a one-size-fits-all solution that always chooses one of these many possible combinations can't be right for every situation. This is why more advanced photographers tend to use aperture and shutter priority modes instead of fully automatic.
Aperture priority mode is similar to fully automatic mode, except that you pick the aperture value. Metering works the same way, but with the aperture fixed there is exactly one shutter speed that will provide the correct exposure in the exposure table. This is what the camera picks for you.
There are many reasons for using aperture priority, including:
Shutter priority mode is similar to fully automatic mode, except that you pick the shutter speed. Metering works the same way, but with the shutter speed fixed there is exactly one aperture that will provide the correct exposure in the exposure table. This is what the camera picks for you.
There are many reasons for using shutter priority, including:
This depends upon a number of factors including the focal length, the steadiness of your hands, and vibrations caused by the mechanical parts of your camera, e.g., mirror slap in SLRs. If your lens has a (35 mm equivalent) focal length of X mm, then a good rule of thumb is to shoot at 1/X or faster. Small movements of the camera shift the image more at long focal lengths.
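As a quick worked example (the 1.6 crop factor is just an assumption for illustration, as on many APS-C SLRs; the rule uses the 35mm-equivalent focal length):

```python
focal_length_mm = 125          # actual lens focal length
crop_factor = 1.6              # sensor crop factor (assumed for this example)
equivalent = focal_length_mm * crop_factor
print(f"Aim for 1/{round(equivalent)} s or faster")   # -> 1/200 s or faster
```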
Most cameras will have some subset of the following metering modes: spot, center weighted average (sometimes just called "average"), and multi-segment (sometimes called "evaluative").
Spot metering is the easiest to understand: The camera meters only a small area in the center of the frame. This mode is useful if there is a particular area of the frame that you must expose properly, even if it comes at the expense of overexposing or underexposing the rest of the image. Spot metering can be tricky to use properly. If the metered area is quite small, tiny camera movements can have dramatic effects on the metering, making it difficult to get the desired exposure.
Center weighted average metering takes an average over the entire scene, where, as the name indicates, the average is weighted more heavily towards the center. This implicitly makes the assumption that the center is the most important part of the image, but that you don't want to completely ignore the edges of the image either. If implemented properly, this metering mode usually works pretty well. Moreover, with some practice, it will be relatively easy to predict when it will fail and to compensate.
Evaluative metering is the most complex metering method. It samples multiple areas of the frame and tries to come up with a good exposure value that takes all of these areas into account. This can be implemented in varying degrees of sophistication. For example, one implementation might notice two dark blobs with a bright blob in the center, conclude that you are trying to take a picture of two people with backlighting, and adjust the exposure for the people and not the bright background. Such methods can seem to work miraculously when implemented well. The only downside is that they can sometimes outsmart the photographer, making some incorrect assumptions about the effect the photographer is trying to achieve. Thus, some photographers prefer center weighted averaging because they find it more predictable.
This is a way of telling your camera to expose the scene in a slightly different manner from the way the scene was metered. Compensation is usually expressed in terms of the number of stops of compensation and most cameras have the ability to compensate at least between -2 and +2.
Here's an example of how this works: Suppose you dial in +1 compensation. This means that you want the scene to be one stop brighter, which will require a wider aperture, longer exposure, or some combination of these two. If you are using aperture priority mode, your camera will keep the same aperture, but double the exposure time (half the shutter speed). Dialing in a negative value will give you darker images and shorter exposure times (and/or narrower apertures).
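Here is the same arithmetic as a tiny sketch; the 1/125 metered shutter speed is just an example:

```python
def compensated_shutter(metered_time, stops):
    # Each stop of compensation doubles or halves the exposure time
    # (in aperture priority, where the aperture stays fixed).
    return metered_time * (2 ** stops)

print(compensated_shutter(1/125, +1))   # 0.016 s (1/62.5): twice as long, one stop brighter
print(compensated_shutter(1/125, -1))   # 0.004 s (1/250): half as long, one stop darker
```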
It's important to understand that exposure compensation does not change the characteristics of your film or sensor; it's just a way of dealing with situations where the metered exposure isn't what you want.
An SLR is a Single Lens Reflex camera. The single lens part refers to the fact that it uses a single lens both for capturing images and for the viewfinder display. The reflex part refers to the use of some mechanism for reflecting light towards the viewfinder.
An SLR is a preferred style of camera because it allows the photographer to see exactly what will be captured by the film or sensor without any parallax or distortion. For this reason, high end features have been incorporated into these cameras and the style of camera is sometimes confused with the high end features that go along with it.
Some common misconceptions about SLRs:
For timer shots, many cameras focus when you press the shutter button, not when the timer goes off. Thus, the camera focuses on whatever is in the center of the frame when you press the shutter and will fail to focus on you after you've moved into position in the center of the frame. The workaround is to point the camera at something else that is roughly the same distance away as where you plan to be after you move into position. Press the shutter, then quickly adjust the composition and move into place.
In general, high ISO is used in conditions where it is not possible to achieve a fast enough shutter speed with low ISO. Typically, the reason for desiring a faster shutter speed is to avoid blur from motion - either from camera shake or subject motion. Situations that might require high ISO would include:
Another reason for increasing ISO is to extend flash range. The higher sensitivity will allow a less powerful flash to cover longer distances.
Shop around. I like using pricegrabber to get prices from a large number of retailers at once. Pay careful attention to the merchant ratings column! Also check reselleratings.com.
FWIW, I've noticed that Canoga Camera tends to have very competitive prices on Canon lenses. B & H Photo seems to be well liked by camera aficionados for their good prices and good return policies. Note that some have complained about uneven service and poor packaging, so be sure to read the most recent merchant ratings.
You should definitely read the Camera Confidential PC World article on buying a camera from discount online sellers.
Check their ratings at pricegrabber. You'll find that the least expensive merchants often do one or more of the following tricks:
Check out this posting from a guy who took pictures of some of the Brooklyn store fronts associated with some of the low cost merchants. It does not inspire confidence.
Not necessarily. A person who has taken some good photos has demonstrated some skill with photography and most likely knows more than a person randomly selected off the street. However, a good photographer will figure out how to take some good photographs with almost any gear. Consumers should be interested in cameras that make it easy for them to take good photos in the widest variety of situations. Just because a person can take good photos doesn't mean that he has insight into this question. Indeed, he may lack insight because he has become so skilled at working around limitations that he has forgotten what issues concern a beginner.
People sometimes comment that only serious photographers need especially good cameras. This was certainly true when "good" meant rugged. If good means that it can be used in a wide variety of situations without heroic efforts or special skill, then beginners need good cameras even more than pros.
Don't be silly. First off, the camera market is extremely competitive and with the exception of esoteric, high end gear, it's not reasonable to expect that any one manufacturer will maintain a consistent edge in consumer gear for many years without being challenged in some way. People who say, "Always buy brand X," are generally people who don't know what they're talking about but bully others into following their lead because they're good at sounding very confident when they speak.
Second, digital cameras are a new ballgame in countless ways. Critical parts of the camera, such as the sensor, typically are not made by the same company that puts its name on the camera. For example, Sony makes the CCDs inside the cameras of many competing manufacturers. The cameras themselves are often contract manufactured (Sanyo is a big player in this market), so cameras with different labels may be coming out of the same factory. Finally, one of the most important parts of a digital camera is its electronic innards and image processing algorithms. This is a relatively new product area and there's no reason to think somebody's old prejudices about film cameras are relevant to this in any way.
My favorite place is dpreview. While they don't hit every model, the cameras they do review are covered in a thorough and objective manner, and they offer a clear statement of the strengths and weaknesses of each model. You should keep in mind that the camera market changes rapidly, and that a camera that was "highly recommended" in 2002 may not be a super performer by today's standards.
Other good sites:
The most widely cited place appears to be photodo, which is sadly out of date. photozone also has lens reviews.
A note about lens reviews: There is often significant variation between samples of lenses. Reviews typically consider just a single sample, so you may not get the full picture from reading just a single review.
See also:
The thing to remember about extended warranties is that salesmen push them because they are very profitable on average. This means that on average you will lose money with extended warranties. However, there are a few cases where extended warranties can be wise decisions:
If you are seriously considering getting an extended warranty, be sure to read the fine print carefully. Don't believe that particular situations are covered based only upon a salesman's promises.
Every lens has a focal length which is a physical property of the lens. Once the lens is made, this cannot be changed. The field of view associated with a lens will be a function of the area projected by the lens that is captured by the camera. For 35mm film photography, this area is 36mm by 24mm. Note that if you change the area captured, the field of view also changes. For example, using a 50mm lens on an APS format camera yields a very different field of view than using a 50mm lens on a medium format camera.
For many years, amateur photographers who used 35mm film and no other systems became accustomed to associating particular focal lengths with particular fields of view. When these photographers moved to other systems, such as medium format, or digital, it was sometimes convenient to think about lens focal lengths in terms of the equivalent field of view they offered in the more familiar 35mm film world.
Note that the 35mm equivalent focal length of a lens is simply a way of relating field of view of a lens attached to a new camera, to the field of view of a different lens attached to a more familiar camera. There is no deeper connection than this.
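If you want to compute an equivalence yourself, the conversion is just the ratio of the sensor diagonals. Here is a small sketch; the 1/1.8" sensor dimensions used in the example are approximate.

```python
import math

def equivalent_focal_length(focal_mm, sensor_w_mm, sensor_h_mm):
    # The 35mm frame is 36 x 24 mm, so its diagonal is about 43.3 mm.
    full_frame_diag = math.hypot(36.0, 24.0)
    sensor_diag = math.hypot(sensor_w_mm, sensor_h_mm)
    return focal_mm * full_frame_diag / sensor_diag

# Example: a 7.7mm lens on a 1/1.8" sensor (~7.2 x 5.3 mm, an approximation)
print(round(equivalent_focal_length(7.7, 7.2, 5.3)))   # roughly 37mm-equivalent
```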
If you want to check the math on this, I suggest you visit Andrzej Wrotniak's site, which also has some nice tables comparing DOF for different imager sizes. I'll briefly summarize the key points you need to remember:
The effect of small sensors, such as those found in compact digital cameras, on DOF has been a source of great confusion for many. If we think about sensor size in isolation, then it shouldn't have any effect on DOF since using a smaller sensor is just like cropping a piece of a larger sensor. However, if we want to do a comparison between different systems then we typically want to compare two images that have the same composition and size. To do this, we will need both a shorter focal length lens, and a bigger enlargement for the system with the smaller sensor. The former increases DOF, while the latter decreases it. The effect of using a shorter focal length lens dominates and you get larger DOF in this comparison. See below.
This often cited rule of thumb cannot possibly be right in general, as a simple thought experiment proves: When you have focused your lens at the hyperfocal distance, depth of field extends infinitely far back. If the 2/3 rule were true, then depth of field would need to extend infinitely far in front as well (since 1/3 of infinity is still infinity), and objects behind the camera would need to be in focus. This is obviously ridiculous, so the 2/3 rule cannot be right in general.
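If you would like to see this numerically, here is a short Python sketch using the standard approximate DOF formulas (hyperfocal distance H ≈ f²/(N·c), near limit ≈ Hs/(H+s), far limit ≈ Hs/(H−s)); the 50mm f/8 lens and 30 micron CoC are just example numbers.

```python
def dof_limits(focal_mm, f_number, subject_mm, coc_mm=0.030):
    # Approximate hyperfocal distance and near/far limits of acceptable focus.
    H = focal_mm ** 2 / (f_number * coc_mm)
    near = H * subject_mm / (H + subject_mm)
    far = H * subject_mm / (H - subject_mm) if subject_mm < H else float("inf")
    return near, far

for distance_m in (1, 3, 10, 30):
    near, far = dof_limits(50, 8, distance_m * 1000)
    front = distance_m * 1000 - near
    behind = far - distance_m * 1000
    print(f"{distance_m} m: {front/1000:.2f} m in front, {behind/1000:.2f} m behind")
# The split drifts from roughly 50/50 up close toward "infinite behind" near the
# hyperfocal distance, so no fixed 1/3-2/3 ratio holds in general.
```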
How is the circle of confusion (CoC or CoF) determined?

For depth of field calculations, the circle of confusion (CoC) is the smallest acceptable amount of blur in the image plane. To determine the CoC, you must do the following:
It's good to understand that depth of field is not well defined without a set of viewing conditions and assumptions about what appears to be in focus. Except for the exact depth at which the camera is focused, everything in the image plane is at some smoothly varying point between perfect focus and total blur. The amount of tolerable blur in the final image determines where you make the cutoff in this continuum between blur and sharpness in the image plane. Without a set of assumptions about how we view images, there would be no way to make this cutoff since everything (except for a 0 thickness plane at exactly the focus depth) is out of focus if we look carefully enough.
For better or worse, there are fairly standardized notions of what is acceptable blur. Typical numbers are about 30 microns in a 24x36 mm (standard 35mm film) frame. If you have a lens with a depth of field scale engraved on the barrel, it was probably computed using a CoC of around 30 microns. You should understand that this is only a rough rule of thumb for what will be in focus in your final print since the lens manufacturer has no way of knowing your viewing conditions or standard of sharpness. Moreover, even if you happen to agree with the viewing assumptions made by your lens manufacturer, you may not be capturing the image on the same size of medium that was assumed when the lens was made. For example, if you are using a lens with depth of field engravings intended for 35mm film on a camera that has a smaller sensor, the CoC should be reduced (leading to shallower DOF) because a larger magnification is needed to produce the same sized print. (Note that this does not contradict the fact that smaller sensors have more DOF for a given perspective, composition, and f/stop, since achieving these requires the use of a smaller focal length in comparison to a larger sensor.)
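If you want to scale the CoC for a smaller sensor yourself, the usual convention is to keep it proportional to the sensor diagonal. A small sketch (the APS-C dimensions in the example are approximate):

```python
import math

def coc_for_sensor(sensor_w_mm, sensor_h_mm, full_frame_coc_mm=0.030):
    # Keep the CoC proportional to the sensor diagonal; 35mm film's diagonal is ~43.3 mm.
    full_frame_diag = math.hypot(36.0, 24.0)
    return full_frame_coc_mm * math.hypot(sensor_w_mm, sensor_h_mm) / full_frame_diag

print(f"{coc_for_sensor(22.5, 15.0) * 1000:.1f} microns")   # APS-C-sized sensor: about 19 microns
```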
Some additional reading:

Diffraction is an optical effect that occurs when light passes through a very small opening. Instead of producing a bright, clear image on the other side, it produces a blurry, disc shaped image. (Click here for a more detailed description of the physics behind this.)
Diffraction can reduce the quality of your images when you use very small apertures. Many people think that images always get sharper as you decrease aperture size. This is true up to a point. Beyond this point, they start to get softer due to diffraction.
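If you want a rough feel for where this kicks in, the diameter of the diffraction blur spot (the Airy disk) is roughly 2.44 × wavelength × f-number. A small sketch, using green light as an example:

```python
WAVELENGTH_MM = 0.00055   # ~550 nm green light

def airy_disk_diameter_mm(f_number):
    # Approximate Airy disk diameter for the given aperture.
    return 2.44 * WAVELENGTH_MM * f_number

for n in (4, 8, 16, 32):
    print(f"f/{n}: {airy_disk_diameter_mm(n) * 1000:.0f} micron blur spot")
# At f/32 the spot (~43 microns) exceeds the ~30 micron CoC used for 35mm film,
# so diffraction is already visibly softening the image.
```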
Probably not. This effect is called vignetting and it is common in consumer quality lenses. If the effect is not equal in all corners, then something may be misaligned and you should return your camera or lens for service.
Some models have been particularly prone to uneven vignetting. For example, a batch of Sony DSC-F707 models had uneven vignetting on their left side, leading to the acronym DLSS (dark left side syndrome). Many owners with this problem ultimately returned their cameras to Sony for service.
From what I have heard, Zeiss people were involved in the design of Sony's Zeiss lenses and they are involved in the manufacture of the lenses too, though the lenses are made in Japan. The extent of the involvement is not clear. Here's a letter somebody received from Zeiss on the topic.
The relationship between Panasonic and Leica seems somewhat different. It is described as a collaboration, although Panasonic's version of events seems to suggest Panasonic engineering and Leica enforcement of quality standards. (Thanks, Diego!)
If you have information about the operation of other Japanese-German partnerships, please let me know.
You are most likely seeing classic chromatic aberration (CA), the details of which are explained in beautiful detail by Van Walree. (See also HyperPhysics or Wikipedia.) In short, chromatic aberration results from the fact that different wavelengths of light refract slightly differently when passing through your lens. This causes some wavelengths to be misfocused, have different magnification and/or get shifted laterally in your image.
See also: How do I correct for chromatic aberration (CA)?
If you don't know what I'm talking about, see: What are those red and green color fringes in my images?
With the lens you currently have, you can reduce CA by stopping down. You may also find that some focal lengths are more prone to CA than others. For zooms, CA is typically worst at one or both of the extreme ends of the range.
Another way to avoid CA is to switch to a different lens. In general, extreme wide angle and extreme telephoto lenses are more prone to CA than "normal" lenses. Zooms are typically more prone to CA than fixed focal length lenses. Lenses with exotic elements (fluorite, high index or low dispersion glass) are less prone to CA. Such lenses are often labeled as "UD" or "ED" lenses. Apochromatic lenses also minimize CA. These are usually labeled as "APO" lenses.
There are several advantages to an external flash:
Red-eye is caused by light from your flash bouncing off of your subject's retina. So, how do you minimize it? There are several approaches:
See tips below on removing red-eye in software.
Some references on red-eye:
If you're wondering about the technical details of how flashes charge and fire, the duration of the flash light, how the effective distance of a flash is calculated, etc., then you must check out Toomas Tam's excellent flash FAQ.
There are several reasons for this. The first is that your flash may be too powerful for shots at such close range. If you have a way to reduce the power on your flash, try this. The second issue is that your flash may not be angled properly to fully illuminate objects at such close range. Your lens may even be blocking some of the light from the flash. You should consider using a diffuser or getting a ring flash, which is a special donut-shaped flash unit that you attach to your camera by screwing it onto the filter threads of your lens.
Probably, but depending upon your camera's support for standard external flashes, it may require some persistence and some workarounds:
A slave flash is a flash that is triggered by another master flash. You may have noticed event photographers with assistants carrying flashes on poles. These are slave flashes which are triggered by a primary flash on the photographer's camera.
The typical reason for using a slave flash is to illuminate your subject more evenly by providing flash light from multiple sources. There are several approaches to triggering a slave flash. Some have sensors that monitor for a primary burst of flash light and then respond with their own flash. Others are controlled by radio signals from the primary flash.
You need to put some thought and research into the type of slave flash that will be best for you. In principle, slave flashes with sensors are the most versatile and will work with any flash system. However, there is a complication: Some slave flashes can be triggered prematurely by red-eye reduction systems or by preflash metering systems, Canon's E-TTL for example. Before purchasing a slave flash system, it would be wise to check with other users of your primary flash to see what types are compatible.
If you are using your camera in auto mode, it may be choosing a shutter speed that is too low. (This was a definite problem with some earlier Sony models.) This will allow ambient incandescent light in the room to influence the image. Incandescent light is much more yellow than the flash light for which the camera's white balance algorithms are compensating. Thus, images will look more and more red as distance from the flash increases.
You also may be having white balance issues. Check to make sure that your camera is set for the correct white balance, or try adjusting it manually.
One workaround is to use shutter priority mode with a 1/60 or 1/100 shutter speed. This helped improve flash color on some earlier Sony models.
When used in auto mode, some cameras select a shutter speed that is too low for most flash photography. You should consider using shutter priority with a 1/60 or 1/100 shutter speed instead if you are getting shutter speeds below 1/60.
You may also be attempting to use flash in aperture priority mode. See below...
With many cameras, the camera assumes that you are using fill flash in aperture priority mode. Thus, it picks the correct exposure for the available light and fires the flash merely to fill in shadow areas. On some cameras, you may have workarounds for this:
Unfortunately, for some cameras, such as earlier Sony models, there is no workaround.
You may have a white balance problem. Make sure that you aren't using indoor film, or an indoor white balance setting.
In the case of a Sony DSC-F707, you may have a defective camera that suffers from a problem called Blue Flash Syndrome, or BFS. For more, read here.
A related, but different, problem called low EV BFS (LEVBFS) also afflicted some Sony cameras. It only affects flash white balance in shutter priority mode, and results in shots that are too blue. There is no workaround because flash white balance overrides manual white balance on these cameras. Note that if you think your flash shots are too red in auto mode, you might prefer the bluer cast that results in shutter priority. Things typically get bluer as the shutter speed gets faster, so start with 1/60 and move up.
If you are shooting in auto ISO, your camera may be boosting the ISO to compensate for an underpowered flash. You can try manually forcing the ISO to a lower value, but if the underlying problem is not enough flash power, then you'll get less noisy, but underexposed shots. You might consider getting a more powerful external flash.
Here is an excellent description of how Shay Stephens built an ad-hoc diffuser for his DSC-F707. The basic idea may be generalized to other cameras. (Shay appears to have removed the pictures, but the instructions are still pretty clear.)
Note that when building a diffuser you want to be careful to make sure that you don't accidentally direct energy towards the flash or camera itself, which can cause overheating and damage. Also, be sure to avoid using any materials in your diffuser that could be damaged or could burn when exposed to sudden bursts of heat.
Recent Canon SLRs use a system called E-TTL to determine flash exposure. (Newer models use E-TTL II, which seems to be more robust.) E-TTL is praised by some for allowing very precise control over flash exposure when used properly, but it is slammed by others for being too sensitive to very small changes in the center of the image and requiring too much planning to get proper "automatic" exposure. There is probably some truth in both positions.
Some people are so flummoxed by E-TTL that they prefer to use non-Canon, non-TTL flash units that control exposure with a sensor on the flash.
More resources on Canon flash systems:
E-TTL II is an improvement over E-TTL that is intended to make automatic flash exposure more robust. It includes improved algorithms and the ability to incorporate distance information from compatible lenses.
More on E-TTL II:
For technical issues, my favorites are:
Some other worthwhile sites:
See also Where can I find good camera reviews?
Here are some of my favorites:
If you upload your photos to a photo hosting site, the site will create the gallery for you. If you want to host your own site, then you'll need to find an ISP that provides web space. Once you've done this, you can either learn html and make the gallery yourself, or get some help from a program. Photoshop has a pretty flexible tool built in for doing this. Some other tools:
Avoid the photo gallery tools built in to MGI Photosuite. They make galleries that are incompatible with Unix servers and they use GIFs instead of JPEGs for their thumbnail pictures.
Even if you tell Photoshop not to alter your images when creating the gallery, it recompresses them with high compression when copying them over to the images directory for your gallery. The workaround is to copy and paste your originals into the image folder that photoshop creates for your new gallery.
I have been quite happy with pbase because it is non-commercial (in the sense that it has no ads and doesn't try to sell mugs, prints or other garbage using your photos), and because it contains no language suggesting that you are sacrificing any rights to your photos. If you read the fine print at nearly every other site, you will realize that you are giving up some rights to your photos when you upload them. In my opinion, the fees are reasonable and it is a better deal than sites that bombard you with ads and/or whittle away your property rights through weird loopholes in their terms of service.
Note that smugmug has recently grown to be quite popular because they have no ads and low prices.
Some other sites:
I've stopped adding to this list because I've become aware of a great Guide To Online Photo Albums which contains detailed info on various sites including their fees and features.
In a word, no. Many of these sites have gone out of business. You should always keep multiple backup copies of your cherished photos. To be extra-careful, you should probably use several types of media and store them in different locations. You should also check their integrity regularly. CD-R media and Zip media don't last as long as you might think: low tens of years at best.
Click on rules/help in one of the forum pages at dpreview and read the part about embedding images. The only tricky part is getting the appropriate URL if you are using an image hosting site like pbase, where you'll need to read the instructions appropriate for your site. In the case of pbase, you'll need to make a contribution to permit external linking.
Don't do something silly like using a URL for a page that displays many other things in addition to your photo. Obviously, this won't work. Also, remember to use the preview button before posting. There is no reason to post a, "This is a test to see if my image shows up," message on dpreview. You can use the preview function to debug your embedding technique.
The most powerful program for editing photos is Adobe's Photoshop. Unfortunately, it's very expensive. The good news is that there is a less expensive version called Photoshop Elements which has nearly all of the features of the full version and can be downloaded for a free trial. Avoid PhotoDeluxe. It costs a little less than Photoshop Elements, but has far fewer features.
An increasingly intriguing alternative is The Gimp. The Gimp is a freeware alternative to Photoshop that runs on Windows, Unix and MacOS X. (MacOS info here.) It rivals the power and flexibility of Photoshop, but doesn't cost a cent. There is extensive on-line documentation available too. So what's the catch? This is a complex program and the Windows version may not be completely bug free. The user interface is also very Unix (X11) like, so it might take some getting used to. Still, I think it's worth downloading because it establishes a common denominator, a free cross-platform solution for which people can swap image editing tips. Note: If you're having trouble finding somewhere to download macgimp, try osxgnu.
Some other programs:
There is also a large variety of programs and plug-ins that perform very specific manipulations on images:
The short answer: If you are editing the file and plan to reopen it in your image editing program later, you should save it in a lossless format such as TIFF, or one of the other lossless formats supported by your image editor. (See also the discussion of JPEG vs. TIFF below.) If you aren't planning on changing the image at all, you should just leave it in whatever format the camera produced. If you've made some changes and don't expect to do any further edits, you should save in whatever format is appropriate for the final use of the image. (I always save in Photoshop native format just in case.)
The longer answer: There seems to be something of a folklore about lossy compression that defies all common sense and reason. Some people seem to think that resaving an image in a lossless format such as TIFF will somehow improve the quality of the image. This couldn't be further from the truth or from the very meaning of lossless.
Let's start from the beginning: When your image is read off your sensor and processed by your camera, it is represented as a bunch of numbers. To store the file in a format that your computer recognizes, the camera can use 8 bits (= one byte) for each of the red, green, and blue channels used to describe each pixel. Thus, the total size of an image file destined for your computer will be (in bytes) HxVx3, where H and V are the horizontal and vertical resolution, respectively. Notice that this will lead to very large files: A mere 2048x1536 image would require 9 megabytes. Clearly, some form of compression would be very desirable.
Your initial intuitions about compression would probably be to run something like zip on the file to squeeze it down. This would certainly help, but it wouldn't get the file down to a reasonable size for most users. Compression methods like zip rarely reduce file sizes by more than 50%, and this would still leave us with very big files. The trick is to modify the image slightly in a way that makes it more amenable to compression.
This isn't the place to go into a long discussion of how compression works, but for our purposes we can think of it as trying to find patterns in the data that can be represented much more efficiently than a simple list of all of the pixel colors and intensities. For example (and this isn't really how it works) if your image has a region of blue sky, it could be compressed by indicating the sky color, and the size of the region that has this color, instead of repeating the same numbers over and over again. Unfortunately, there are some problems with this simplistic view of things: Images rarely have regions of exactly uniform color. There are always slight fluctuations that would throw off our compression scheme. The solution involves two parts: 1) performing a mathematical transformation on the image to represent it in a more convenient manner through a set of equations (called a discrete cosine transform) and 2) smoothing over slight fluctuations. In other words, regions of the image that are very close in color and brightness will be treated as if they are the same color and brightness. This helps our compression scheme because it creates larger regions of the same color and the compression becomes more effective. (In truth, the same color and brightness aren't the criteria used. It would be more accurate to say the same low frequency fluctuations in color and brightness. If you don't know what this means, don't worry about it.)
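If it helps, here is a toy run-length encoder that matches the simplistic "repeat the sky color" picture above. This is not how JPEG actually works, but it shows why smoothing over slight fluctuations makes compression so much more effective:

```python
def run_length_encode(pixels):
    # Collapse consecutive identical values into (value, count) runs.
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1
        else:
            runs.append([p, 1])
    return runs

print(run_length_encode([200, 200, 200, 200, 200]))   # [[200, 5]] - compresses well
print(run_length_encode([200, 201, 200, 199, 200]))   # five separate runs - no savings at all
```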
So, what happens when we repeatedly load and save an image using JPEG compression? Each time we recompress the image, we discard a little information about subtle variations in the image and, because of the compression, we can introduce slight defects in the image. As we manipulate the image, these defects may become amplified as we do things such as increasing contrast or brightness. If we resave the image in JPEG, we run the risk of introducing a new set of errors into the image and these errors will compound with each resave. In fact, even if we don't change the image, simply resaving it can compound the errors from the previous save.
The way out of this quandary is, after a round of editing, to save your images in a lossless format such as TIFF, or whatever native format your editor supports. When you are storing in a lossless format, you don't introduce extra errors every time you save and load. Of course, this format will take up much more space.
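If you want to see generation loss for yourself, here is a small sketch using the Python Pillow library; the file name and quality setting are just assumptions for the example.

```python
from PIL import Image, ImageChops, ImageStat

original = Image.open("test.jpg").convert("RGB")   # assumed input file
working = original
for _ in range(10):
    working.save("resaved.jpg", quality=75)        # recompress
    working = Image.open("resaved.jpg").convert("RGB")

# Compare the 10x-resaved copy to the original, channel by channel.
diff = ImageChops.difference(original, working)
print("mean per-channel error after 10 resaves:", ImageStat.Stat(diff).mean)
```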
Some common myths about file formats debunked:
Sharpening is a trick to increase the apparent sharpness of an image by increasing the contrast around areas that look like edges in a photo. There are many reasons for doing this, the most obvious of which is that judicious use of sharpening can make a photo more pleasing to the eye. Image resizing also tends to soften edges, so it is often necessary to sharpen an image after resizing. Sharpening should be the last thing you do to an image since the amount and type of sharpening desired will be influenced by everything else you have done to the image. If you sharpen a full size image and then resize, you won't have the appropriate amount of sharpening for your new image size. The amount of contrast and brightness in the image will also affect the amount and type of sharpening you are using, so you should always adjust these before sharpening. The same reasoning applies to essentially every other image modification you can make.
The amount of sharpening you want to apply may also be a function of the output medium on which you choose to display your image. For example, you may want to use more sharpening if the image will be displayed on an old, fuzzy CRT, than if the image is displayed on a new LCD with crisp pixel boundaries.
Most image editing programs have something called a sharpening filter or an unsharp mask. These are a good start and you should experiment with these to develop a feel for how sharpening works. You'll probably notice that flowers tend to stand up well to image sharpening, while portraits do not. You'll also notice that images with lots of adjacent light and dark areas, such as sun shining on leaves, will get annoying white hot spots when sharpened. If your program has a sharpen edges feature, this may be a more attractive option for such images.
If you have a recent version of Photoshop, there's a very simple and effective method you can use. First, pick sharpen from the filter menu. This may make your image look oversharpened at first. Now pick fade from the edit menu and move the slider until you're happy with the appearance of your image.
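If you prefer to script your sharpening, here is a small sketch using the unsharp mask filter in the Python Pillow library; the file names and settings are just placeholders to experiment with.

```python
from PIL import Image, ImageFilter

img = Image.open("resized_photo.jpg")      # assumed input file, already resized
# radius, percent, and threshold are the usual unsharp mask knobs.
sharpened = img.filter(ImageFilter.UnsharpMask(radius=2, percent=120, threshold=3))
sharpened.save("resized_photo_sharpened.jpg", quality=92)
```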
Here's a partial list of tools and tips:

I don't know. I've read many discussions of this and tried many tools for it without finding anything that feels like B&W to me. My current favorite approach is DigiDaan's channel mixer method. Here are some other approaches:
These are called hot pixels. Most sensors produce them. Your camera may have some noise reduction features designed to minimize this effect, so you should first check your manual to make sure that this feature is enabled if it is available.
If you're stuck with hot pixels, you have several options. You can use a technique called dark frame subtraction. This technique, along with an in-depth explanation of the hot-pixel phenomenon can be found here in the learn section of dpreview.
There are also some programs and plug-ins available for dealing with this specific issue:
Tall buildings appear to be leaning backwards because you tilted the camera up when you took the picture. Here are some tutorials and tools for fixing this:
There are many methods for dealing with noise in high ISO images. My favorite is Jes's color grain reducer. I've even done a little tutorial on cleaning up high ISO images, which uses Jes's photoshop action. I like Jes's action best because it cleans up color noise without corrupting the image. There are plenty of other approaches. In my opinion, these are inferior, but you might want to give them a try:
Some comments about noise reduction: Many people are initially infatuated with noise reduction techniques until they realize that removing noise almost always implies removing some detail from the image. Images processed with these methods will often get an unnaturally smooth look to them, with large areas of flat texture. Getting good results will require a light hand and a lot of patience no matter which program you're using. My one exception to this rule is Jes's noise filter, mentioned above. I've never seen it remove detail. This conservative approach will definitely leave some noise behind.
There is a new batch of computationally intensive noise reduction techniques that have become popular in recent years. In general, these do a better job of walking the fine line between noise reduction and detail destruction:
Try Virtual Dub.
Most photo editing programs come with a few cool effects built in. You can achieve many more by mastering the options available to you in your program and combining them in creative ways. There are still other options available to you in the form of standalone programs and plug-ins:
The first thing that you need to understand about digital pictures is that they don't have an inherent DPI rating. That's right, the resolution of digital images is not measured in DPI. Digital images are collections of pixels. The DPI of an image depends entirely on how you decide to display it or print it.
But if digital images don't have any inherent DPI, why does my image editor display DPI for images? Your image editor keeps track of two things, your image and the size at which it expects you will print the image. These are two completely separate things, so your image editor might start off thinking you want to print your image at some goofy size (usually the result of assuming you want to print at 72 DPI, which is the presumed resolution of most monitors) but this doesn't matter. You can change the size at which your program thinks you want to print the image, thus changing the DPI, without altering your data at all. Your image editing software will have a specific way of doing this. In Photoshop family products, you pick "image size" and make sure that the "resample image box" is not checked. This means that Photoshop won't alter your image; it will merely alter its own internal idea of how large of a print you will be making from your image. You can make this change by either changing the DPI or by changing one of the dimensions of your image. All of these numbers are linked together, so you need only change one. (Shrinking the print size of the image increases the DPI because you're squeezing the same number of pixels into less physical space.)
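The arithmetic is simple enough to do in your head, but here it is spelled out; the pixel counts and print sizes are just examples:

```python
# DPI is just pixels divided by inches; nothing stored in the image changes.
width_px, height_px = 2048, 1536
print_width_in, print_height_in = 8, 6

print(width_px / print_width_in, height_px / print_height_in)   # 256 DPI both ways
# The same pixels printed at 4x3 inches would be 512 DPI; at 16x12 inches, 128 DPI.
```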
If you want to change the number of pixels in the image, then you should resample the image in some way. In Photoshop derivatives, this means checking the resample box. Typically you will do this when you want to shrink an image for display on the screen. However, there are times when you may want to resample an image for printing. This would be necessary when the DPI becomes unacceptably low (below 150) at the output size you have selected (more below). When you are resizing an image with resampling, you can enter the new resolution of the image in pixels, or adjust the DPI. The program will assume that you wish to keep the output dimensions constant at this point. However, if you change the print size, the program will scale the pixels and DPI accordingly.
When you change the number of pixels in an image, be sure to use a sophisticated resampling method such as bicubic resampling. Other methods may be faster, but they can create jagged looking images.
If you want to read more, or check out some sophisticated methods of upsampling to print very large images:
This information is stored in inside of your files in something called an EXIF header. See below...
If you are running Windows XP, you can view this information by examining the properties of the file in question - although this will not display all of the information stored in the EXIF header.
If you are running a different OS or want to see more of the information in the EXIF header, there are other options. My favorite is PixVue (Free). Some others:
You can also view EXIF information from inside of many image viewing programs, such as irfanview.
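If you'd rather script it, here is a small sketch using the Python Pillow library (recent versions provide a getexif() method; depending on the version, some shooting parameters live in a sub-IFD and may need extra digging). The file name is just a placeholder.

```python
from PIL import Image, ExifTags

exif = Image.open("photo.jpg").getexif()       # assumed input file
for tag_id, value in exif.items():
    # Translate numeric tag IDs into readable names where possible.
    print(ExifTags.TAGS.get(tag_id, tag_id), value)
```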
My favorite is irfanview by far, because it's fast, flexible and free. Some others:
Here's a great tip on getting irfanview slideshows to autorun when a CD is inserted. (Additional tip from Ulysses: You can use irfanview to generate the slideshow list, so you don't have to enter the filenames manually.) Note: I haven't tried it yet, but recent versions of irfanview have, apparently, automated some of this.
I don't have many but here goes:
Check out gphoto, and don't forget about the GIMP.
In the context of digital cameras, a histogram is a representation of the distribution of intensity levels in an image. See The Imaging Resource for a more detailed discussion.
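If you're curious how a histogram is computed, here is a small sketch using the Python Pillow library that counts grayscale levels; the file name and the "very dark"/"very bright" cutoffs are arbitrary.

```python
from PIL import Image

img = Image.open("photo.jpg").convert("L")   # assumed input file; "L" = grayscale
counts = img.histogram()                     # list of 256 pixel counts, one per level
w, h = img.size
dark = sum(counts[:32])
bright = sum(counts[224:])
print(f"{dark} very dark pixels, {bright} very bright pixels out of {w * h}")
```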
There are many techniques for doing this, some of which I hope to cover in more detail at a later date. For now, keep in mind that most intermediate to advanced image editing programs have some kind of "auto levels" or "level correct" feature built in. These will often do a decent job of guessing what the correct color and contrast should be. Some programs, such as Photoshop Elements, have a specific tool that allows you to do after-the-fact white balance setting by clicking on neutral objects in the image. For more discussion and software tools for color balance adjustment:
These are optical effects caused by your lens. Reducing these effects typically requires producing a larger, heavier lens with more elements, so what you are seeing is a tradeoff of cost and weight against image quality.
There are web sites and software tools that will help you correct for this:
You have many options for fixing red-eye in software. Many image editing programs come with features designed to facilitate this, so check your manual. PhotoDeluxe and Photoshop Elements both have built-in features for this. Ironically, full Photoshop (the Swiss Army chainsaw of image editors) does not have these features.
The basic skills for red-eye removal are quite simple and can be done in a variety of ways (without using any special red-eye features) in most moderately sophisticated programs. Once you realize how easy it is, you may prefer to do it yourself because you'll have more control. Here's what you need to do (one eye at a time):
If you master these simple steps, you can take out the red-eye from most shots in about a minute. Here are some more references:
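If you'd rather script the fix, here is a rough sketch of the same idea using the Python Pillow library: inside a small selection around the pupil, knock the red channel down. The file name, selection box, and threshold are all placeholders.

```python
from PIL import Image

img = Image.open("portrait.jpg").convert("RGB")   # assumed input file
left, top, right, bottom = 640, 480, 680, 520     # assumed selection box around one pupil
pixels = img.load()

for x in range(left, right):
    for y in range(top, bottom):
        r, g, b = pixels[x, y]
        if r > 1.5 * max(g, b):                   # crude "is this pixel red-eye?" test
            pixels[x, y] = (max(g, b), g, b)      # pull red down toward the other channels
img.save("portrait_fixed.jpg", quality=95)
```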
In Windows ME, 2000 or XP select view and then thumbnails.
Many people incorrectly believe that this cannot be done in Windows 98. It just requires an extra step. (If you have a really old version of Windows 98, you might need to download some of Microsoft's updates first.) Open the properties dialog for the folder for which you wish to display thumbnails (right click, then select properties, or highlight the folder, then do file->properties). Now check enable thumbnail view, and proceed as above for newer versions of Windows. Note that you may need to close and restart Explorer, or even restart Windows, for this to work the first time.
A typical image editor will decompress and recompress a JPEG file when you save it, resulting in a loss of image quality. You can avoid this if you use special software that can do a very limited set of operations that don't involve recompressing, i.e., they just rearrange the data already in the file. You are typically limited to rotations at multiples of 90 degrees and some cropping actions:
There are many others that I haven't listed. (See here for more.) Of course, I haven't tried all of these and some may be better than others. Some advice: Read the instructions carefully to make sure that you're actually doing what you think you're doing. In some of these programs, you will need to follow special steps to ensure a lossless operation. Finally, remember that here "lossless" means that no further degradation will occur. Some image quality was lost on the first save to jpeg and nothing can ever recover this.
Resources abound for this including (gasp!) the manual. Here are a few:
If you don't know what I'm talking about, see: What are those red and green color fringes in my images?
You can partially correct for chromatic aberration using software tools. If you are shooting in RAW, then there's a very nice interface to this through the Photoshop CS RAW converter, and now through CS2 even without the RAW converter.
If you aren't using a recent version of Photoshop then it can be done using Picture Window Pro, as described by Norman Koren. Another alternative is to use panorama tools. See tutorial 1, or tutorial 2.
I started by indicating that one could only partially correct for CA. Why don't these techniques fully correct the problem? There are two reasons:
See also: How do I avoid chromatic aberration (CA)?
If you don't know what I'm talking about, see: What is that purple fringe around high contrast areas in my photos?
There is no known technique for correcting for purple fringe. However, you can hide it. The basic idea is to selectively desaturate the regions and color ranges involved. This will have the effect of replacing the purple fringe with a gray fringe, which is often far less objectionable. If you're facile with image editing software, you can figure this out for yourself. Otherwise, you might benefit from Shay's color fringe reducer.
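Here is a rough sketch of the desaturation idea using the Python Pillow library; the file names and the crude "is this purple?" test are just placeholders, and a real tool would also restrict the effect to high contrast edges.

```python
from PIL import Image

img = Image.open("fringed.jpg").convert("RGB")   # assumed input file
pixels = img.load()
w, h = img.size

for x in range(w):
    for y in range(h):
        r, g, b = pixels[x, y]
        if b > g + 30 and r > g:                 # crude purple/magenta test
            gray = (r + g + b) // 3
            pixels[x, y] = (gray, gray, gray)    # replace the purple with neutral gray
img.save("fringed_fixed.jpg", quality=95)
```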
See also: How do I avoid purple fringing?
Some cameras, e.g. some Canon models, come with software that lets you snap images and adjust settings through a software user interface on your computer. For other models you may need to acquire additional software, sometimes from a third party. Newer cameras are increasingly using a standard called PTP, which should allow the camera to be controlled through a standard protocol. As with many standards, however, not everything is entirely standardized.
Here are some third party applications for controlling your camera:
Color is the perception of the power spectrum that is striking our retinas. Cells called cones are responsible for our perception of color. There are three distinct types of cones and each type covers a different region of the spectrum. The cones are typically labeled as red, green, and blue, which gives the incorrect impression that each responds to a distinct frequency. In fact, there is significant overlap between them, especially in the case of red and green.
Since the stimulation level of these three different types of cone cells determines our perception of color, we can also talk about color mathematically as a point in a 3-dimensional space.
A power spectrum is the distribution of light energy at different wavelengths. This can be viewed as a graph where the vertical axis is energy and the horizontal axis is wavelength.
Not exactly. We have 3 types of cone cells that contribute to our perception of color, so all color can be described in terms of the relative stimulation of these cells. However, the physical phenomena that cause our perception of color are not necessarily composed of three primary colors.
How do we reconcile these two thoughts? Let's consider the primary colors green and red. We can talk about pure green light, which is typically around 510 nm, and pure red light, which is typically around 650 nm. If we mix these together in approximately equal proportions, we get the perception of yellow. However, the perception of yellow is different from the physical phenomenon of pure yellow light, which has a wavelength of about 580 nm and which also produces the perception of yellow. (Not surprisingly, 580 is the average of 510 and 650.) There are infinitely many combinations of light that can cause the perception of yellow without including any actual yellow light in the mix, but there is also such a thing as pure yellow light.
Most of the time, we don't need to worry about this difference. If we perceive yellow, it typically doesn't matter how the perception of yellow came about. The difference can matter, however, when we're thinking about special problems that arise in color filtering or chromatic aberration correction.
In any colorspace, the colors you can express are circumscribed by the convex hull of the primaries. (This is why color gamuts for 3-color devices are triangular.) Therefore, to get the widest possible gamut with just three colors, you need color sources that are as pure as possible and as close to the extreme points of the gamut perceivable by humans as possible. The best known way to do this is to use lasers. You can read more about this in Gary Starkweather's sRGB white paper.
So, how practical is this? At least one company, Nitor, is working on laser displays for end users. However, I don't know how close this is to a product, or if they are using lasers pure enough to produce the widest gamut theoretically possible. (While their approach might permit this in principle, I would guess that any real device they sold might not achieve this since it might require some exotic lasers and there are few sources of such saturated images available for display.)
Symbol Technologies and Siemens are also working on laser displays for a different reason: small size and power consumption.
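To make the convex hull point above a bit more concrete, here's a small Python check of whether a chromaticity falls inside the triangle spanned by the sRGB primaries. The primary coordinates come from the sRGB specification; the sample points are my own:

    def sign(p, a, b):
        return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1])

    def inside_srgb_gamut(x, y):
        # sRGB primaries in CIE xy chromaticity coordinates
        R, G, B = (0.64, 0.33), (0.30, 0.60), (0.15, 0.06)
        p = (x, y)
        d1, d2, d3 = sign(p, R, G), sign(p, G, B), sign(p, B, R)
        has_neg = d1 < 0 or d2 < 0 or d3 < 0
        has_pos = d1 > 0 or d2 > 0 or d3 > 0
        return not (has_neg and has_pos)

    print(inside_srgb_gamut(0.3127, 0.3290))  # D65 white point -> True
    print(inside_srgb_gamut(0.08, 0.85))      # a very saturated green -> False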
There's a wealth of information about sRGB here. Download and save the good stuff because this site may not be around for long.
There are two likely causes:
Note that if you take two pictures of the same scene, one in Adobe RGB, and the other in sRGB, the two images should look identical on most monitors when viewed in a colorspace aware application. The reason is that when properly displayed, both images should provide a realistic reproduction of the colors in the scene (up to the limitations of your camera and monitor). The main difference between the two images will be the mathematical representation of the colors. The Adobe RGB file may also contain some additional, highly saturated colors that are not in the sRGB file. However, your monitor probably will not be able to display these colors, and you would need to print your files on a printer with a wide color gamut to see the difference.
Some cameras may change the color tone of images based upon the selected colorspace. This is incorrect behavior, since the colorspace is a separate issue from the color tone. If your camera does this, you should view it as a defect.
Many people say that if you need to ask, the answer is: sRGB.
Adobe RGB offers some advantages, but also some pitfalls if you don't know what you're doing. Here are the advantages:
Here are some of the pitfalls:
This is covered in Poynton's Color Faq and some code fragments are available here.
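To give a flavor of the kind of code fragment involved (and this is just one piece of the puzzle), here is the sRGB transfer ("gamma") function, with constants taken from the sRGB definition:

    def srgb_encode(linear):
        """Linear light in [0, 1] -> nonlinear sRGB value in [0, 1]."""
        if linear <= 0.0031308:
            return 12.92 * linear
        return 1.055 * linear ** (1 / 2.4) - 0.055

    def srgb_decode(encoded):
        """Nonlinear sRGB value in [0, 1] -> linear light in [0, 1]."""
        if encoded <= 0.04045:
            return encoded / 12.92
        return ((encoded + 0.055) / 1.055) ** 2.4

    print(round(srgb_encode(0.18), 4))  # ~0.4613: middle gray encodes near 46%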
Most printers are physically able to display a wider gamut than sRGB. This is one of the main reasons why technologies such as Print Image Matching (PIM) and the follow-on Exif Print exist. For an example comparison of printer gamut vs. sRGB gamut, check out page 7 of this document from Epson.
The main problem in displaying colors outside of the sRGB gamut with your printer is software, but this has been getting easier with improved print drivers, printer firmware, and plugins that support PIM and/or Exif Print. For example, print drivers for some recent HP printers allow you to indicate the colorspace of your document. Alternatively, you can try to get a custom profile for your printer/ink/paper combination, and use an ICC profile workflow.
Beware of this shootsmarter article, which gives the impression that printers have smaller gamuts than sRGB, and implies that there is little benefit in trying to print colors outside of sRGB (as of 6/05). As you can see from the PIM and Epson documents above, printers can cover a larger volume than sRGB. (In fairness to the Shootsmarter guys, I'll note that many ICC profiles are not as wide as what Epson shows in their linked documents. However, nearly all ICC profiles for good printers do include colors outside of sRGB.)
Monitors are known for relatively narrow gamuts, but you can get monitors designed for wider gamuts - if you're willing to pay. For example this Mitsubishi model covers most of the Adobe RGB gamut. See also the laser display section of the FAQ above.
There are several reasons why digital photography may be preferred over film:
Despite technological advances, a few negatives remain: You may make a larger initial investment to get a digital camera instead of a film camera, and your cost per print may be higher - especially if you are printing on an inkjet printer at home. However, this is often offset by the fact that you save money by only printing the shots that you really want. Finally, digital can be less forgiving than film in some ways. For example, inexpensive digital cameras have somewhat less latitude than film.
First, we need to think clearly about what it means to be a good photographer. This entails having the artistic sense to understand and create good compositions, the equipment to capture the image, and the know-how to control the equipment properly.
Simply purchasing a digital camera will not improve your composition skills or increase your know-how. However, there are at least three ways that digital camera technology can help you:
Yes! (more to follow)
Many new digital cameras offer several different settings for applying sharpening to images in-camera. Which one should you use? If you have a sophisticated image editing program, you should use the lowest possible sharpening setting. Why? The correct amount of sharpening to apply will depend upon the display method used for the image (printing, LCD, CRT, etc.) as well as the final size of the image and the content of the image itself. Sharpening makes many irreversible changes to an image, so you can paint yourself into a corner by committing to a particular amount of sharpening before the image ever leaves the camera. You're much better off using as little sharpening as possible in-camera and then applying the appropriate amount of sharpening after you have loaded the image into your PC.
Sharpening accentuates noise. If you are concerned about noise in your shots, you should use low sharpening in your camera and be sure to sharpen conservatively in post processing. You may want to sharpen selectively, such as applying some sharpening to your subject's eyes only, or investigate software tools that try to sharpen crisp edges only, while leaving noisy areas unaltered.
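For example, here's a conservative unsharp mask pass using Pillow. The radius/percent/threshold numbers are just reasonable starting points of mine, not recommendations from any manufacturer; the nonzero threshold is what keeps smooth (noise-prone) areas mostly untouched:

    from PIL import Image, ImageFilter

    img = Image.open("low_sharpening.jpg")
    # Small radius, modest strength, nonzero threshold so that near-uniform
    # areas are left alone.
    sharpened = img.filter(ImageFilter.UnsharpMask(radius=1.0, percent=80, threshold=3))
    sharpened.save("sharpened_for_output.jpg", quality=95)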
Some printer manufacturers (e.g. HP) give you the option of applying sharpening in the print driver. You should experiment with this. In principle, this approach should allow you to start with an unsharpened image and then let the print driver decide on the amount of sharpening that is required for the output size and medium.
Check the manual to see if your camera has the ability to adjust the brightness and contrast of the LCD display. If this doesn't work for you, there are several manufacturers of hoods that partly cover the LCD and block out sunlight, such as Hoodman.
Most of today's digital cameras focus more slowly than film cameras. Moreover, it takes some time for the electronics inside your camera to do their thing. You can find detailed timing information in the reviews at dpreview.
In recent years, moderately priced digital cameras have made great progress in reducing shutter lag. You should not get a new camera (in 2005 or later) with significant shutter lag.
Some cameras have the ability to focus continuously and snap a picture as soon as you press the shutter. This reduces the time spent focusing. Check your manual to see if your camera has this feature. Most cameras will also let you pre-focus by pressing the shutter half way to get a focus lock and then waiting to press it all of the way until you are ready.
If your camera does not have these features, but has manual focus, you can use manual focus to focus on the scene and then snap the picture when you are ready, reducing lag from focus time. You should probably use the smallest aperture that will yield acceptable shutter speeds to help compensate for any inaccuracies in manual focus.
If your camera has some kind of burst mode or continuous shooting mode, you might also consider using this as a way to capture rapidly moving objects. You'll wind up discarding a lot of shots this way, but you'll probably catch a few good ones too.
It is quite difficult to answer this question in an objective manner. The first challenge is bringing the film image to the PC so that both the digital and film images can be compared side by side. This introduces several variables: the choice of scanning hardware and the parameters of the scanning software. The next challenge is that digital and film images fall apart in different ways at the limits.
It's probably fair to say that high end digital SLRs will produce images that rival or exceed 35mm film in most ways, except perhaps dynamic range, where Petteri Sulonen's examples demonstrate some advantages for negative (but not positive) film.
Some more discussion on this topic:
Your camera may have come with software for doing this and your camera (many Canons) may even have some features to facilitate this process. If not, you can do this crudely in Photoshop (or similar) by pasting together several shots. Be sure to use the same exposure for every one of the shots you are pasting together. (Use manual mode.) There are also many programs, tutorials and tools available to help you do this:
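If you'd rather script the stitching yourself, here's a minimal sketch using OpenCV's high-level stitcher. OpenCV is my choice for illustration, not one of the tools referred to above, and the file names are placeholders; it assumes the frames overlap by roughly a quarter to a third:

    import cv2

    frames = [cv2.imread(name) for name in ["pano_1.jpg", "pano_2.jpg", "pano_3.jpg"]]
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, panorama = stitcher.stitch(frames)

    if status == cv2.Stitcher_OK:
        cv2.imwrite("panorama.jpg", panorama)
    else:
        print("Stitching failed with status", status)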
Compact digital cameras have very small sensors, so a short focal length lens is sufficient to provide an image that will cover the entire sensor. The short focal length results in a very deep depth of field. For a more mathematical discussion of depth of field issues, see Andrzej Wrotniak's site. For a summary of how the different features of your camera interact to affect depth of field, see above.
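As a back-of-the-envelope illustration of the sensor size effect, here's the standard hyperfocal approximation in Python. The circle-of-confusion values are common rules of thumb, not figures from Wrotniak's site:

    # Hyperfocal distance H = f^2 / (N * c) + f, in millimeters.
    def hyperfocal_mm(focal_mm, f_number, coc_mm):
        return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

    # Typical compact camera: short lens, tiny circle of confusion.
    print(hyperfocal_mm(7.8, 2.8, 0.006) / 1000, "m")   # ~3.6 m
    # 35mm film or full-frame digital: 50mm lens, larger circle of confusion.
    print(hyperfocal_mm(50, 2.8, 0.03) / 1000, "m")     # ~30 m

Focus the compact camera at 3.6 m and everything from about 1.8 m to infinity is acceptably sharp; the 35mm camera at the same f/stop has to work much harder to get that kind of depth of field.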
You may have left the camera set for indoor white balance. Check your manual on how to set the camera to auto or outdoor white balance.
Some cameras are designed for this, so you should check your manual. It's rare to find this feature in high quality digital cameras, but lower quality multifunction cameras with CMOS sensors sometimes offer this. See, for example, AipTek.
If your camera doesn't already support some webcam features, you may still be able to do it if you have a video capture card (or USB device) of some kind, by connecting the video out from your camera to the video in on your PC. The results may be less than satisfactory since your camera may not be optimized for real-time video. It may have a slow frame rate or display unwanted information (such as shutter speed) on the display. Even in the best of cases, you shouldn't expect results much better than what you can get with a simple webcam. Most people don't have the bandwidth to exploit extra resolution and most videoconferencing software isn't set up for it either. You should think carefully about whether it's worth using your expensive camera as a subpar $50 webcam.
Many manufacturers offer proprietary lithium ion battery systems with their cameras. These are very sophisticated batteries with very long life. Sony's Infolithium battery system is perhaps the best example of this technology. In addition to offering very long life, this system can give an accurate measurement of the amount of operating time left on a charge. The negative side of lithium ion battery systems is that these proprietary systems tend to be very expensive.
If your camera takes standard AA batteries, then Nickel Metal Hydride (NiMH) batteries are both the most economical and longest lasting option. It may be surprising to you that these rechargeable batteries would last longer than a fresh set of alkalines, but they usually do by a large margin. NiMH has an advantage because cameras are high current drain devices and alkaline batteries have high internal resistance, which makes it hard for them to provide high current for extended periods. (If you try some in your camera, you'll be surprised by how hot they get.) Fortunately for us, NiMH batteries can provide high current without any problems.
Note that Nickel Cadmium (NiCad) batteries can also provide high current, but they don't last as long as nickel metal hydride and are more prone to memory effects.
Major manufacturers are starting to offer NiMH batteries and chargers, so you can usually find them at most electronics stores. Pay attention to the amount of energy each battery can store, measured in the odd units of milli-Amp Hours (mAh). You should look for batteries rated at least 1800 mAh. It's not uncommon to see 2300 mAh today and I've seen as high as 2500. You should also pay attention to the features of the charger. Does it have the ability to do rapid charging, or does it need overnight? Does it offer the option of conditioning the batteries too? Thomas Distributing is an interesting site with some cutting edge rechargeable battery technology and decent prices.
The Imaging Resource has a great comparison of leading NiMH batteries.
There is supposedly little or no memory effect in lithium ion batteries. With use, however, they will eventually lose their ability to retain a charge.
No.
Really.
There are many poorly thought out theories about how digital camera equipment can get damaged in airport x-ray machines, but they typically don't make much sense. One theory hypothesizes damage to flash memory, but forgets that (1) countless other devices with flash memory (cell phones, laptop computers, etc.) have been going through x-ray machines for years without incident, and (2) printed circuit boards with flash memory just like those in your camera are scanned by x-rays as part of normal quality assurance procedures for electronic products. Another theory hypothesizes that the sensor can be damaged. However, electronic sensors quite similar to the ones used in cameras are used as x-ray detectors, so bombardment with x-rays is part of the normal operation of these devices. A device not unlike your digital camera may be at the receiving end of the radiation in the x-ray machine, where it is capturing and displaying information for the technician.
It is true that exposure to massive amounts of radiation can damage your camera - and just about anything, for that matter. There is always a chance that a new generation component in your camera will be more sensitive to x-rays than previous generations of equipment, or that new x-ray machines will be dramatically more powerful, but I'm not terribly worried about it.
While some sort of convergence between still camera technology and video camera technology is inevitable, and perhaps even rapidly approaching, digital video camera stills are currently nowhere near the quality of digital still cameras. They lag both in resolution and quality, e.g., a three megapixel still from a digital video camera will look much worse than a still from a three megapixel digital still camera.
I wish I had a clever explanation of why this is true, but I don't. All I can offer is my experience: I own a Sony DCR-PC100 and I think the stills from it stink. You can check if things have improved with more recent video cameras (they haven't as of mid 2005) by visiting dvspot and downloading some of the still image samples.
Digital SLRs are typically 3:2, while compact cameras are typically 4:3. You will find some disagreement about which is the "best" aspect ratio. Keep in mind that traditional televisions are 4:3, while a common print size is 6" by 4", which is 3:2.
If you expect to be editing your photos later in software, then I would suggest shooting in the native format of your sensor, but keeping your intended use in mind. For example, if you expect to print in 4x6, then you should leave some extra space on top or on bottom for cropping. Shooting in the native format of your sensor gives you more information and leaves options open for different uses later.
If circumstances will force you to print or display your photos without the benefit of software editing first, then you might consider shooting in the aspect ratio of your target device. The benefits of this are that your camera may provide hints to help you frame properly (by darkening parts of the LCD), and it will remove any uncertainty about which part of the photo will get cropped.
There are some disputes about how to count things, but Sony, Canon and Kodak seem to be the sales leaders, as of 2005.
This is a neat trick that can give some strange motion-blur-like effects. Unfortunately, it's not possible on most digital cameras. Obviously, you can do it if your camera has a manual zoom ring.
If you know of a camera with an unexpected ability to do this, please let me know and I'll mention it here.
The most expensive single part will almost always be the sensor. Portelligent does teardown analyses of digital cameras. They used to have a free report on the older Canon S10. This might still be available by request, but is no longer available to the general public. On the older Canon S10, the sensor, which was supplied by Sony, was the most expensive part.
If you want to know how many pixels per inch you need to get good results at a particular size then check: How large can I print my digital photos?
However, the best answer to your question is that you are thinking about the problem the wrong way. With a digital camera it is very easy to crop photos and print only the part you want. If you have a high resolution camera and you are willing to crop, it's just like having a more powerful zoom lens. A 2MP camera will produce 8x10 images at 160 PPI, which is enough to produce pleasing results for many people. If you get a 4MP camera, you can crop away 30% of the image and still get a 160 PPI 8x10. This is like increasing your zoom range from 3X to 4X. Don't think just in terms of how large you want to print, but in terms of how much you want to extend your zoom range.
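Here's a rough check of that arithmetic, treating the nominal megapixel count as fully usable (a simplification):

    import math

    def approx_ppi(total_pixels, print_w_in, print_h_in):
        return math.sqrt(total_pixels / (print_w_in * print_h_in))

    print(approx_ppi(2_000_000, 8, 10))              # ~158 PPI from a 2 MP camera
    print(approx_ppi(4_000_000 * 0.70, 8, 10))       # ~187 PPI keeping 70% of the pixels
    print(approx_ppi(4_000_000 * 0.70 ** 2, 8, 10))  # ~157 PPI keeping 70% of each dimension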
Unless you think that you wouldn't like a zero-weight, zero-size method of increasing the range of your zoom lens, you should always be pleased to have more resolution. (See What resolution camera do I need if I want to produce prints of a certain size? above.)
There are some practical issues to consider. While prices for storage are dropping very rapidly, you can saddle yourself with some expensive storage requirements if you buy a very high resolution camera, so be sure to budget for storage when shopping. In some cases, manufacturers will offer higher resolution models that don't offer significant improvements over lower resolution models. This will occur when manufacturers push to get a higher resolution model out before they have mastered controlling the noise produced by the smaller pixels required for the higher resolution model. Be sure to read reviews carefully to make sure that the extra resolution you are buying isn't coming at the expense of much greater image noise.
Does your camera have an orientation sensor? If it does, then there's a good chance that the rattle is the normal functioning of the orientation sensor. Do a few experiments tilting the camera and taking pictures to see if the sound of something moving in the camera correlates with the orientation sensor reporting a change of orientation. If they're correlated, then there's a good chance that everything is normal.
If your camera does not have an orientation sensor, there are some other possible explanations for the rattle. Lens mechanisms in some cameras have a surprising amount of play. Try holding the lens steady with one hand as you move the camera around. If the rattle goes away, then you're probably just seeing normal play in the lens mechanism.
It should go without saying that excessive shaking of your camera probably isn't a great idea. If your camera works, don't risk damaging it with lots of unnecessary shaking just to satisfy your curiosity.
This is normal and it will not affect your pictures. What you are seeing is blooming, which is harder to prevent when a CCD is read in video mode than when it is read in still frame mode. (For example, it appears that Sony uses the interline transfer area as a sort of antiblooming gate when the sensor is operated in still mode. Sony spec sheets indicate that in still picture mode, the interline transfer area should be swept of a blooming signal before the captured image is read off the sensor. In video mode, the interline transfer area is most likely read concurrently with the integration of the next frame.)
For digital cameras, ISO is shorthand for the ISO 12232:1998 specification maintained by the International Organization for Standardization.
This standard specifies signal to noise ratio and brightness requirements (or saturation for cameras that are limited by well capacity) for a camera to earn a certain ISO rating. These ratings are intended to be similar to those of ISO 5800:1987, which specifies ratings for film. Thus, at a given f/stop, shutter speed, and ISO, both film and digital exposures should produce roughly the same brightness as output. Note that in practice this isn't always the case due to many factors including interpretation of the standard, different tone curves, rounding, and marketing considerations.
As with traditional film ISO ratings, increasing the ISO corresponds to an increase in sensitivity. For example, in moving from ISO 100 to ISO 200, while keeping the f/stop constant, you will achieve the same exposure by using a shutter speed twice as fast.
In practice a single camera can achieve multiple different ISO ratings by applying some form of amplification to the signal coming off the sensor. This can be done by applying analog amplification to the signal before it hits the A/D converter, or by bit shifting the results after they have gone through the A/D converter. Cameras may apply a combination of these approaches, depending upon the desired ISO. Which is best will depend upon whether amplifier noise or A/D converter noise is larger.
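Here's a toy illustration of purely digital gain; the numbers are invented, but it shows why bit shifting adds no new information and sets up the even/odd test described below:

    import numpy as np

    rng = np.random.default_rng(0)
    adc_values = rng.integers(0, 4096, size=10)   # pretend 12-bit A/D readings

    iso_pushed = adc_values << 1                  # one stop of purely digital gain
    print(adc_values)
    print(iso_pushed)                             # all even: no new information was added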
In the sensor section we discuss why high ISO shots have more noise.
Note that you may want to read about the meaning of ISO for digital cameras, and why noise increases with ISO, before continuing with this answer.
Results will vary from camera to camera, but manufacturers typically choose the best combination of analog amplification and digital bit shifting, so you are unlikely to get less noise if you underexpose and push. However, if you can verify that your camera is simply bit shifting, then you might prefer to underexpose and push since this gives you greater flexibility to recover from overexposure by giving you the option of pushing less than one stop if you decide later that one stop was too much.
A more rigorous method is to hack David Coffin's RAW converter to count the number of even and odd values in your RAW file. For ISO values with analog amplification, you should see roughly equivalent numbers of even and odd values, but for ISO values that have been bit shifted, you should see almost all even values (or only values divisible by 4, etc. for further bit shifts). Note that you may see a few odd valued pixels that have been remapped.
Using this approach, one can verify that, for example, the Canon EOS 20D does analog amplification through ISO 1600, but achieves "H" (ISO 3200) by bit shifting ISO 1600.
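If hacking dcraw isn't your idea of fun, the same even/odd census can be done from Python. The rawpy wrapper around libraw is an assumption about your toolchain; any method of getting the undecoded sensor values into an array will do:

    import numpy as np
    import rawpy

    with rawpy.imread("iso3200_shot.cr2") as raw:
        values = raw.raw_image.copy().ravel()

    odd = int(np.count_nonzero(values & 1))
    even = values.size - odd
    print(f"even: {even}  odd: {odd}  odd fraction: {odd / values.size:.4f}")
    # Roughly 50/50 suggests analog gain; almost no odd values suggests bit shifting.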
Some cameras will not go below 1/30 second in full auto mode, even if this means badly underexposing the picture. You can check this by putting the camera in auto mode, pointing it at something very dark, and seeing what shutter speed you get. If the camera refuses to go slower than something like 1/30 even though the scene clearly calls for a longer exposure, then your camera has this restriction in auto mode.
A workaround is to change to aperture priority mode. Keep in mind that unless you are using a tripod, you will probably see motion blur and camera shake below 1/30 second.
When you read your manual, you will most likely learn that this is a warning that the currently selected shutter speed is too low for you to have a good chance of getting a shot free of blur from camera shake.
If you don't find anything and you still want a wireless remote, AND you're sufficiently crafty, you might be able to work something out. The next step is to check if your camera can be controlled by a PC. Many recent models have this ability through the software included with the camera. You might also check out gphoto (if you're a Linux user), DSLR Remote Capture, PhotoPC, cam2pc, camctl, or other options listed on this page at Steve's Digicams.
Let's assume that you figured out how to control your camera from your PC. How do you go wireless? The last step is to get some kind of wireless presentation device designed for wireless control of PowerPoint presentations. (Try googling "presentation mouse".) Connect your camera to your laptop/PC, then position the mouse cursor over the button for capturing images. Click the button on your presentation mouse, and you have a wireless shutter trigger. Of course, you will need a laptop or PC to make this work.
The story here is pretty much the same as with wireless remotes. Check your manual, then your manufacturer's web page.
You might be hoping to get a cable release, like the ones that were popular for film SLRs for many years, that screws into the shutter release button. Unfortunately, there does not appear to be a standard mechanical or electronic solution for digital cameras.
Note for Sony users: Search for the RM-DR1 or RM-VD1.
Some digital cameras will not automatically increase the gain on the LCD or EVF in dark situations. This doesn't necessarily mean that your shot will be underexposed. A workaround that helps with some cameras is to half-press the shutter, which sometimes causes a brief brightening of the LCD while the camera gets a focus lock. You can do this repeatedly if needed.
The answer depends upon why you rarely used high ISO with film. If you rarely had opportunity, need or reason to shoot in conditions that warrant high ISO, then perhaps high ISO performance really doesn't matter for you. However, if you avoided high ISO situations because of the inconvenience and quality issues associated with high ISO shooting, then you might consider if a digital camera with good high ISO performance could expand the realm of situations in which you shoot.
Purple fringing (PF) has been a subject of great debate in the digital photography community. A popular, but demonstrably incorrect, theory for some time was that PF was caused by sensor blooming. The following suggest strongly that blooming is not a factor:
Despite these arguments, some persisted in the belief that PF was caused by blooming, so I took these sample images to debunk the blooming theory. In these shots, we see that the lens is the factor that controls PF, and not the exposure. The identical exposure with two different lenses will produce vastly different amounts of PF. We also see that focus plays a major role.
So what does cause PF? One explanation is some form of chromatic aberration. While purple chromatic aberrations are possible and do occur with film, as shown in this thread, they seem to be less frequent than with digital. This suggests that there is something special about sensors that makes it more of an issue for digital cameras - perhaps an issue involving sensitivity beyond the visual spectrum or a penetration depth/angle of incidence issue. Canon's Chuck Westfall, who is typically quite reliable, suggests birefringence in the microlenses as the cause. Unfortunately, this explanation is at best incomplete. Without further detail, it leaves the demonstrated importance of the primary lens unexplained. It also raises the question of how what should be a small refraction near the surface of the sensor could cause fringing that spans many pixels.
See also:
Stopping down typically reduces purple fringing. You might also try a different lens. My experiments show that different lenses exhibit very different amounts of fringing at the same aperture.
The following comments are directed towards digital SLRs with interchangeable lenses and large sensors. I am excluding the Olympus E-10 and E-20 because they use small sensors and have a permanently attached lens (though they are still SLRs).
The main reason is the sensor. While still smaller than a 35mm film frame, digital SLRs have very large sensors. In fact, current digital SLR sensors are among the largest chips of any kind ever mass produced. Chip cost grows dramatically with chip size. Moreover, progress has been very slow in reducing chip cost per unit area. Historically, semiconductor prices have gone down and functionality has gone up as a result of shrinking feature sizes. Contrary to the hopes of many, there is little historical basis for rapid decreases in cost per unit area. The upshot: Don't expect an inexpensive full-frame digital SLR any time soon. Note: Sony's latest 6MP APS-sized sensor, which is believed to have found a home in the Nikon D100 digital SLR, is expected to cost $750 apiece according to this PC World Article. Current digital SLR prices suggest that significant economies have been achieved, but that large sensors remain expensive.
A secondary reason for high digital SLR costs is the electronics. Digital SLRs typically achieve high frame rates with high resolution sensors. It requires reasonably powerful electronics to move, digitally process and compress the data at a reasonable rate. Large RAM buffers are also required to give the camera reasonable burst shot capability. All of this electronics costs money.
Some have suggested that entry level digital SLRs have artificially high prices and small (APS sized) sensors for marketing reasons. As far as I can tell, there is no factual basis for such comments.
The reason is that when you aren't snapping a picture, the mirror is reflecting the light away from the sensor and up towards the viewfinder. I'll briefly mention some approaches, real and proposed, for dealing with this issue:
SLR aficionados will be quick to point out that with an optical viewfinder, there is no need for live LCD preview. They will go on to say that an optical viewfinder is much more accurate and a more reliable indicator of proper focus than any LCD. These points are generally true, but it is also true that LCDs, especially the tilt/swivel variety, permit the photographer to operate the camera without his head pressed against it. This can be useful in some situations.
Finally, I should mention a recent (6/05) product called zigview that attaches a small camera and LCD display to the optical viewfinder of an SLR, giving you an LCD display of what your eye would see if you looked in the viewfinder.
There are two causes of this:
Higher end cameras generally take a very conservative approach to digital sharpening (i.e. apply little or none) because it's impossible to undo oversharpening and advanced users prefer to sharpen images individually, in a manner appropriate to the image, medium and final size. Many owners of higher end cameras find it difficult to tolerate the sharpening levels applied by lower end cameras once they become accustomed to having more control. The digital trickery seems to jump out of the image and looks very fake to them. However, some still prefer the convenience of the one-size-fits-all approach and aren't terribly bothered by the halos. There's no shame in being in this latter category, but it is worth understanding the difference between optical sharpness and digital enhancement so that you don't underestimate what is offered by a high end camera or overestimate what a low end camera offers.
Here's another image from Steve's Digicams of the same scene, but this one was taken with an EOS-10D. If you zoom in on the wires, you'll notice little or no halo effect. This is because the 10D (as used here with a decent, mid-priced zoom) is offering some genuine optical sharpness with very little digital enhancement. Of course, you can always add this yourself later if you want.
My opinion is that they usually need less work, although many people come to believe that digital SLR shots need more work as a result of poor workflow or tastes that are shaped by their first digital experience. Before we start on this one, please be sure you've read:
At this point, we should understand that it is best to use low sharpening when shooting and then to apply the amount of sharpening appropriate to the output medium and output size, thereby minimizing noise and artifacts from oversharpening. This is true for digital SLRs, as well as compact digital cameras.
My impression is that some people come to believe that digital SLR shots require more work because their tastes are shaped by their first digital experience with compact cameras. Many of these users do not follow a good workflow and develop a taste for whatever guess the manufacturer has made about the appropriate sharpening level for typical uses - usually heavy. When these users shift to digital SLRs, their first experiences are often with low priced zooms, which may not be very sharp to start with. The low sharpening done by most digital SLRs, combined with mediocre optics, produces results that have a very different feel from what they are accustomed to. This results in the complaint that digital SLRs "need" postprocessing, while their previous compact cameras did not. I think a more accurate statement would be that these users have become accustomed to a particular set of shortcomings in unprocessed shots from their old cameras and that they are now unhappy when presented with a different set of shortcomings.
I'm not aware of any color digital camera for which proper workflow does not involve some postprocessing.
Some cameras store RAWs, TIFFs, or movies in a separate folder from where they store JPEGs. Be sure to check for subfolders in the directory that usually contains your JPEGs on your memory card, as well as parent folders.
This will depend upon the compression level and image size of your shots. The best way to estimate this for a particular camera is to go to a place that reviews cameras, such as dpreview, and check the review for the average file size at a particular resolution and compression level. Now divide the memory card size by the file size.
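The arithmetic is then just division; for example:

    card_mb = 1024          # a 1 GB card
    avg_file_mb = 2.3       # say, a fine-quality JPEG from a 5 MP camera
    print(int(card_mb / avg_file_mb), "shots, give or take")   # ~445

The 2.3 MB figure is only an example; substitute the average file size reported in the review of your camera.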
JPEG is a lossy compression method. This means that some sacrifices in image quality are made to reduce the size of the image. I give a somewhat informal description of the process in my answer above on: In what format should I save my images once I've transferred them to my computer?
A more detailed description can be found in the Compression FAQ, item 75.
With the exception of microdrives, all memory cards today use a technology called flash memory. It is the same technology that is used to store your BIOS on your computer and your preferences on your cell phone. Flash memory does not require power to retain its information, which means that your data will remain intact for many years without degradation or loss. In principle, there is a limit to the number of times you can write to flash memory, somewhere between tens and hundreds of thousands of writes, depending upon how you count, but a typical user will never come close to this in the lifespan of the product. (See this Kingston Document, p. 4.) Flash memory is reasonably resistant to magnetic fields, impacts, x-rays, and extreme temperatures (within reason). It is superior to magnetic media, i.e., disks, in these dimensions. Flash memory is susceptible to damage from the Post Office's newly installed biological decontamination machines, so avoid sending flash memory through the U.S. Postal Service if you think the postal service might be using such machines in the area to which you would send your mail.
A microdrive is actually a hard drive in the form factor of a compact flash card. This is a remarkable technical achievement that results in lower cost per byte than compact flash. The microdrive does have some drawbacks: It is slower, consumes more power, and is more susceptible to damage from extreme conditions than flash memory. It is unlikely that anybody will offer microdrive technology in a smaller form factor (memory stick, Secure Digital, etc.) in the near future. These formats are significantly smaller and thinner than compact flash and would require another significant breakthrough in miniaturization.
Compact flash is the most versatile form factor, especially if your device is Type II compatible and can, therefore, use a microdrive. This segment of the market appears to be the most price competitive and tends to support larger capacities due to the larger physical size of the medium. The compact flash approach appears to include some controller logic on the storage device. While this could potentially increase cost, in practice it has not and it seems to ensure greater compatibility between new compact flash storage cards and older devices.
Other storage formats (memory stick, MMC, smart media, etc.) have traditionally been inferior to compact flash in every way except size. An annoying property of many of these media is that they appear to be controllerless, leading to compatibility problems between older devices and new storage cards. For example, my Sony DCR-PC100 will not work with 128MB memory sticks.
One change in recent years has been a shift towards secure digital (SD) format cards. While capacities are not as high as compact flash, they are conveniently small (some might say too small) and are available in high speed configurations at quite competitive prices. Some manufacturers seem to be slowly shifting towards SD format, starting with their less expensive models. SD may eventually replace compact flash.
Type II is larger and not all devices that use compact flash can accept type II. Check your manual. Type II is typically used for very large memory cards or microdrives.
In my opinion, it should not be a major factor for most purchasers of mid to high end digital cameras. It may be a factor at the low end for extremely price-conscious consumers. For example, a previous investment in one memory technology may give one camera an advantage over another because an additional $50 investment in more memory is a significant percentage of the cost of the camera.
High end purchasers are probably shopping on features and not small differences in price. If you're already spending $1000 on a camera, you should get the camera with the features you care about rather than short changing yourself on features to save $50-$100 on memory. Flash memory prices are quite low and an additional investment in memory should be considered merely a small increment to the price of the camera. I'm always amused by the self-righteous claims of people with loyalty to one type of memory or another. Some will even buy a $100 more expensive camera to avoid a $100 investment in a new memory technology and seem to think they are acting on some kind of principle.
In some cases, it may be desirable to buy into a memory technology that is compatible with your other devices. My experience has been that I share memory between devices far less than I would have expected.
In general, you should focus on features and total cost of ownership. The one group for which microdrive compatibility is a huge factor would be people who go on long shooting outings without access to a laptop or computer to upload images. This would include travelers on long vacations, or journalist types. If you actually take several gigabytes of shots per outing, then buying a camera that supports a microdrive (and plenty of batteries) is probably a wise move.
Of course not! We're talking about digital data here, folks. Do your Microsoft Word files degrade when you copy them? Of course, you should be concerned about your media degrading over time. No storage medium is permanent. CD-R and Zip media will last low tens of years at best. Make multiple backups and check their integrity regularly.
If you have an older non-USB camera, then it's a no brainer. If you have an older USB 1.X camera, and your computer has a USB 2.0 connection, then you can still get a significant speedup from using an external reader. Even if your camera is USB 1.0 or USB 2.0 capable, you may still find that reading from an external reader is faster.
There may be other usability advantages to an external reader as well. Despite USB connectivity, some cameras still require you to use a special program to transfer pictures from the camera, or to go through Microsoft's clunky camera device interface. An external reader will let you use your memory card as if it were another hard drive, a potentially significant improvement in convenience.
Finally, there is less drain on your battery if you use an external reader and less chance of accidentally running down your battery. I've done this one: The camera won't power off if it's connected to the computer. I've plugged in the camera to transfer a couple of files, didn't bother to connect the AC, and forgot to disconnect it. A couple of hours later when I wanted to take some more pictures, I was really disappointed.
RAW mode contains the raw data from the sensor before any image processing algorithms have been applied. This means that it contains more bits per color channel than your typical JPEG or TIFF and, more importantly, no irreversible image transformations have been applied yet. Many of the things that your camera does before saving a file as JPEG or TIFF are hard or impossible to reverse. Even though TIFF is a lossless format, it is lossless after several irreversible transformations such as sharpening have been applied. You will be able to do extreme corrections with RAW files that are difficult or impossible to do as well starting with a JPEG or TIFF because information has been discarded already by the time you start working with a JPEG or TIFF.
An additional advantage of RAW is that it lets you benefit from improved image processing software that may exceed the capabilities of what is running on the firmware in your camera. Even if you think that your camera is doing well enough compared to the RAW conversion software available today, shooting in RAW gives you the option of stepping up to something better in the future.
RAW will typically take more space than JPEG since RAW images have little or no compression. Ideally, RAW images should be losslessly compressed, but some manufacturers apply no compression, which makes for very large files, or lossy compression, which is contrary to the spirit of RAW. RAW images from cameras with color filter array (CFA) sensors will typically take less space than TIFFs because CFA sensors capture only one color channel per pixel.
A disadvantage of RAW is that it is a proprietary format and you will need to use a special procedure to convert RAW files into a format that most programs will understand. This situation has improved somewhat in recent years as more third party software has appeared to aid in RAW conversion.
Should you care? If you find yourself second-guessing your (or your camera's) white balance decisions, applying lots of different sharpening methods, or trying to push or pull that last bit of resolution from an image where the exposure isn't quite right, then you should be interested in RAW mode. Some people think that RAW is meant primarily for advanced users. While all users can benefit from RAW, beginners may have the most to gain since they are most likely to make white balance and exposure mistakes and would most benefit from easy ways to correct these mistakes.
First you need to understand JPEG. JPEG is a lossy compression method, which means that some image quality is sacrificed to make the image file smaller. The amount of this sacrifice can be varied, though there is no agreed upon scale. Your camera probably has different JPEG modes, which correspond to different trade offs between quality and size.
TIFF is actually a family of related formats with different compression options. The standard is fairly complicated and not all programs handle all types of TIFF files. Most programs, however, can handle a simple TIFF format in which no compression of any kind is applied. This requires 8 bits of resolution for each of the red, green and blue channels, or 3 bytes per pixel. Thus, the total file size will be 3xHxV bytes, where H is the horizontal resolution and V is the vertical resolution of the image.
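A quick worked example of that formula:

    h, v = 2272, 1704                   # a typical 4 MP frame
    size_bytes = 3 * h * v
    print(size_bytes / 2 ** 20, "MB")   # about 11 MB before any TIFF overhead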
The advantage of TIFF is that no sacrifices in image quality are made to reduce the space requirements of the image. How significant is this? It depends on a number of factors, including the basis of comparison that you are using. Some cameras use much more compression in JPEG than others. Some owners of cameras with the option of very low JPEG compression swear that they can't tell the difference between low-compression JPEG and TIFF. I can usually see the difference on my 18.1" digital LCD: When I look carefully at regions of the image of roughly even color, I can detect very faint, small blocky looking areas (JPEG artifacts). These would be more noticeable if you made a big enlargement.
Sensor manufacturing is similar to logic chip manufacturing, which means that the cost of the device grows dramatically with the area of the device. (Larger area means a greater chance of defects, which means a higher reject rate, which means higher cost. See this great Java demo for a graphical demonstration of how yields drop.) Most of the progress in microchips in the past several decades has addressed the problem of making denser chips, which means that more stuff is crammed into the same amount of space. The number of defects per unit area has not improved dramatically. Thus, microprocessors, RAM and sensors have stayed more or less the same physical size over the past few decades even as the amount of content per unit area has grown exponentially.
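If you want to play with the numbers yourself, the classic first-order yield model says that the fraction of good chips falls off exponentially with area. The defect density below is purely illustrative, not a figure from any real fab:

    import math

    def yield_fraction(die_area_cm2, defects_per_cm2=0.5):
        # Simple Poisson yield model: probability of zero defects on the die
        return math.exp(-defects_per_cm2 * die_area_cm2)

    print(round(yield_fraction(0.25), 2))   # small compact-camera sensor: ~0.88
    print(round(yield_fraction(8.64), 3))   # full-frame 24 x 36 mm sensor: ~0.013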
Smaller chips allow camera manufacturers to enjoy other economies: They can use smaller lenses, which reduces the cost of the optics, and it lets them make smaller, lighter cameras. Most consumers prefer smaller, lighter cameras.
Larger sensors have several advantages. They let you use longer focal length lenses, which give you more control over depth of field. Each sensor element also covers a larger area, which means that it collects more photons. A larger number of photons collected means a higher signal to noise ratio (lower photon shot noise) and means that less amplification of the signal coming off the sensor is required. This yields cleaner images, especially at higher ISO levels.
CCDs have traditionally been the standard for high quality digital imaging. The idea was developed in the 1970's and has been refined to the point where CCDs have very low dark current and readout noise. They are manufactured using microchip manufacturing methods, but the process is more complicated than typical microchip manufacturing. Thus, CCDs are relatively expensive to manufacture.
CMOS is the same manufacturing process used to make most microchips, such as Pentiums or PowerPCs. The idea for CMOS sensors preceded CCDs, but the style of CMOS sensor that could be produced through the mid 1990's had problems with dark current and fixed pattern noise, relegating CMOS to cheap webcams and similar devices. Nevertheless, the motivation for developing high quality CMOS sensors was strong. A CMOS sensor is basically a DRAM and it can, in principle, be manufactured using standard CMOS methods, making it a less expensive alternative to CCDs. CMOS also consumes less power.
In the 1990's it became practical to start adding extra transistors to CMOS pixels, creating active pixel sensors (APS). Active pixels incorporate an amplifier on the pixel site, which increases the strength of the signal coming out of the pixel. Combined with techniques such as correlated double sampling (CDS) for reducing fixed pattern noise, this greatly improved the quality of CMOS sensors. The ability to add extra components to the pixels, or to the area surrounding the sensor, also introduces new options for improving quality and reducing cost. Pixel level A/D conversion can be used to enhance dynamic range, or on-chip A/D conversion can be done to reduce cost, as in Foveon's F19 sensor.
Manufacturers (e.g., Micron) also seem to be making progress on reducing dark current in CMOS sensors, but the details of this aren't getting much public discussion.
Canon was the first to bring high quality APS technology to consumers with the EOS D30, which was a truly groundbreaking product. The D30 incorporated several innovations from Canon in controlling noise, as described in this EE Times article. Some people have misunderstood the type of noise reduction described in this article. Nothing described in the article smooths detail across pixels, so there is no trade off between noise reduction and detail. The advances described in the article strictly improve performance with no negative side effects since the noise reduction is at the individual pixel level.
There remains some debate about how much less expensive CMOS sensors are than CCDs. Those who argue that CMOS sensors are cheaper will point out that they are simpler to manufacture and can be made using standard equipment, often equipment that has already been amortized because the line width needed for CMOS sensors is larger than that of cutting edge logic chips. On the other hand, some will argue that two factors push CMOS sensor manufacturing closer to CCD costs. First, they argue that extra steps common to both processes, such as the addition of microlenses and color filters, play a large part in the cost equation, decreasing the relative cost impact of the other fabrication steps. In addition, it is argued that changes to the manufacturing process needed to reduce dark current further push the manufacturing cost of CMOS sensors closer to CCDs. Since manufacturers don't release their costs to the public, we may never know the final word on this. However, it does seem that products with CMOS sensors are typically cheaper than their CCD counterparts with equivalent sensor sizes.
CMOS sensors are rapidly dominating the digital SLR market and the best examples of CMOS technology match or exceed the best examples of CCD technology.
For more information on camera sensors:
Most sensors see the world in black and white. To get a color image from a B&W sensor, an array of filters is placed over the sensor. Typically these filters are in an alternating pattern of the primary additive colors: red, green and blue. Since the eye is more sensitive to green, green is favored in the pattern, so one row will have filters alternating "RGRGRGRG..." and the next row will have filters alternating "GBGBGBGB..." With the filters added, the sensor will now be able to detect one of either red, green or blue at each pixel. (Note that we are using language somewhat loosely here. Each filter allows a range of wavelengths to pass, with the range centered around specific red, green or blue wavelengths.) To construct the complete color information for each pixel, the sensor must reconstruct two out of the three components of the signal by using some form of interpolation based upon the colors registered at neighboring pixels. For example, an "R" pixel in the middle of an "RGRGRGRG..." row might get its green value by taking the average of the recorded green values at the two neighboring green pixels. (In fact, more complicated schemes than this are used, but the principle is the same.)
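Here's a toy sketch of this interpolation in Python, extending the two-pixel average mentioned above to all four adjacent green samples around a red site. Real demosaicing algorithms are far more sophisticated; the numbers are invented:

    import numpy as np

    # A tiny RGGB mosaic: each cell holds only the one color sampled there.
    mosaic = np.array([
        [ 10, 200,  12, 210],   # R  G  R  G
        [190,  30, 185,  28],   # G  B  G  B
        [ 11, 205,  13, 207],   # R  G  R  G
        [195,  29, 188,  31],   # G  B  G  B
    ], dtype=float)

    def green_at(y, x):
        # At a red or blue site in an RGGB pattern, all four of the
        # up/down/left/right neighbors are green samples.
        neighbors = [mosaic[y - 1, x], mosaic[y + 1, x],
                     mosaic[y, x - 1], mosaic[y, x + 1]]
        return sum(neighbors) / len(neighbors)

    print(green_at(2, 2))   # interpolated green at the red pixel in row 2, column 2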
While Bayer interpolation is a clever approach that allows us to construct good images from otherwise monochromatic sensors, it is not perfect. Here we debunk some common myths about Bayer interpolation:
Foveon recently introduced a new sensor technology which they call X3. This is a fundamentally different approach to producing a color sensor. Typical approaches place a Bayer pattern mosaic of color filters over the sensor and use interpolation. The Foveon sensor detects distinct red, green, and blue signals for each pixel. How can it do this? It turns out that in the materials used to make CMOS semiconductors (doped silicon), different wavelengths of light are absorbed at different depths. Foveon layers three photodiodes on top of each other on the surface of their chip. No color filters are used. Instead, the light automatically activates the right sensor based upon its depth of penetration. Since no interpolation is required in the spatial domain, Foveon sensors can capture sharper images than Bayer pattern sensors with the same number of pixels in the X-Y plane. (Note that Foveon likes to count all of their photodiodes as pixels, so a 4.5MP X3 image samples 3 colors at 1.5 million locations in the X-Y plane, while a 4.5MP Bayer pattern sensor would sample 4.5 million locations in the X-Y plane, but only one color at each location.)
Further reading:
Some references:
Let's assume that we are keeping the EV constant, so the final output is the same brightness in a high ISO vs. a low ISO shot against which we are comparing. We'll also assume that we're doing analog amplification.
First, we'll consider the case where the ISO is boosted, but the shutter speed is kept constant, which implies that we have used a smaller aperture (a higher f-number). Since the integration time is constant, dark current accumulation will be the same in both cases, as will be readout noise and reset noise. Since less light was striking the sensor during the exposure, photon shot noise will be lower in absolute terms - since it is proportional to the square root of the signal and the signal is lower. However, the signal to noise ratio (SNR) will be worse overall since other noise sources have remained constant, or shrunk more slowly than the signal. When this signal is fed through the amplifier, this has the effect of multiplying both the signal and noise by a constant. Since the SNR was worse than in the low ISO case to begin with, it will remain worse after amplification and get worse still with the addition of amplifier noise.
As a slight complication, we can consider the case where we keep the f/stop constant and use a faster shutter speed to compensate for the increased ISO. The analysis is basically the same as above with one difference: Dark current noise can actually decrease, which can give a slight improvement in SNR. In practice this effect will often be overwhelmed by the other factors decreasing SNR. The only case where it might make a difference would be for very long exposures with sensors that are prone to high dark current accumulation. (Astrophotography with crude CMOS sensors might be such a case.)
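Here are some toy numbers for the shot noise part of that argument. Read noise and dark current are lumped into one constant and the values are invented, but the trend is the point:

    import math

    def snr(photons, other_noise_electrons=10.0):
        shot_noise = math.sqrt(photons)   # photon shot noise grows as sqrt(signal)
        total_noise = math.sqrt(shot_noise ** 2 + other_noise_electrons ** 2)
        return photons / total_noise

    print(round(snr(40_000), 1))   # "ISO 100": well nearly full, SNR ~200
    print(round(snr(10_000), 1))   # "ISO 400": a quarter of the light, SNR ~100

Amplifying the ISO 400 signal afterwards multiplies signal and noise together, so the worse ratio is preserved (and then amplifier noise makes it slightly worse still).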
Blooming is the overflow of charge from one pixel to another. It should not be confused with purple fringing. Blooming is a more serious concern with CCD devices, where charge can flow down a readout column creating a distinctive white streak, as seen here or on this NASA composite from Mars.
Blooming can be mitigated in CCDs through the use of an anti-blooming gate, which bleeds off excess charge. However, such structures do rob photosensitive area from each pixel. From reading Sony's CCD datasheets, it appears that the interline transfer area of interline transfer CCDs may be usable as a kind of antiblooming device during exposure. (There are references to sweeping the blooming signal before reading the exposure from the chip.) I'd welcome additional insights on this.
CMOS sensors are inherently resistant to blooming, in part because they are not designed to transfer charge from one pixel to the next, as is the case with CCDs.
This is a very difficult question without a simple answer. First, you should understand that there are two types of inks in use today: dye based inks and pigment based inks. Dye based inks are the most popular kind, and they are used in the vast majority of printers from Epson, HP and Canon. (Some exceptions are Epson's Stylus C80 and Stylus 2000.) When dye based inks first became popular in photo-quality printers, they didn't have great longevity, and would fade in anywhere from 6 months to a couple of years if exposed to reasonably strong light. In response to this, Epson, HP and Canon developed new colorfast inks and papers. (Yes, you should pair the inks with the papers for best results.) These have good resistance to fading from light, with ratings of 20 years or more. However, they may fade in as little as a matter of weeks when exposed to certain types of atmospheric gasses. This is why photos printed with dye based inks are best stored under glass.
There are several printers with pigment based inks on the market, such as Epson's Stylus C80 and Stylus 2000. There are some tricky technical issues involved in getting a wide color gamut from pigment based inks, but progress is being made. These inks promise to be more resistant to fading and could hold up for 75 years or more under less than ideal conditions.
Here are some sites where you can learn more about print longevity:
The difference can be quite large, both in terms of quality and longevity. On low quality paper, colors can bleed and images can fade in a matter of weeks or months.
Dye sublimation is a technique in which a waxy, solid dye is heated until it passes directly from solid to vapor (sublimation). The vaporized dye is deposited on the paper, where it recondenses and forms dots. The final print is covered with a coating to prevent fading.
Dye sublimation can produce a wide color gamut and can result in prints with excellent longevity. The main problem with dye sublimation is the cost of the consumables.
Proceed with caution. Often the really inexpensive printers will not use the same technology as the middle or top of the line printers. Another common trick is for printer manufacturers to design their least expensive printers to accept very tiny ink tanks or cartridges. The printer may seem less expensive at first, but you wind up paying more in the end because you are buying more ink cartridges. Check all of these issues before buying.
By convention, when we talk about pixels per inch (PPI), we're talking about the number of pixels in your image that are displayed per inch of display medium, e.g., monitor or paper. When we talk about dots per inch (DPI), we are typically talking about the number of individually controllable display elements that your display device can fit in an inch of display medium.
A key difference is that DPI typically describes the capabilities of a display device, while PPI describes how much image information you are squeezing into each inch of display medium. Any time we change PPI, we change the output size or magnification of our image, whereas changing DPI doesn't necessarily change the size or magnification of the image.
The whole PPI vs. DPI distinction gets a bit confusing in the case of monitors. Monitors, especially LCD monitors, are best described in terms of the number of pixels they can display per inch, but people often talk about monitors in terms of DPI. They will say that a typical monitor displays between 70 and 100 DPI, meaning that it has between 70 and 100 monitor pixels per linear inch.
Note that PPI and DPI don't need to be matched in any way. You can send a 300 PPI image to a 720 DPI printer and get fine results. Your print driver and printer firmware will figure out how to arrange the dots to display your image in a satisfying way. That said, if you have a huge mismatch between your PPI and DPI, you might consider other ways of resampling your image than what is done automatically by your printer.
Note that this is a discussion of printer capabilities, not a discussion of the PPI needed to provide enough information for a visually appealing print on a sufficiently capable printer. For a discussion of that topic, see How large can I print my digital photos?
The answer to this question will vary with the underlying printing technology. At close viewing distances, images will start to look acceptable at as low as 70 DPI of continuous tone color, and will improve through 300-600 DPI. At some point in this range, typical viewers will become unable to see any improvement from further increases in DPI, but the exact point will depend upon the person and viewing conditions.
Note that very few devices are continuous tone devices, or have so many discrete tones per pixel that they can be thought of as continuous tone devices. Dye sublimation printers can be viewed as continuous tone in many cases, and for this reason can produce excellent results at just 300 DPI.
Inkjet technology uses a small number of base ink colors (typically between 3 and 9) to give the appearance of continuous tone. The DPI rating of an inkjet printer tells us the number of ink droplets of one of the base colors that can fit in a linear inch of paper. Those with a penchant for combinatorics might estimate the number of colors needed to give the impression of continuous tone and then work backwards to determine the equivalent continuous tone DPI of an inkjet printer, as in the sketch below. However, it's difficult to do this in a straightforward way, since manufacturers can use variable droplet sizes, and special ink and paper formulations that affect the way the inks blend as they land on the page.
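Here is one such back-of-the-envelope estimate in Python. It assumes simple on/off dots of a single ink grouped into square halftone cells, and a target of roughly 256 tone levels per color channel; real printers use variable dot sizes and much smarter dithering, so treat this as a rough illustration rather than a real specification.

    # Naive halftone estimate: how many on/off dots must be grouped into a cell
    # to fake roughly continuous tone, and what "continuous tone resolution"
    # that leaves from a given printer DPI. All numbers are assumptions.

    def tones_per_channel(cell_size):
        """An n x n cell of on/off dots of one ink can show n*n + 1 tone levels."""
        return cell_size * cell_size + 1

    printer_dpi = 1200        # illustrative inkjet DPI rating
    target_tones = 256        # roughly "continuous tone" per channel

    cell = 1
    while tones_per_channel(cell) < target_tones:
        cell += 1

    print(f"{cell}x{cell} cell -> about {printer_dpi // cell} continuous-tone "
          f"'pixels' per inch from a {printer_dpi} DPI printer")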
In practice, you will likely be able to see some dot patterns in a 4-color, 300 DPI system if you look closely. As you exceed this specification (in number of colors, DPI, or both), the dot patterns become harder to see. Somewhere along the path to 1000+ DPI with 4+ colors, most viewers lose the ability to see dot patterns entirely, but the precise point will depend upon the viewer and viewing conditions. (Of course, this assumes a well executed print system. You might still see dot patterns in a printer rated at 1000+ DPI if it is poorly designed or executed.)
If you want to test your ability to detect fine details, compare laser printer output with FAX output. Laser printers are typically at least 300 DPI, and most people will notice slight improvements up to 600 DPI. Most FAX output is 200 DPI, and the difference is quite noticeable to most people.
Note that this is a discussion about PPI and not printer DPI.
For color photographic-type images, you will start to get decent looking results at around 150 PPI and you will probably notice improvements through 300 PPI. For most people, there will be some point in this range where improvements won't be noticeable, but it will vary from person to person and it will also depend upon the quality of your original image and printer.
How do you compute these numbers for your photos? It's very easy. You divide the number of pixels in a dimension by the number of inches in that dimension. For example, suppose you have a 1600x1200 (2 megapixel class) image, and you want to print so that the long dimension is 10 inches. You need to divide 1600 by 10, which yields 160 PPI. This means that on a good printer, you will get decent looking 8x10 prints from a 2 megapixel camera, but that you would probably notice some improvement if you moved up to 3 megapixels and might notice improvements through 8 megapixels.
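The same arithmetic in a few lines of Python, using the 1600x1200 example from above; the other pixel dimensions are hypothetical round numbers chosen only to show the trend.

    def ppi(pixels_on_long_side, print_inches_on_long_side):
        """Pixels per inch for a given print size."""
        return pixels_on_long_side / print_inches_on_long_side

    print(ppi(1600, 10))  # 160.0 -- the 2 megapixel example from the text
    print(ppi(2048, 10))  # 204.8 -- a 3 MP-class image (assumed 2048 pixels wide)
    print(ppi(3504, 10))  # 350.4 -- an 8 MP-class image (assumed 3504 pixels wide)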
You may be wondering why 200 DPI looks awful for text (FAXes), while 160 PPI is considered in the acceptable range for photos. There are several reasons. First, we look at text in a different manner from the way we look at photos: we tend to hold it closer and stare more intently at small areas, i.e., individual letters. Second, today's color printers are capable of producing very subtle changes in tone and darkness, which gives objects graceful edges that are pleasing to the eye. Unless it is anti-aliased, text cannot benefit from this and looks jagged at low DPI.
There are many options for this. One of the more popular ones, though I haven't tried it myself, is Ofoto, which has now become Kodak's printing service. You can email (or otherwise upload) your images to these services and receive your prints by mail.
Other on-line options:
I've stopped adding to this list because I've become aware of a great Guide To Online Photo Albums, which contains detailed info on various sites and the costs for printing photos through them. (See also Richard Ackerman's list.)
A few words of advice for people trying these sites: Before you spend a lot of money on prints, order a few samples first. Their printing equipment may be calibrated differently from yours, so you should check things out to make sure that you're getting what you expected. For very large enlargements, you may want to try printing a small crop at the same PPI as your final enlargement to get a feel for whether your image will look OK at the target size and resolution.
Finally, many local camera shops and film developing services are starting to offer digital print services. These vary quite a bit in quality and sophistication, and the quality probably will not be as good as the best on-line sites. They'll also tend to be more expensive. However, some offer near-instant gratification:
This is a partial list to which I will hopefully be adding: