Talk:Dots per inch

From Wikipedia, the free encyclopedia

DPI printout

Some of this seems very subjective. —Preceding unsigned comment added by 68.225.92.194 (talk) 00:36, 22 August 2009 (UTC)[reply]

DPI Requirements

It seems to me that a section on DPI requirements is sorely lacking in this article. Unfortunately, I don't know where to find the physiological data references to back up the claims in the following posting by Kelly Flanigan, which seems to be the most rational explanation I've found. Unfortunately, it's not a reputable enough source. —Preceding unsigned comment added by Jaxelrod (talkcontribs) 15:01, 17 October 2008 (UTC)[reply]

+1 CannibalSmith (talk) 09:22, 28 December 2009 (UTC)[reply]

British vs. American spelling

Since inches are now only an American unit* the spellings on this page should be in American English. Similarly, a page on the British monarch or Australian government would use British spellings, and a neutral article (e.g., one on molecular biology) could use either or both spellings. SteveSims 04:39, 5 January 2007 (UTC)[reply]

*Excluding a few non-English speaking countries.

I assure you, the inch is still widely used in the UK. A page on the BRITISH monarch or AUSTRALIAN government is a different matter, because it's actually about a particular country; dots per inch is universal. -OOPSIE- 14:07, 26 August 2007 (UTC)[reply]
OOPSIE, if the "inch" has no strong tie to any particular English-speaking country, then the spelling used should be the one initially used. The first version of the article used AE spelling ("Color images need...."). This should be corrected. JamesMLane t c 01:33, 14 April 2013 (UTC)[reply]
Indeed, it got flipped in 2006 in this uncommented edit. Let's put it back. Dicklyon (talk) 02:03, 14 April 2013 (UTC)[reply]

Merges

I hadn't noticed that a separate article existed for DPI until Bobblewik helpfully pointed it out. I have arrogantly decided to simply redirect DPI to this article, wiping out the previous contents of that article. Here is why:

  • In a general sense, there was no information in that article that is not already in this one.
  • Several factoids from that article are either vague or misleading, and are more adequately explained in this article or in related articles Pixels per inch and samples per inch. Specifically:
    • DPI always refers to a physical representation of pixels per length unit. - Not true. DPI most correctly refers to printing resolution - dabs of ink on paper. The term is often used to specify what would more accurately be called pixels per inch; this distinction is explained in the current article.
    • Only when outputting this image to a physical medium with a certain size (say printing on to paper 20cm by 15cm) does the DPI get defined. - This is confusing, subtly wrong, and partially contradictory to the previously-mentioned sentence that DPI always refers to pixels.
    • The resulting DPI depend both on the resolution of the image... -- Not true. Even if DPI is used broadly, encompassing printer and monitor output, it still refers to a physical characteristic of an output device. The resolution of a particular image displayed on that output device has no bearing on that device's DPI capability. And DPI doesn't really make any sense in describing an image with such-and-such number of pixels; the "DPI" of an image only makes sense when given some number of inches. You can print a 5x5-pixel image on a 1200dpi printer; the output is 1200dpi. Printing a 1000x1000-pixel image does not change that. (A short numeric sketch of this point follows the list.)
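
A rough numeric sketch of the point above (illustrative numbers only; no particular printer is assumed):

    # The grid of addressable device dots depends only on the printer's DPI and
    # the printed area, not on how many pixels the source image has.
    printer_dpi = 1200        # dots per inch the device can place (assumed)
    print_width_in = 1.0      # printed area in inches (assumed)
    print_height_in = 1.0

    device_dot_grid = (int(printer_dpi * print_width_in),
                       int(printer_dpi * print_height_in))

    for image_pixels in [(5, 5), (1000, 1000)]:
        # Either image is rendered onto the same 1200 x 1200 grid of dot
        # positions; only the mapping of pixels onto that grid differs.
        print(image_pixels, "->", device_dot_grid)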

Of course, the history is preserved, if anyone wants to extract any useful nuggets of information from it. -- Wapcaplet 20:57, 12 Aug 2004 (UTC)

Removal

From an editor: Draw a 1-inch black line on a sheet of paper and scan it. If the resulting image shows a black line with a width of, say, 300 pixels, then does the scanner not capture at 300 PPI? 128.83.144.239

I moved the above anon comment from the article to here. It was posted by way of explaining the sentence "A digital image captured by a scanner or digital camera has no inherent "DPI" resolution until it comes time to print the image..." I'm not sure how it helps to explain this point, however; indeed, it seems to stem from the very misunderstanding of DPI that is being explained (that DPI and PPI are not the same thing, and SPI is yet another thing entirely). -- Wapcaplet 23:42, 23 Sep 2004 (UTC)

DPI versus PPI

DPI is mostly used to tell what resolution an image should be printed at. Of course, this should really be "pixels per inch"! But since it has become a standard, I think it should be explained here.--Kasper Hviid 18:33, 9 Nov 2004 (UTC)

  • I am not sure what you mean; the details of what it means to print an image at a certain DPI are, I think, fairly well-explained in the article at present. DPI is all about printing; there is a separate article on the related but different term pixels per inch. In printing an image, three things influence the output quality: DPI (the physical capability of the printer), the number of pixels in the image, and the space in which it is to be printed. As far as I know, there is no term to describe "the number of pixels printed in a one-inch space on the paper," though pixels per inch is probably the most appropriate. The resolution of an image sent to the printer (that is, the number of pixels) is unrelated to DPI of the printer. Maybe this should be explained better in the article? -- Wapcaplet 00:12, 10 Nov 2004 (UTC)
  • I have always thought of DPI as "How many pixels should be printed per inch?" This is a wrong, but common, use of the word. As you said, there is no term to refer to "the number of pixels printed in a one-inch space on the paper", but this is probably why the word dpi has been used instead. For instance, at www.lexmark.com they say that "resolution is measured in dpi (dots per inch) which is the number of pixels a device can fit into an inch of space." And at www.olympus-europa.com, they say that a 640 x 480 pixels image at 150 dpi will end up as 10.84 x 8.13 cm in the print (the arithmetic is sketched just below this list). Since this has become a commonly used standard, it deserves to be accepted in the article as an official use of the word dpi, along with a note that this really is a wrong use of the word. The "pixels per inch" article doesn't say anything about pixels per inch in print, but only about the screen's resolution, something I have never understood the point of. Kasper Hviid 09:54, 10 Nov 2004 (UTC)
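
For what it's worth, the arithmetic behind the quoted Olympus figure works out if their "dpi" is read as pixels per inch; a quick check using only the numbers given above and 2.54 cm per inch:

    # 640 x 480 pixels printed at 150 pixels per inch ("dpi" in the quote)
    width_px, height_px = 640, 480
    ppi = 150
    width_cm = width_px / ppi * 2.54    # -> about 10.84 cm
    height_cm = height_px / ppi * 2.54  # -> about 8.13 cm
    print(round(width_cm, 2), round(height_cm, 2))  # 10.84 8.13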

I wasn't able to find the references you gave on lexmark.com or olympus-europa.com, but it doesn't surprise me that DPI would be used in this broad way in documentation intended for the general consumer. I've seen scanner software that uses DPI to mean samples per inch. "Dots" is a fairly general term for most people; pixels, color samples, and ink spots could all be "dots" to most people. I don't think there's any call for distinguishing an "official" use of the word. It's official if people use it that way, and (I suppose) if it's defined that way in the dictionary, which it is. It's really only the more technical among us who care to differentiate DPI from PPI and SPI.

As for the purpose of describing screen resolution, I suppose it's probably useful in calibrating a computer display to a printing device. If a print shop needs to have things displayed on their computer monitor at the same size they will be printed, monitor PPI is useful. I thought about adding to the pixels per inch article to include the idea of pixel density on paper, but while it makes sense to me, I know of no other instances of PPI being used in that way. If you find such a usage, let me know! -- Wapcaplet 22:59, 10 Nov 2004 (UTC)

Since pixels are actually dots (PDF), it turns out that PPI and DPI are actually the same thing. --88.153.32.35 12:50, 24 June 2006 (UTC)[reply]

---

Yes, since dpi and ppi are obviously used interchangeably, this interchangeability should be explained here... so that novices looking here for definitions can understand what they read in the real world. It seems extremely arrogant to say "Wrong" and "misuse" when obviously pixels per inch has always been called dpi. And still is.

Do some few say ppi? Yes. Do the vast majority say dpi? Absolutely yes.

1. All scanner ratings are specified as dpi, obviously meaning pixels per inch. They don't say "samples per inch", they all say dpi, which we all know means pixels per inch. Scanners create pixels, not ink dots. Who are you to call every scanner manufacturer wrong?

2. All continuous tone printers (dye subs, Fuji Frontier class, etc) print pixels, and call their ratings dpi too (colored dots also called pixels). Who are you to call all these manufacturers wrong?

3. The most current JPG image file format specification claims to store image resolution in "dots per inch". The most current TIF file format specification claims to store image resolution in "dots per inch". They are referring to pixels... there are no ink dots in image files. Who are you to call these authors of the most common file format specifications wrong?

http://www.w3.org/Graphics/JPEG/jfif3.pdf (page 5)

http://partners.adobe.com/public/developer/en/tiff/TIFF6.pdf (page 38)

4. Google searches on 7/14/06 for

"72 dpi" 17,200,000 links

"72 ppi" 124,000 links

(138 times greater use of dpi... a couple of orders of magnitude more usage)

You may be aware that 72 dpi topics are never about printer ink dots.

When calling everyone else wrong, a wise man would reevaluate his own position. The Wikipedia author who claims misuse of dpi is obviously dead wrong. It is probably only wishful thinking that the world OUGHT to be as he wishes it to be, and this Wiki definition is definitely WRONG.

The two terms are obviously interchangeable. Wake up, look around, where have you been? Pixels per inch has ALWAYS been dpi. Yes, dpi does also have another use. So what? Almost every English word has multiple meanings and uses. However, which term is best is not important here - this is certainly not the place to decree it (as attempted). Both terms are obviously used with the same meaning (pixels per inch) and that matter is long settled. Say it yourself whichever way you prefer to say it, but we obviously must understand it both ways. Because we see it everywhere both ways. So this both-ways phenomenon needs to be explained in the definitions here. Without bias. About how the real world really is, not about how some author might dream it ought to be.

WHAT IS IMPORTANT is that beginners need to know the two terms are used interchangeably everywhere, with both terms meaning pixels per inch, simply so they can understand most of what they will find to read about the subject of imaging. There is no reason to confuse them even more by telling them everything they read is wrong. Wiki is wrong. The Wiki definition can only totally confuse them.

Beginners do need to know the two concept differences (your two definitions), but once the concepts are known, then the terms are almost arbitrary. We could call them "thingies per inch". The context determines what it means (like all English words), and if the context is about images, dpi can only mean pixels per inch (ppi can mean that too). If the context is about printer ratings, then dpi can only mean ink dots per inch. 71.240.166.27 03:20, 14 July 2006 (UTC)[reply]


DPI is the CORRECT term for the target resolution at which an image is to be printed or displayed. It is a value stored in a digital file which indicates the current target printing resolution of that file. To use it otherwise is to sow confusion out of some misguided ideology. —Preceding unsigned comment added by 24.128.156.64 (talk) 18:19, 10 October 2007 (UTC)[reply]

Printer advertisements

I see several printers advertised with "4800 x 1200 color dpi" and such. Is this some kind of industry conspiracy to redefine the term "dpi"? Or am I misunderstanding something? Example: [1] -- Anon

  • Nope, sounds like the most appropriate and correct possible usage of DPI to me, assuming those figures are what the printer is actually capable of. Now, if a scanner is advertised with some DPI, then Samples per inch is what is actually meant. Many times a scanner is advertised with its interpolated sampling resolution, since that number is often much higher than the actual optical resolution; a good consumer scanner may only be able to capture 1600 samples per inch, but the samples are often scaled (either in the hardware or in the scanning software) to much higher resolution, such as 19,200, and of course "19,200 DPI" looks better in an advertisement than "1600 DPI." Whether printer manufacturers use a similar strategy, I don't know; I do know that the reason the two DPI figures are often different is that one of them is a horizontal resolution, determined by how finely the printing heads can be controlled, while the other is a vertical resolution, determined by how finely the paper feed roller can be controlled. Finally, if you see a digital camera advertised with some DPI, buy a different brand, since DPI has no meaning in that context unless they are referring to the quality that might be achieved in printing a digital photo at a certain size (and then, pixels per inch is probably more appropriate). -- Wapcaplet 18:11, 28 Nov 2004 (UTC)
Yes, it's a mixture of morons being stupid and marketing people trying to pull the wool over people's eyes. They're conflating dots per inch with dots themselves, image size (4.8k x 1.2k pixels across) with spatial image resolution (note that the "DPI" in the prior figure does NOT contain actual inches or any other real-world spatial measurement, a dead giveaway of moronity.) I'll put this back in the article. 76.126.134.152 (talk) 11:47, 2 June 2008 (UTC)[reply]

^ I agree with the last poster. What were you guys even talking about? Grab a ruler, and a microscope, print something on one of those printers at "4800x1200 dpi", and count how many ink dots are placed into each horizontal and vertical inch of paper. "DPI" has referred to what spatial resolution a printer is capable of (easily provable, at least on older devices, by making a complex 900x900 pixel image, printing at maximum quality, and checking how big it came out and whether any detail was missing) for as long as I can remember being involved in the computer scene in any way, which goes back a good 24 years (to early 1990). For example, some old 9-pin Kyocera dot matrix was apparently capable of 240x160 dpi... Chunky, but better than the 216x144 of some cheap rivals. What exactly are you trying to disprove, here? 193.63.174.211 (talk) 11:00, 19 February 2014 (UTC)[reply]

Color

I am concerned about the following statement:

This is due to the limited range of colors typically available on a printer: most color printers use only four colors of ink, while a video monitor can often produce several million colors. Each dot on a printer can be one of only four colors, while each pixel on a video monitor can be one of several million colors; printers must produce additional colors through a halftone or dithering process.

Computer displays work in a similar fashion to printers: they use a combination of different amounts of the primary colors (in this case, the additive primaries: red, green, and blue) to produce a wide range of visible colors. Most printers use the (subtractive) primaries and black in different combinations and patterns.—Kbolino 02:18, 10 February 2006 (UTC)[reply]

No, they don't. Video displays can actually produce darker or lighter versions of the same color in each of their subpixels by altering the amount of light produced. Printers, on the other hand, cannot blend their ink with the color of the paper (i.e., white) to produce darker or lighter shades/tones due to the ink's nontransparency. Instead, they must place smaller blobs of a given color of ink in order to (on white paper) make a less saturated tone, or bigger blobs to make a more saturated shade. Admittedly, some printers actually CAN change the color of their ink by mixing all 4/5/6/8 colors together into one big blob, like the solid ink printers I've drooled over for years. 76.126.134.152 (talk) 11:47, 2 June 2008 (UTC)[reply]

What is DPI dependent on?

"The DPI measurement of a printer is dependent upon several factors, including the method by which ink is applied, the quality of the printer components, and the quality of the ink and paper used."

This is not true, or at least is confusing.

The DPI in the printing direction is dependent on the head firing frequency and the linear print speed. The DPI in the advance direction (perpendicular to the printing direction) is dependent on the spacing of actuators (e.g. nozzles for inkjet) on a head, and the angle of the heads. Each of these can be multiplied by use of interleaving/"weaving" using multiple passes and/or multiple heads.

What the sentence above may have been trying to get at is that different print modes can use different firing frequencies, linear speeds, interleaving factors, etc., and the effect of ink and media settings in print drivers is often to change the print mode (possibly in addition to other software settings that don't affect DPI). Also, different print head technologies may improve at different rates in terms of firing frequency, actuator spacing, etc.
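
A rough sketch of those two relationships, with made-up but plausible numbers (the firing frequency, carriage speed, nozzle pitch and pass count below are illustrative assumptions, not figures for any real head):

    # DPI along the printing (scan) direction: how often the head can fire
    # per inch of carriage travel.
    firing_frequency_hz = 36000        # assumed head firing frequency
    carriage_speed_in_per_s = 30       # assumed linear print speed
    dpi_scan = firing_frequency_hz / carriage_speed_in_per_s    # 1200.0

    # DPI in the advance direction: set by nozzle spacing, multiplied up by
    # interleaving ("weaving") over several passes.
    nozzle_pitch_in = 1 / 300          # assumed nozzle spacing on the head
    interleave_passes = 4
    dpi_advance = interleave_passes / nozzle_pitch_in           # 1200.0

    print(dpi_scan, dpi_advance)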

On the subject of advertisements, I strongly suspect that some of the dpi figures quoted in printer adverts are inflated. You can inflate a dpi figure by:

  • counting different colors as more than one dot (this may be what "4800 x 1200 color dpi" means -- I expect it is really 1200 x 1200 in 4 colors; see the sketch after this list)
  • counting each dot printed by a variable-dot head as more than one dot
  • saying "equivalent dpi" and making up a random number.

This kind of creative arithmetic is all the result of trying to munge various resolution and quality factors into a single number for marketing purposes. It's similar to how clock speed used to be used to indicate how "fast" a processor was. At its worst, it can lead to distorted technical decisions that maximize DPI with no improvement in, or even at the expense of, quality (just as the Pentium 4 design was distorted to maximize clock speed).

It's unlikely that someone could see a visible improvement in resolution above about 1000 dpi with the unaided eye at a normal reading distance. The extra quality that you can get from a higher dpi than that is not due to an increase in resolution; it's due to a reduction in "graininess" (and possibly better hiding of head defects) from using smaller drop volumes, which requires you to use more dots in a given space to achieve the same ink density.
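
A back-of-envelope check of that figure, assuming roughly 1 arcminute of visual acuity and a 12-inch reading distance (both are rough textbook numbers, so treat the result as order-of-magnitude only):

    import math

    acuity_arcmin = 1.0     # assumed resolving power of the unaided eye
    distance_in = 12.0      # assumed reading distance

    smallest_feature_in = distance_in * math.tan(math.radians(acuity_arcmin / 60))
    features_per_inch = 1 / smallest_feature_in        # about 286
    dpi_at_visual_limit = 2 * features_per_inch        # about 573 for alternating dots

    print(round(features_per_inch), round(dpi_at_visual_limit))

That lands in the same few-hundred-dpi region; allowing a few dots per halftone cell for tone reproduction brings the useful limit to roughly the ~1000 dpi mentioned above.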

To meaningfully compare printers, you need at the very least to know the volume of ink in a drop (for inkjet heads as of 2006, this can vary from about 1 to 80 picolitres), including what "subdrop" volumes are possible in the case of variable-dot heads, as well as the real dpi figure in each direction. The overall quality will also depend on halftoning algorithms, the gamut of the inks used, color management, positioning accuracy of the printer mechanism and any encoders, head defects and how well the print mode hides them, etc. The intended application is also significant: to give an extreme example, there's no point in achieving "photographic" resolution in a printer that will be used to print billboards -- although color gamut would still be very important for the latter.

DavidHopwood 00:16, 5 June 2006 (UTC) (working for, but not speaking for, a printer manufacturer)[reply]


Metric

"There are some ongoing efforts to abandon the dpi in favor of the dot size given in micrometres (µm). This is however hindered by leading companies located in the USA, one of the few remaining countries to not use the metric system exclusively."

I wouldn't blame US companies for this, even though I'm an enthusiastic S.I. advocate. Software interfaces to RIP packages and driver APIs require dpi, and there's no compelling reason to change them. Despite this, it is possible for a printer controller implementation to be internally almost S.I.-only. DavidHopwood 01:20, 5 June 2006 (UTC)[reply]

DPI or dpi

This is pretty trivial, but should the case (DPI or dpi) be standardized in this article? The last section uses dpi while the others use DPI. --MatthewBChambers 09:13, 2 October 2007 (UTC)[reply]

It should use DPI. It's an abbreviation. I've fixed it. --jacobolus (t) 10:55, 2 October 2007 (UTC)[reply]

External links

Both of the external links are low quality links to pages by writers with an ideological axe to grind and a limited understanding of the topic. —Preceding unsigned comment added by 24.128.156.64 (talkcontribs)

So be WP:BOLD, add some better sources! --jacobolus (t) 04:29, 13 October 2007 (UTC)[reply]
I disagree, the "Myth" link was the first place where I have understood why and how to reset dpi without losing quality. RomaC (talk) 14:42, 31 March 2008 (UTC)[reply]

DPI for digital images “Meaningless”?

This sentence "Therefore it is meaningless to say that a digitally stored image has a resolution of 72 DPI." is just simply, clearly unequivocally false. It is also misleading in a way that exacerbates existing confusion among users. —Preceding unsigned comment added by 24.128.156.64 (talk) 16:44, 12 October 2007 (UTC)[reply]

Hmm, there seems to be some disagreement here about the origin of the term DPI. The position of the article is that it has its origins in printers, while 24.128.156.64 says that it had its origins in digital file formats. Does anyone have a reference to support either position? Personally, I think it's the printers. Rocketmagnet 17:10, 12 October 2007 (UTC)[reply]

I don't think it is only an issue of origin. It is, most importantly, an issue of use. Users of graphics software get confused on this issue as it is. To deny the fact that all professional graphics software and most amateur graphics software allows the editing of a value called DPI which gets stored with the file adds to that confusion. Here is an example of a page using DPI correctly: http://msdn2.microsoft.com/en-us/library/ms838191.aspx —Preceding unsigned comment added by 24.128.156.64 (talk) 17:23, 12 October 2007 (UTC)[reply]

I don't think anyone was denying that DPI can be stored in a digital image. And you're right, people do get confused about it all the time. People often ring me up wanting a picture, and they say "I want a picture, and I need it at 300 dpi" And I say, "Well, it depends how big you're going to print it." And they say "I don't know, just give it to me at 300 dpi". Rocketmagnet 17:59, 12 October 2007 (UTC)[reply]
Which is precisely the problem. People don't seem to realise that a digital image doesn't fundamentally have DPI, in the same way that it fundamentally has a resolution. The DPI is a tag that's added on by some software, that can be used or ignored as the user sees fit. Rocketmagnet 17:59, 12 October 2007 (UTC)[reply]
I'm having trouble reconciling "Therefore it is meaningless to say that a digitally stored image has a resolution of 72 DPI." with "I don't think anyone was denying that DPI can be stored in a digital image."
On the other hand it is now clear that we're both motivated by trying to correct the same confusion among our clients (and other people in similar positions) and disagreeing about how to do that. —Preceding unsigned comment added by 24.128.156.64 (talk) 18:43, 12 October 2007 (UTC)[reply]
I'm sure we agree that a digital image fundamentally has a resolution, since it is literally made out of pixels. It is impossible to change the resolution without fundamentally changing the content of the image. Now, a digital image might also have a filename. But the filename is not fundamental to the image. I can rename the file to be whatever I want, and it does nothing whatsoever to the content of the image. It could even have no filename if I haven't saved it yet. So I would say that digital images don't fundamentally have a filename. I would say the same thing of the DPI value in the image. The image might have a DPI value stored in it, but digital images in general do not fundamentally have DPI. I could change the DPI value to be whatever I want, and it will do nothing to the content of the image.
We can't all agree: digital images fundamentally have NO resolution; digital images are resolution independent. Think of it this way: you have an image that is 1920 x 1080 pixels on an 8" HD tablet; you could say that it looked like a high-resolution image, but you couldn't say the same on an 84" TV, yet it's the same image. Or think of it this way: can something that has no size have a resolution? — Preceding unsigned comment added by 80.2.149.91 (talk) 16:46, 13 January 2015 (UTC)[reply]
Another example: I could e-mail a photo of a cat to a local print shop. Then call them up and say "I know the image says it's 300 dpi, but I want it printed at a width of 2 feet". That makes sense, because the DPI is not fundamental to the image. The guy at the print shop probably wouldn't even bother editing the dpi value in the image, he would set the printer to print it to the size I want. Compare it with this example: I e-mail a photo of a cat to a local print shop, then call them up and say "I know it's a photo of a cat, but please print me a photo of a house". That would be insane.
Yet another example: I add a tag to my digital image which says "50 gsm" (gsm = grams per square meter). The idea is that, when the image is printed I want it printed on that weight of paper. But is it meaningful to say that this digital image has a weight of 50 grams per square meter? No. That's nonsense. Digital images do not have weight (even if there is a GSM tag in the image). In the same way, they do not really have DPI (even if there is a DPI tag in the image). Rocketmagnet 20:23, 12 October 2007 (UTC)[reply]
Er... any Wikipedia entry which said "Therefore it is meaningless to say that computer files have names." would get corrected.
Please do not misquote me. I did not say that computer files don't have names. I said that digital images do not fundamentally have filenames. A filename is not a fundamental part of what it means to be a digital image. For example, when I play Quake, I am seeing tens of digital images per second. Not a single one of those has a filename. Nor do any of them have a DPI.
The DPI in the file DOES affect the printed output unless it is changed or overridden. It is like the icc color profile in that regard. Would you say that it is meaningless to say that graphic image files in many formats can have color profiles? —Preceding unsigned comment added by 24.128.156.64 (talk) 21:58, 12 October 2007 (UTC)[reply]
Oh god. I think we're talking at cross purposes. Do you understand the difference between "some humans wear glasses" and "glasses are not a fundamental part of what it means to be human"? Glasses can be added to humans, and it may benefit them, but there are many humans without them, and they are still human. Likewise, a DPI value can be added to an image, and it may help people when printing the image, but many images have no DPI value added to them, and it makes them no less an image.
I think you are confusing "a digital image" with "a file on a disk which contains a digital image", which are two different things. A digital image is a much broader concept. Rocketmagnet 14:36, 13 October 2007 (UTC)[reply]
The DPI value is a part of a digital image in the same way that the color profile is; in the same way that vector data or text can be added to a bitmap in a photoshop image and become part of the digital image. This is the same way in which a file name is part of a computer file. I may have been a bit confusing in that a file name is not a part of a digital image in quite that way. The example of a file name being part of a file was a metaphor and may not have made my point clearer.
A digital image is composed of more than just a bitmap; it can include color profiles, DPI, vector data, text, positional offsets, filters, information specifying display on a variety of devices, compression specifications, and other elements.
To use your metaphor, I would object to an article that said "It is meaningless to say that a person has glasses." A person can have glasses. It would be very confusing to people to say that a person cannot have glasses if it wasn't a subject everyone is so familiar with. —Preceding unsigned comment added by 24.128.156.64 (talk) 16:20, 13 October 2007 (UTC)[reply]
Perhaps we are having a problem over the meaning of "has". You could say that a human has glasses. But this would be a different meaning of "has" to its use in: "a human has DNA". In the latter example, having DNA is fundamental to what it means to be a human (all humans have DNA). In the former example, we are talking about one example of a human, not all humans. The same applies to DPI. An image may or may not have a DPI value tagged onto it.
Look, imagine if I could tag an image with "5 ounces", then would you say that the image has a weight? Really, does it weigh 5 ounces? No, digital images don't have weight. It would still be correct to say that "it is meaningless to say that a digital image has a weight".
Tagging an image with "100 dpi" doesn't mean that it really has one hundred dots per inch. A digital image cannot actually have one hundred dots per inch, because it doesn't have a size in inches. It only has inches if you actually print it out. Surely you can see that there is a difference here? Surely? Rocketmagnet 17:30, 13 October 2007 (UTC)[reply]
It's apparent that the word "meaningless" is insufficiently clear or direct; the result we are currently witnessing is an unproductive semantic squabble. You're both essentially correct, so instead of arguing, how about we try to come up with an alternative phrasing which all can be satisfied with? --jacobolus (t) 18:00, 13 October 2007 (UTC)[reply]
As an unrelated aside, 24.128.156.64, you might try signing your comments, like this: ~~~~. :) --jacobolus (t) 18:07, 13 October 2007 (UTC)[reply]
Thanks jacobolus, wise words. I'd considered re-writing the text in the article, but I thought it would be worth coming to an understanding, in case it caused an edit war. But, yes, this discussion doesn't seem to be getting anywhere. However, it does point strongly to the misunderstanding people seem to have with the concept of DPI relating to digital images. Rocketmagnet 18:26, 13 October 2007 (UTC)[reply]
If you think re-writing will cause an edit war, then put the proposed rewrite on the talk page first. :) --jacobolus (t) 20:14, 13 October 2007 (UTC)[reply]
Here's some proposed text. Perhaps a bit clunky, but I think its correct:
DPI refers to the physical size of an image when it is reproduced as a real physical entity, for example printed onto paper, or displayed on a monitor. A digitally stored image has no inherent physical dimensions, measured in inches or centimeters. Some digital file formats record a DPI value, which is to be used when printing the image. This number lets the printer know the intended size of the image, or in the case of scanned images, the size of the original scanned object. For example, a bitmap image may measure 1000×1000 pixels, a resolution of one megapixel. If it is labeled as 250 DPI, that is an instruction to the printer to print it at a size of 4×4 inches. Changing the DPI to 100 in an image editing program would tell the printer print it at a size of 10×10 inches. However, changing the DPI value would not change the size of the image in pixels which would still be 1000×1000. An image may also be resampled to change the number of pixels and therefore the size or resolution of the image, but this is quite different from simply setting a new DPI for the file.

24.128.156.64 22:45, 13 October 2007 (UTC)[reply]

I think that's pretty good. I've made a couple of small changes though. Rocketmagnet 22:48, 13 October 2007 (UTC)[reply]
Great. I took your version, changed "would tell the printer print it" to "would tell the printer to print it", put back in the page references that where in the original on the page, and put it into the article. 24.128.156.64 23:12, 13 October 2007 (UTC)[reply]
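
To illustrate the point the agreed text makes (the DPI value is just a stored tag; the pixel data is untouched), here is a minimal sketch using the Pillow library; the file name and the numbers in the comments are assumptions, and the "dpi" entry may simply be absent if the file was saved without one:

    from PIL import Image

    img = Image.open("cat.jpg")          # hypothetical 1000 x 1000 pixel photo
    print(img.size)                      # e.g. (1000, 1000) -- the actual pixels
    print(img.info.get("dpi"))           # e.g. (250, 250) -- metadata, may be None

    # Save a copy carrying a different DPI tag: the intended print size changes
    # (from 4 x 4 inches to 10 x 10 inches at these numbers), but the image is
    # still exactly 1000 x 1000 pixels.
    img.save("cat_100dpi.png", dpi=(100, 100))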

Just my 2 cents on the words on top: the number of pixels is a hard-coded value in any pixel format. The dpi value is sort of a scratch parameter in common image formats. People doing layouts and then prints first have to decide what width and height (measured in cm or inches) they want to select for the given image. Only after that decision can you do the math and divide the number of pixels by the number of inches... Some folks prefer talking in the columns typical of newspapers, e.g. 3 columns width - and that's a fixed length for a specific type of paper. --Alexander.stohr (talk) 16:18, 5 August 2010 (UTC)[reply]

Specification jargon

Can editors who understand these things (I don't) add explanations to the article that make the kind of jargon typically found in printer specs understandable to laypeople?

Examples (taken from HP and Brother):

  • "Up to 1200 rendered dpi black"
    What is the meaning of "up to" here?
    "Rendered" as opposed to unrendered and thus invisible?
  • "Up to 4800 x 1200 optimised dpi colour"
    "What does X times Y mean here? 4800 x 1200 = 5760000, so is this 5,760,000 dpi?
    "Optimised"?
  • "1200 input dpi"
    What does input have to do with the specs of the printing system?
  • "Optical Resolution Up to 600 x 2,400 dpi"
    This refers to scanning; again the mysterious multiplication.

 --Lambiam 08:43, 24 April 2009 (UTC)[reply]

I think that "printer specs understandable to laypeople" is outside the realm of possibility, and should not be attempted; certainly not without a reliable source; otherwise we'll have a rathole. Dicklyon (talk) 15:32, 24 April 2009 (UTC)[reply]

--ADDED QUERY (2009-04-30): At the start of this article, whose length I've yet to scrutinize, comes an example that sadly I find far from enlightening--to wit:

| An example of misuse would be if an LCD monitor manufacturer claimed that | a 320x240 pixel 3" monitor (2.4"x1.8") actually had a resolution of 400 DPI, | (three times the pixels per inch).

 [NB:  delete this last comma--NOT wanted before parenthesis, delimited by parens.]

PLEASE "show your work": i.e., what is being multiplied/divided-into what? E.g., I multiply 320x240 = 76_800, and 2.4x1.8 = 4.32 and think that density should result from 76_800 / 4.32 (= 17_777.77...), but that's not close to 400! ?? Dividing 76_800 by "400" gets me a 192 which I can't figure how to map ... . .:. It would be helpful, at this introductory point, to show the calculation! Thanks. (-; 216.194.229.45 (talk) 13:18, 30 April 2009 (UTC)[reply]

pc programs

Quote from the article: "Software programs render images to the virtual screen and then the operating system renders the virtual screen onto the physical screen; with a logical PPI of 96 PPI, older programs can still run properly regardless of the PPI provided by the physical screen." Usability and readability are heavily influenced by the technology (laser beamer, flat screen with/without contrast enhancements, cathode ray tube), by the viewer's distance, the viewer's individual vision capabilities and by some "crap" the operating system does, e.g. anti-aliasing fonts. Even the environment (lit, dark, foggy, reflections, ...) might play a big role every now and then. 99.5% of all computer programs will run on any screen with any PPI value - they just don't care about it. There are pretty few programs that need to fulfill any exact measures; rather, sometimes the application offers a setup, e.g. for the font used in editors and terminals. Furthermore, modern operating systems offer lots of tuning for e.g. border widths, menu fonts, window decoration and so on. If someone wants to use an 8x8 font he can, but he can use a 14x16 font as well - the user adapts to what he wants to see. If you connect a 12" tube or a 40" flat screen, the operating system will rather respond with a desktop having more or less width than use the PPI value to adjust to the changed conditions. BTW, for many legacy applications (e.g. a window of a C64 emulator) there are even zoom modes, and nearly all modern consumer flat screens even have built-in zoom, even if folks like to use such devices in a 1:1 pixel match mode - forget about PPI and fix that statement in the article. --Alexander.stohr (talk) 16:27, 5 August 2010 (UTC)[reply]
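
For what the quoted "logical PPI of 96" amounts to in practice, a minimal sketch (the 141 PPI screen below is an assumed example, not a reference value):

    # An element drawn at 96 logical pixels is intended to be about 1 logical inch.
    logical_ppi = 96
    element_logical_px = 96

    physical_ppi = 141                              # assumed high-density screen
    scale = physical_ppi / logical_ppi              # about 1.47
    element_physical_px = element_logical_px * scale
    size_scaled_in = element_physical_px / physical_ppi     # back to about 1 inch
    size_unscaled_in = element_logical_px / physical_ppi    # about 0.68 inch

    print(round(scale, 2), round(size_scaled_in, 2), round(size_unscaled_in, 2))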

I can't figure out what exactly you are trying to say. Are you suggesting that the article text should be changed? –jacobolus (t) 21:00, 5 August 2010 (UTC)[reply]

"dot pitch" or "dot trio pitch"

Are the common monitor "pitches" given in terms of "dot pitch" or "dot trio pitch"? Mfwitten (talk) 22:22, 1 October 2011 (UTC)[reply]

Trying to summarise

Just my luck to stumble, when looking for some clarification, on this article - one of those where the discussion is much longer than the article itself :-) There are lots of sentences in the article (and in the discussion) which I don't understand; others which seem confusing. But then again, I am far from a specialist in this field. Yet, maybe the following is helpful, if only to bring out the points of disagreement.

1. I think that in this article (and in the discussion) two objectives are intertwined, leading to confusion. I.m.o. an encyclopedia should a) describe the (various) common uses of a term; b) offer explanations and background knowledge. Therefore, I wholeheartedly agree that terms like 'wrong' or 'misuse of the word' or even 'misleading' should be avoided. However, an encyclopedia is not a dictionary; it should go one step further and EXPLAIN certain things. In fact, the various 'definitions' of dpi and ppi cannot even be comprehended without some background knowledge, and so would by themselves be of no use to any reader. Any explanation requires a strict definition of the terms used. Quite apart from any claim to 'correctness', when the explanation does not make a clear choice of words it becomes incomprehensible (which, i.m.o., it is right now). Both requirements need not be contradictory; one could very well describe the various ways in which a term is used, and nevertheless, when it comes to an explanation, use the terms in one specific well-defined sense.

2. So, what 'definitions' of DPI and PPI and other terms will the explanation use? In the discussion, it is not at all clear to me when the disagreement is about the use of a word and when it is about facts. Surely, these two kinds of disagreements should be separated as well as possible. In my opinion, following a historical line may be the most helpful to further comprehension.

- As far as I know the term DPI was first used for digital phototypesetting machines, like the Digiset (VideoComp in the USA), introduced in 1966. (I'm not sure about this, it would require verification.) In the '80s, a digital phototypesetter could achieve resolutions up to 3000 dpi (or was it 4000?). DPI, then, describes the number of circular black dots, of varying size up to completely overlapping, per inch of paper. (The density of coloured dots was, at the time, described by different units.) When, for a colour printer, just 1 figure is given for the "DPI", it pertains to the number of black dots per inch, and for a very good reason: to make it comparable to the DPI of a bl/w laser printer, still widely in use.

I would propose to use the term "DPI", in the explanatory part of the article, in this sense only.

When two figures are given, like in "1200 x 4800 color dpi", I agree with Wapcaplet that the figures refer to black dots (used to print text and comparable to the figure for bl/w printers) and 4-colour dots (to print coloured pictures) respectively. This use of 'DPI' seems to me a quite reasonable 'extension' of the term DPI, to adapt it to the colour-print-era. (And frankly, I don't understand the objections raised by 76.126.134.152) The question remains, however, what to make of printer specifications where the second number does not equal 4x the first number (or vice versa, since there seems to be no rule on the order of the two numbers). Does anyone know what is meant by a printer specification like "1200x2400 dpi"? I suspect it means that text is printed at 1200 dpi and colour pictures are printed at 600 dpi (that is: 600 dots of one colour per inch). Finally, I'd like to know what the 'x' between the two figures means; it suggests something like area, like 'horizontal and vertical'. If what I suppose here is true however, it has no meaning whatsoever and could just as well have been a dot or a comma or a semi-colon. This should be pointed out to the reader.

3. The term 'pixel' (and the related 'PPI') originates in an entirely different field: the 'digitisation' of pictures. To make up a digital picture from an image, the image is overlaid with a grid of (theoretically) squares and for each square an average colour and luminosity is determined, either by a scanner or a camera. The resulting 'bitmap', containing values for colour and luminosity of each pixel, is then saved in a picture file. I would propose to use the term 'pixel' in the explanatory part of the article, in this sense only.

Many file formats (but I don't know which ones exactly) give the option of writing a value for PPI in the meta-section of the file, thus making it possible to determine the real size of the scanned original. Evidently, this value has no meaning for a picture of a landscape taken with a digital camera. (To my knowledge, cameras do not save this value when saving in, for instance, jpeg, but most scanning software does fill this value.)

4. The term PPI is, I think, not as clear as DPI. For scanners, the meaning would seem quite clear to me: it describes the size of the grid used to scan the picture. PPI, used in that sense, is a useful measure for the 'resolution' of a scanner. In practice, however, I rarely see PPI in the specs of scanners; manufacturers seem to prefer the more widely known term 'DPI' instead, and it is, I think, anybody's guess what they mean by that. Some may simply use 'DPI' when they mean 'PPI' (thus giving useful information on the quality of the scanner); others may use 'DPI' to simply refer to the measurements of the scanned picture as laid down in the meta-section of the resulting JPG or TIFF file, which has nothing whatsoever to do with the quality of the scan. (In practice, I have met both.)

For cameras, the use of the term 'PPI' seems less common, and for good reason: it is not clear what it refers to. When used, however, PPI does indeed seem to refer to the size of the sensor: given a total number of pixels, the bigger the chip (and thus the lower the PPI value), the better the quality.

I would propose to use the term PPI, in the explanatory section of the article, primarily to denote the resolution of a scanner. (The meaning when used in reference to cameras is somewhat confusing, as a lower figure is associated with better quality, which is quite the reverse of its meaning when used with respect to scanners.)

5. The concept of dpi could conceivably be extended to monitors, as these, too, are output devices and the original CRTs used coloured dots. However, the meaning is not as clear, as a monitor does not produce 'black dots'. The term 'dot pitch' has always been more popular for monitors. (Although I don't know whether this referred to the distance between individual dots, or between the dots of one colour, or between groups of 3 coloured dots.) In fact, most modern LCD monitors do not produce dots at all; instead, they work with squares built up from 3 coloured stripes, usually denoted as 'pixels'. (I'd propose to use the term screen-pixels to avoid confusion.) Most often, the resolution is described as "X*Y pixels", while the physical size is described by the length of a diagonal across the screen.

Thus, for most people the terms DPI and PPI in connection with a monitor don't add much, except confusion. In graphic design, though, you may want to make the size on-screen exactly the same as the size in print. In practice, you need to know the PPI of your screen to achieve this. One can calculate it as described in the article (and as sketched below), or simply take the vertical number of screen-pixels and divide it by the screen height in inches.
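
The calculation mentioned, sketched for an assumed 24-inch 1920 x 1080 monitor (illustrative figures only):

    import math

    width_px, height_px = 1920, 1080    # assumed monitor resolution
    diagonal_in = 24.0                  # assumed diagonal size

    ppi = math.hypot(width_px, height_px) / diagonal_in
    print(round(ppi, 1))                # about 91.8 pixels per inch

(Dividing the vertical pixel count by the screen height in inches gives the same number as long as the pixels are square.)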

6. Some remarks

- Digital printers produce dots in a certain colour. These dots can vary in size, and are so arranged that they can overlap, ultimately, when fully overlapping, producing black. (I'd put an asterisk here, as this is very subtle stuff, concerning the way our eyes perceive colours etc., but I think this description may do in this context.) Screens, on the other hand, from the CRTs of old to modern LCD or plasma, do NOT vary the dot size, but they CAN vary the luminosity of the dots (or stripes or whatever). The dots can be circular, but some printers can produce elliptical or even semi-square dots; the printing software uses this option to produce the best-looking output. I quote from http://www.prepressure.com/printing-dictionary/d "The dot shape is varied to minimize the dot gain at the point where dots join one another. Elliptical dots minimize the sudden dot gain where corners of dots connect; they may connect in their short direction at 40% dot area and in their long direction at 60% dot area. Round dots, often used for newsprint, may not connect until 70% dot area." (Here, I'm out of my depth again: hopefully someone knows more about this. Although, on the other hand, it doesn't seem very relevant for this article.)

- It should be noted (and I sorely miss this in the article) that the term DPI when referring to dots of one colour (notably black) is still highly relevant when used to describe a printer. Text, as opposed to pictures, is often, if not always, delivered to the "printing engine" as a vector format, which is translated to a dot pattern according to the specifications of the printer. In other words: a printer with a higher dpi specification will give you crisper text on your print. The same is of course true for pictures delivered to the 'printing engine' in vector format. (Ensuring that a vector drawing does indeed profit from the maximum dpi of the printer seems to me an art in itself when you're not using Adobe software. Yet, this seems to me beyond the scope of this article)

- Obviously, to print a bitmap (that is: a file describing pixels and resulting from a scanner or camera), the pixel pattern has to be 'translated' to the dot pattern of the printer. A "1:1 print" simply translates each pixel into the appropriate sizes of the dots in one 'colour group' (which may consist of 4, 5, 7 or even 12 different colours - but that's another article). Obviously, the better the printer, the smaller, and crisper, the "1:1" print. When enlarging a picture in print beyond "1:1", one pixel is spread over a number of groups of dots. Thus, the picture gets vaguer, up to the point where the individual pixels become visible. When reducing the size below "1:1", more than one pixel is available to determine the dot sizes for the printer, resulting in a print as good as the printer can achieve. (Theoretically, one would expect there to be certain favourable proportions, e.g. "4 pixels to one colour group", which is much easier to render than "1.7 pixels to one colour group". However, in practice the rendering software is very sophisticated and there seems to be hardly any gain in using such simple proportions.) (Here again, I'm out of my depth; but maybe this is way beyond the scope of this article anyway.)

7. My comments on the article. In view of the above, I have a number of comments on and questions about this article.

- "The DPI value tends to correlate with image resolution, but is related only indirectly." This seems to me unclear, if only because 'correlation' is not a term many people understand. But apart from that: DPI and PPI are either synonymous or (as I propose to use the terms) they have no correlation whatsoever, not even 'indirectly'.

- The article starts by explaining 'monitor resolution', which is the most problematic use of the term dpi. Bad idea, I think.

- "A less misleading term, therefore, is pixels per inch." I don't see what is misleading and I don't see the 'therefore'.

- "the measurement of the distance between the centers of adjacent groups of three dots/rectangles/squares on the CRT screen." 1. To my knowledge, there are no CRT-screens with squares or rectangles. 2. To my knowledge, there are no LCD screens with 'groups of squares' 3. Is this true? In other words, could the text be amended by "the measurement of the distance between the centers of adjacent groups of three dots on an CRT-screen, or between the centers of two squares (each consisting of 3 coloured rectangles) on a LCD screen."?

- "DPI is used to describe the resolution number of dots per inch in a digital print and the printing resolution of a hard copy print dot gain; the increase in the size of the halftone dots during printing. This is caused by the spreading of ink on the surface of the media." This sentence forms the heart of the article in the sense that it defines the term that is the title of the article. Yet it contains so many unclarities, that I'll have to take them one-by-one: "DPI is used to describe the resolution number of dots per inch" Should this not be either "to describe the resolution" or "to describe the number of dots per inch". "and the printing resolution of a hard copy print dot gain" I fail to see why this should be added; I have no idea what the difference is between a 'digital print', as described in the first part of the sentence, and a "hard copy print" as described in this second part. I have no idea what a 'dot gain' is. "the increase in the size of the halftone dots during printing" I don't know how this is connected to the statement before the semi-colon; I don't know what to make of 'halftone dots',nor what dpi has to do with "spreading of ink on the surface of the media".

In summary: this definition is totally unclear to me.

- "Up to a point, printers with higher DPI produce clearer and more detailed output." Up to which point?

- "A printer does not necessarily have a single DPI measurement; it is dependent on print mode, which is usually influenced by driver settings." This seems clumsily put; printers always have a maximum dpi (and this is not a measurement but a value). What dpi is effectively used depends on user's choices. (notably choosing 'economy mode')

- "An inkjet printer sprays ink through tiny nozzles, and is typically capable of 300-600 DPI.[1] A laser printer applies toner through a controlled electrostatic charge, and may be in the range of 600 to 1,800 DPI." As the definition of dpi is unclear, so are these statements. Yet, quoting higher values for laserprinters then for ink-jet printers seems to me doubtful in whatever sense the words are taken.

- "The DPI measurement of a printer often needs to be considerably higher than the pixels per inch (PPI) measurement of a video display in order to produce similar-quality output. " Is this so? Why? Why is this Often? How often?

- "This is due to the limited range of colours for each dot typically available on a printer. " Yes, very limited indeed: just one.

- "At each dot position, the simplest type of colour printer can print no dot, or a dot consisting of a fixed volume of ink in each of four colour channels" This is, ithink, not true. Even allowing for the fact that the word 'dot' is used here (very confusingly) to denote a 'dot-group' consisting of dots in all colours the printer is capable of, this is still unnecessarily opaque; I wouldn't know what a 'colour channel' is, for instance. Nor can I see how 'the simplest type of colour printer' works differently, in this respect, from the 'most advanced type of colour printer'. Nor is there any 'ink' in my laser-printer. Finally, the 'fixed volume of ink' is to my knowledge simply untrue; the whole point of colour printing is that the size of the dots (and thus the volume of the ink applied in the case of an ink-jet) varies.

- "typically CMYK with cyan, magenta, yellow and black ink) or 2e4 = 16 colours" This, too, is to me incomprehensible. Prints are made with dots of varying size (not with a 'fixed volume of ink'). The principle of colour printing is partly based on the fact that ink-and toner colours, like normal paint, can produce mixed colours when they overlap and also when they are spaced apart - a phenomenon well known to painters, notably the impressionists. Our eyes being able to discern just 3 colours, all possible colours can be achieved by printing dots of varying size in 3 colours (in practice 4 or more are used). I have no idea why the number '16' would be relevant here, nor where the formula comes from. In fact, I have no idea why all this is relevant to the article.

- "Higher-end inkjet printers can offer 5, 6 or 7 ink colours giving 32, 64 or 128 possible tones per dot location." Incomprehensible - see above.

- "Contrast this to a standard sRGB monitor where each pixel produces 256 intensities of light in each of three channels (RGB)." I have no idea what it is I am supposed to be contrasting here, but it DOES throw up a point: surely, the variation in size of a printer dot comes in discrete steps. I have, however, never seen this in the specification of a printer. Does anyone know more about this?

- "While some colour printers can produce variable drop volumes at each dot position, " Apart from the fact that this is, again, ink-jet-talk and that it is not the volume produced, but the dot-size produced which is relevant, I would like to know what colour printer is NOT able to do this.

- "the number of colours is still typically less than on a monitor." Why would that be? I can't follow the explanation. Nor can I see the relevance with respect to the explanation of the term 'dots per inch'.

- "if a 100×100-pixel image is to be printed inside a one-inch square, the printer must be capable of 400 to 600 dots per inch in order to accurately reproduce the image." This, I think, is true. But: First, the term 'accurately' seems unnecessarily vague. Hereabove, I used prnting "1:1" to denote printing a picture with one pixel translating to one 'colour-group' on the printer. (why not be precise instead of talking about 'accurately' or 'faithfully' and such) Second, the explanation leading up to this fact makes it seems like it's rocket science, while in fact it's quite trivial: To print 100 pixels "1:1", the printer uses 100 colour groups; in case of a four-colour printer, this means 400 coloured dots. In case of a 6-colour printer, this means 600 dots.

- Section "DPI or PPI in digital image files" No factual quibbles here; just the observation that the wording is imprecise, which doesn't help i.m.o. to further comprehension, and that some descriptions use difficult words unnecessarily. For example: "Some digital file formats record a DPI value, or more commonly a PPI (pixels per inch) value". Comment: Formats don't record anything, but computer programs do. Some formats offer the possibility of recording PPI - I do not know of ANY format offering the option of recording DPI. MANY computer programs confuse dpi and ppi and represent the ppi-value as dpi. A PPI value in the file only has meaning when recorded by a scanner program; camera's often record some value (Nikon gives 300 ppi, Canon gives 180 ppi) but these values are entirely without meaning. "If it is labeled as 250 PPI, " What does 'labeled' mean? Let's be precise. "An image may also be resampled to change the number of pixels". Incomprehensible seen the level of the article. Moreover: what has this to do with the explanation of "Dots per inch"?

- Precise wording can, I think, help the understanding. For example, instead of: "Changing the PPI to 100 in an image editing program would tell the printer to print it at a size of 10×10 inches. However, changing the PPI value would not change the size of the image in pixels which would still be 1,000 × 1,000." I would say: "Changing the PPI setting in the 'description' part of an image file (which can be done with an image editing program) would tell the printer to print it at a size of 10×10 inches. This, of course, does not change the pixels in the image file: the picture still consists of 1,000 × 1,000 pixels."

- Section "Computer monitor DPI standards" I.m.o. this is a good piece, drawing attention to what is indeed a major source of confusion. But the first time I read it I couldn't make head nor tails of it. While reading, I was waiting to learn what 'the problem' is and what 'confusion' was sown by Microsofts choice. I didn't get that, and I still don't get it. Furthermore, it doesnt help that the use of terms is a bit 'loose', while some odd expressions ("a resolution of 1 megapixels", "the intended size of the image") may put an unsuspecting reader on the wrong foot (as it did me). Also, to introduce the word 'vector image' the first time in the article with the sentence "For vector images, there is no equivalent of resampling an image when it is resized" seems quite inadequate. (there's not even a reference to some other article here) The core of the matter is not at all hard to describe: like I said hereabove: to make the screen image the same size as printed output, you need to know the ppi value of your monitor, and instruct the software accordingly. (here, now: I said it in one sentence) Overrating the PPI of the monitor leads, at least in serious graphics applications, to a 'larger-then-life' picture on-screen.(This, by the way, could be explained in some more detail) However, I still don't see any 'problem' or 'source of confusion'.

I suspect (but is this true?) that a temporary problem has been that Apple software, being written for Apple monitors, did not allow for the software being instructed this way: it simply assumed the monitor was 72 PPI, Apple's 'standard value'. However, by now all Apple graphics software can be so adjusted (can't it?). Windows, of course, had no say whatsoever over the PPI of the monitor used, and thus Windows software always went with the value used by the OS, which could be adjusted by the user in accordance with the specs of his monitor. (To appease the Apple fans: the Windows graphics software was, at the time, years behind Apple graphics software.)

All in all, the only possible 'sources of confusion' I can see are these:
  • Microsoft consistently uses DPI for PPI (where 'PPI' stands for 'screen pixels per inch'), and the dialogue box used to adjust the value insists that one set "the DPI value of the screen". This, of course, is nonsensical, since the "DPI value of the screen" is a fixed given, not a user's choice. I bet this wording has spread lots of confusion; it should have been something like "What is the PPI value of your monitor?", with an explanation of how to determine this value and of the consequences of making the software believe it is higher or lower than it actually is.
  • Second possible source of confusion: the many, many articles on the web making a big fuss about "Apple's 72 dpi" vs "Microsoft's 96 dpi", where in fact all this is of interest to IT historians only, or so it would seem to me.

In summary: I think this section, as it stands, is long and does not help the reader understand the term "dots per inch". I propose to strike it, or else to rewrite it in such a way that it deals better with common misunderstandings about "72 dpi" and "96 dpi". — Preceding unsigned comment added by Mabel2 (talkcontribs) 17:20, 9 January 2012 (UTC)[reply]

72 DPI (/ 96 DPI)


Section "Computer monitor DPI standards" is confusing and wrong, better to remove it. — Preceding unsigned comment added by 2.34.179.118 (talk) 10:46, 14 March 2013 (UTC)[reply]

I agree. The section just adds to the 96 dpi computer screen myth. (Or the 72 dpi myth.)
Draw a line, 960 pixels long. Measure it with a physical ruler. Is it 10 inches? No..?
Connect your laptop to your TV. Is the line 10 inches now? Still not?
Show the line on your iPhone. Still same size? Why not?
The section may explain a possible source of this myth, but then it should say so explicitly.--Vbakke (talk) 23:27, 1 June 2013 (UTC)[reply]
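A small sketch of the point Vbakke is making above: the physical length of a 960-pixel line depends entirely on the display's real pixel density, not on any nominal 72 or 96 DPI figure. The device PPI values below are illustrative examples, not measurements:

line_px = 960
for device, real_ppi in [("nominal 96 DPI screen", 96),
                         ("19-inch 1440x900 monitor", 89.4),
                         ("40-inch 1080p TV", 55.1),
                         ("4.7-inch 1334x750 phone", 326)]:
    print(f"{device}: {line_px / real_ppi:.1f} inches")
# Only on a screen that really is 96 PPI does the line measure 10 inches.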

...I don't know if it's just me being dumb, or what, but couldn't a whole load of the existing discussion on the article re: zoom levels and so on be replaced by a paraphrasing of "[...] meaning that on a Windows PC, to display the same piece of text with the same pixel resolution as on a Macintosh at '100%' zoom, on an otherwise identical display, the user would instead have to set a zoom of 75%".

You know, given that 1.333... is the reciprocal of 0.75 and all (i.e., dividing by 4/3 is the same as multiplying by 3/4). The current version really seems to overcomplicate things, given that it's such a simple mathematical relationship with such a simple solution, should you need to obtain something closer to Mac-like output (well, you'd also need to adjust the H and V size pretty much to their minimum settings even on a relatively small PC monitor, and even considering the extra 128 horizontal and 138 vertical pixels, given how tiny the original Macintosh 128K and 512K monitors were).
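For completeness, a tiny sketch of the 96-vs-72 relationship mentioned above: 96/72 is exactly 4/3, so matching the Mac's pixel count for the same nominal point size means zooming to 3/4 = 75% on the Windows side.

from fractions import Fraction

windows_ppi, mac_ppi = 96, 72
scale = Fraction(windows_ppi, mac_ppi)              # 4/3: one third more pixels per point on Windows
matching_zoom = Fraction(1, 1) / scale              # 3/4
print(scale, matching_zoom, float(matching_zoom))   # 4/3 3/4 0.75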

The rest of the talk about how different monitors and video cards can produce output with variable real-life PPI, along with all the examples, as fascinating as they are for a techno-cruft historian, are pretty irrelevant overall, not to mention fairly self-evident to anyone who's seen more than one model of computer monitor in their lifetime. It may as well be a table of apparent PPI for different size SD and HD televisions.

Incidentally, the part in the lede about monitor pitch could probably also be cut down to "dot pitches from around 0.42 mm to 0.22 mm, decreasing over time, and mainly in the low 0.30s or high 0.20s", instead of the (still non-exhaustive) list of more than half a dozen very similar sizes starting at 0.39 mm. I've owned a VGA monitor with a 0.42 pitch: coarse as hell, but it existed, was dirt cheap when bought, and was still sufficient, with barely any border on a 14" monitor with typical bezel thickness, for a reasonably clear 640x480 or 640x350 and a just-about-passable 720x400, thanks to the subpixel effects of mostly-monochrome text with tall pixels being displayed over multiple horizontally-offset triangular (hexagonal?) RGB clusters. 0.39 was more the practical maximum for actually-sharp text mode, and you needed a lower figure for SVGA (0.31? 0.34 for 15"?) or XGA (0.26 at 15", 0.31 at 17"?), but if you're mainly using Windows in VGA or playing games at 320x200 it's irrelevant. I have reason to believe that 0.42 wasn't even the worst, but I haven't much way of proving that.
Also, the article doesn't clarify that dot pitch only applied to colour monitors. The monochrome, single-phosphor, dot-mask-less CRTs used in early Macs, for hi-res output from the Atari ST and Amiga, for EGA/VGA mono and MDA/Hercules, and in 15 kHz form as a cheap option for non-game/graphics use of various 8- and 16-bit home computers (including CGA PCs and PCjrs) didn't HAVE any real concept of dot pitch, beyond the actual pixel output frequency, the scanline count, and how tightly focused the electron beam was - all of which were somewhat arbitrary, but typically gave a perceptually "clearer" and "sharper" appearance than the fuzzier-edged pixels of a typical colour TV or monitor. Certainly any suggestion of it being relevant to pre-colour Macs or to Hercules-equipped PCs is misleading. 193.63.174.254 (talk) 17:17, 14 March 2017 (UTC)[reply]

Move discussion in progress


There is a move discussion in progress on Talk:Kilometres per hour which affects this page. Please participate on that page and not in this talk page section. Thank you. —RMCD bot 00:59, 10 December 2013 (UTC)[reply]

Can someone replace the "PPI vs DPI" image for me please?


I've produced this rather more accurate one, but I don't have a Wiki Commons account and really don't want to end up adding to my teetering stack of one-hit accounts for no good reason. Could someone grab this and upload it in my stead? Take the credit if you like :p ... just so long as you also resist the temptation to rename it "Blue Balls.png" (even though that's better than the current file's name...)

It's currently hosted at: http://imgur.com/xwmMaAr

Thanks! 193.63.174.211 (talk) 11:07, 19 February 2014 (UTC)[reply]

Trying; you're credited as an anonymous contributor. –Be..anyone (talk) 14:33, 6 January 2015 (UTC)[reply]

Discussion about dimensionality and computation


I looked for this article to explain if "dots per inch" normally refers to a linear measurement (e.g. 100 horizontal pixels per horizontal inch) or to an area measurement (e.g. 100 pixels in a 1 inch by 1 inch square). Maybe confusion over this isn't very common, but the article never addresses this directly (I was able to figure out from some examples that it is linear). — Preceding unsigned comment added by 77.58.20.100 (talk) 22:15, 4 January 2015 (UTC)[reply]

The thing is, one would normally assume that the measurement units given for a particular dimension also imply the dimensionality of the measurement, and that's a universal feature whether you're talking about pixels per inch or glubelfarbs per cubic hectare. It's not really the place of an article like this to clear up basic mathematical or physical concepts for any random visitor who might not have quite grasped them yet.
Or to make it clearer: if you've got "something per inch", that's length. Linear. An inch is a linear measurement.
If you've got "something per square inch", that is area. Areal. Square inches are areal measurements.
And just to complete the usual set, for our three-dimensional world, if you have "something per cubic inch", that's volume. Volumetric. Cubic inches are volumetric measurements.
Additionally - as it's also something people often confuse - suffixing any of these with a time unit gives a rate: how many (square/cubic) inches of movement, coverage or transfer happen per unit of time. Multiply that rate by how long it has been going on and you're back to a plain total in (square/cubic) inches. It's not anything other than that.
(The very worst offender is confusing watt-hours with watts (whether unitary, kilo, mega, giga or more), particularly coming up with nonsense like "kilowatts per hour", or mismatching the unit dimensionality by saying that a 200 GW generator could power a million homes for a day (or that the energy it makes in one second could power a hundred thousand kettles), where either neglecting or needlessly including the time element makes the comparison totally meaningless.)
Watts are a rate measurement - joules per second (or, to be precise, volts multiplied by amps in the electrical domain) - an expression of work being carried out, i.e. a transfer of a certain amount of energy over a certain amount of time. In other words, how fast something happens, or in this case how much power is being applied. If you're picking up identical balls of energy out of a basket and throwing them at a wall, and each one is worth a joule, then watts measure how many balls you can throw per second. It's not something you can pick up and hold, but it expresses how frequently the somethings are moving one way or the other.
Watt-hours (or watt-minutes, -seconds, -days, etc.) are a cumulative measure: they express the *total* energy transferred across whatever period or discrete event you're measuring - (average) power multiplied by the time that power was exerted for, or how many of those energy balls you've thrown in total. It IS something you can pick up and have an amount of, but it says nothing at all about rate. If you've expended 1 watt-hour, there'll be 3,600 little squashy, sticky energy balls sitting in an untidy heap at the base of the wall you threw them at, after they impacted with an amusing squelch and then rolled down it. However, we've no way of knowing whether they got there by you throwing two per second for thirty minutes, three hundred and sixty per second for ten seconds, or a little under ten per day over the course of a year.
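A quick numerical sketch of that aside, under the usual definition that one watt is one joule per second (so 1 Wh = 3,600 J); the three scenarios are the ones listed above:

ONE_WH_IN_JOULES = 3600  # 1 W = 1 J/s, so 1 Wh = 3600 J

scenarios = [
    (2, 30 * 60),                          # 2 J/s for thirty minutes
    (360, 10),                             # 360 J/s for ten seconds
    (3600 / (365 * 86400), 365 * 86400),   # a trickle sustained for a year
]
for rate_watts, seconds in scenarios:
    joules = rate_watts * seconds
    print(round(joules), round(joules / ONE_WH_IN_JOULES, 3))  # 3600 J = 1.0 Wh every time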
Anyway, to return to the original query, Dots/Pixels Per Inch is linear. Totally linear. It's a very common measurement used in all kinds of contexts, and it's never been anything other. The confusion might come from how usually only one figure is given, and it's often (erroneously) mentioned as "pixel density" (rather than... frequency, I guess?), but that's simply because it's most common for pixels, or ink dots, to be square (or at least, arranged on a square grid), thus PPI/DPI is the same both horizontally and vertically. Where this ISN'T the case, you'll find that, if the person writing the spec sheet is behaving themselves, two different numbers are given, one for horizontal and one for vertical. This is normally only the case with printers, as digital displays largely standardised on square pixels a long time ago, but physical limitations tend to make it far easier to increase effective dot resolution in one direction relative to the paper transport path vs the other (which is generally not much of an issue unless you're printing something super-detailed and have some freedom to choose which way the paper is oriented in the printer, but standards, inertia and customer aversion to unfamiliar things mean they still stick with HxV dpi rather than giving areal density) - also the larger is almost always an integer or large-fraction multiple of the smaller, e.g. 1200x600, 5760x1440, and, from way back in the day, 240x216 on a 9-pin dot matrix.
There have been some displays with tall or wide pixels, but either they were analogue devices where the true rez was either variable or not ever really shown to the public (most especially TVs, where you got the daft case of being told either that they were "525-line" (or 405, 625 etc) when probably 10% of those had no picture data and another 10% were lost to overscan, or that they had some particular number of "TV Lines" (a completely gonzo figure using a Kell-factor corrected count of discernible black and white vertical lines on the screen over a width equal to the picture height... or something)) and the best you could expect was a dot pitch (ie the *distance* between the centres of neighbouring subpixel triplets, with it not being entirely clear whether that was the maximum or minimum radius) that you'd have to divide the *visible* screen dimensions into, usually after measuring them for yourself; OR they were digital panels where what you got was either just character counts (cols x rows) or the native pixel resolution plus a diagonal size in inches, with no real hint as to either the frame or the pixel aspect ratio (ie width divided by height - 1.0 for square, 1.33 for a typical 4:3 frame, 1.125 (9:8) for the pixels on a notional digital NTSC TV grid...). Sometimes they'd even be cheeky enough to count the SUBpixels as separate elements in their own right, especially on small-format LCDs where the triplets were arranged Trinitron-style (ie rectangles with vertical stripes of red, green, blue), though there was some justification in that they would be driven entirely independently to give, say, that supposed "480x232" resolution (with each triplet being somewhat fat, but each monochromatic element quite thin) rather than each group of three displaying the colour of a single coarse pixel, which could somewhat improve the perceived rez (especially for greyscale or desaturated material) at the expense of some chromatic aberration. ((This also happens with the triangular/hexagonal/bayer arrangement on e.g. camcorder and digital camera screens, and more egregiously the CCD sensors, but in that case you get a total pixel count as it can be hard to pin down exactly what the horizontal or vertical count even IS; typically, for true full-colour resolution, take the stated pixel/kilopixel/megapixel figure and divide it by 3... if you happen to know, or are able to measure the sensor or screen dimensions, turn that into an area and divide the number of sub or full pixels into it for the density))
Therefore, if what you want is areal, ACTUAL "pixel density", then what you need to do is as follows (a short calculation sketch follows at the end of this comment):
  • For printers (or scanners) where an "HxV" pair is given... multiply them together. Naturally this could get quite large; the modest 200x100 of a generic fax machine in default quality mode is still 20,000 dots per square inch. For an inkjet photo printer (or a film scanner) that might claim as high as 7200x3600, well, I can't actually be bothered working it out exactly, but it's probably in the region of 28,000,000 dpi.
  • For printers (or scanners) with only a single dpi figure given, or computer displays specified in ppi, just square that number. So "1200dpi" becomes 1,440,000 dpsi, and "250 ppi" becomes 62,500 ppsi.
  • Displays giving only a dot pitch, which for the most part will only be CRTs but does include some LCDs, divide 25.4 by the quoted figure to get ppi (dot pitch is given in millimetres, of which there are 25.4 to the inch), then square it - and accept that it's not going to be particularly accurate. EG a 0.28 dot pitch monitor has a linear resolution of just over 90ppi, and an areal density of 8229ppsi.
  • Displays only giving total pixel counts and a diagonal: you're going to have to bust out the tape measure, because manufacturers often twist the truth about screen size, and the aspect ratio isn't always guaranteed. If you just want to ballpark it, though, assume square pixels and apply Pythagoras. There are some shortcuts, particularly with the most common ratios (5:4, 4:3, 16:10 and 16:9), but in general: reduce the width:height ratio to X:1 (long edge over short edge), add 1 to the square of X (as a^2 + b^2 = c^2, and 1^2 is 1), and take the square root of the answer to get the ratio of the diagonal to the short edge. Divide the diagonal size in inches by that factor to get the short edge length (and multiply by X for the long edge), divide the edge length in inches into the number of pixels along that edge to get the linear pixel frequency, and square THAT to get the areal density. Phew.
(5:4 works out to root(25+16), or 6.403, for the diagonal factor - divide by that, then multiply by 5 or 4. 4:3 is the simple one, as the diagonal factor is 5 (hopefully familiar from school). 16:10 gives 18.868, and 16:9, via root(256+81), produces 18.358. In other words, in the 16:10 case, a 19" diagonal screen with a resolution of 1440x900, like the one I'm using right now, is roughly 16 by 10 inches in outline (or precisely, assuming the diagonal is EXACTLY 19.00", 16.11 inches wide by 10.07 inches high) with a resolution of 89.4 ppi and a density of about 7,988 ppsi. The 17-inch 5:4 monitor next to it, at 1280x1024, comes out to a slightly higher and Bill Gates-pleasing 96.4 ppi or about 9,297 ppsi - again, assuming the diagonal is exact.)
((if you want to figure it for non-square pixel screens under similar conditions, you're on your own, and I highly recommend just measuring the physical device if at all possible because there's no guarantee what the actual pixel or frame ratios might be, and there's a good chance they were fudged to fit within a particular size case or modified from some older design that was all round numbers and standard ratios originally but totally isn't any more thanks to, say, having 130 of its 480 vertical pixels sheared off simply because the connected video card can't render any more than 350 lines. Grab a ruler and a calculator, work out the ppi in each dimension, and multiply them.)) 193.63.174.254 (talk) 19:11, 14 March 2017 (UTC)[reply]
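As promised above, a minimal sketch of the conversions listed in this comment; the function names and example figures are illustrative, not from any standard library:

import math

def areal_from_hv(h_dpi, v_dpi):
    """Printer/scanner quoted as 'H x V dpi' -> dots per square inch."""
    return h_dpi * v_dpi

def areal_from_linear(dpi):
    """Single linear dpi/ppi figure -> dots/pixels per square inch."""
    return dpi ** 2

def ppi_from_dot_pitch(pitch_mm):
    """Dot pitch in millimetres -> approximate linear ppi (25.4 mm per inch)."""
    return 25.4 / pitch_mm

def ppi_from_diagonal(width_px, height_px, diagonal_in):
    """Square-pixel display: pixel counts plus diagonal size -> linear ppi."""
    return math.hypot(width_px, height_px) / diagonal_in

print(areal_from_hv(200, 100))                        # fax machine: 20,000 dots per sq in
print(areal_from_hv(7200, 3600))                      # photo printer: 25,920,000 dots per sq in
print(areal_from_linear(1200))                        # 1,440,000 dpsi
print(round(ppi_from_dot_pitch(0.28), 1))             # ~90.7 ppi, ~8,229 ppsi when squared
print(round(ppi_from_diagonal(1440, 900, 19.0), 1))   # ~89.4 ppi
print(round(ppi_from_diagonal(1280, 1024, 17.0), 1))  # ~96.4 ppi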

Proposed Metrication


There is an odd line at the beginning of this section that is a complete non sequitur:

For audio compression, see DPCM.

Huh? I think it should be removed. — Preceding unsigned comment added by Rwillfrd (talkcontribs) 15:36, 8 May 2015 (UTC)[reply]

Pixel size and font size


The article now says (diff):

a 12-point font was represented with 12 pixels on a Macintosh, and 16 pixels (or a physical display height of maybe 19/72 of an inch) on a Windows platform at the same zoom

19/72 of an inch is 6.7 mm, which would imply a physical pixel size of 0.42 mm (6.7 mm divided by 16 pixels). That makes the pixel larger than a typographic point (0.35 mm). But I have never come across such large pixel sizes today; I think such displays have not been produced for at least the last 25 years.

Monitors over the last 20 years have typically had pixel sizes along these lines (see the sketch below this comment for the arithmetic):

  • 14", 800*600, 0.36 mm
  • 15", 800*600, 0.38 mm
  • 17", 1024*768, 0.34 mm
  • 17", 1280*1024, 0.26 mm
  • 18,5", 1366*768, 0.30 mm

And so on. Most modern displays are aimed at 0.26 mm (96 DPI) pixel size or less.

I can only imagine some older small 14" monitors with a maximum resolution of 640*480, which gives 0.45 mm, or modern ≥36" LCD panels with no more than 1920*1080 resolution. But I doubt that such large displays are used for ordinary work.

If we imagine the early days of MS Windows (the mid-1990s) with 15" monitors being the most widespread (I believe), 16 pixels would be 6.08 mm (about 17/72") and 13 pixels 4.94 mm (14/72"). But everything depends on the actual pixel size of the actual monitor, so actual physical font sizes must vary greatly (if such a word can be applied to millimetres).--Lüboslóv Yęzýkin (talk) 17:54, 6 December 2015 (UTC)[reply]
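A small sketch of the pixel-size arithmetic in the comment above, taking the nominal diagonal at face value (as the list does) and assuming square pixels; figures are illustrative:

import math

MM_PER_INCH = 25.4

def pixel_pitch_mm(width_px, height_px, diagonal_in):
    """Physical pixel size in mm for a square-pixel display of nominal diagonal size."""
    return diagonal_in * MM_PER_INCH / math.hypot(width_px, height_px)

for size_in, w, h in [(14, 800, 600), (15, 800, 600), (17, 1024, 768),
                      (17, 1280, 1024), (18.5, 1366, 768)]:
    pitch = pixel_pitch_mm(w, h, size_in)
    print(f'{size_in}" {w}x{h}: {pitch:.2f} mm/pixel, '
          f'16 px = {16 * pitch:.1f} mm')
# On the 15" 800x600 example, 16 pixels come to about 6.1 mm (~17/72 inch),
# matching the figure worked out above.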

All that is perfectly true, and you have done good maths. However you haven't fully considered something, here - it says "was". As in, we're thinking about the state of affairs sometime around 1990 when Microsoft made this decision, and VGA monitors of 13, 14, or 15 inches and resolutions of 640x480 in 16 colours were hot stuff. Also, the "dot pitch" is a red herring here, as in the early 90s we're talking exclusively about CRTs - whilst laptops with LCD panels do exist, the screens are all pretty small (under 10" diagonal) and often had non-square pixels, which messes things up somewhat. All the pitch tells us is how fine or coarse the phosphor pattern is on the screen itself, and so how crisp or blurry the output will be... and it only applies to colour monitors. Monochrome ones don't *have* a pitch, as their business end is literally just a uniform layer of phosphor that the electron beam is scanned across, and the size of the glowing spot is up to whoever calibrated the tube at the factory.
A typical "14 inch" monitor might have done well to actually provide a usable image of 13 inches diagonal, between bezel, tube roundness, and the need to leave a small border between the edges of the generated image and the physical frame, or realistically more like 12.5 (which makes the maths easier). This would therefore, following Pythagorean theorem, be 10 inches wide (25.4cm) by 7.5 inches high. Or, at 640 pixels wide, 64 ppi in real terms.
16 pixels, at 64 ppi, is a quarter of an inch - in other words 18/72 of an inch, which is very close to 19/72. If we use the difference between them to work out how large a diagonal we ACTUALLY need to make that (rather arbitrary-seeming, I agree) size come true, we get 12.5 x (19/18), or near as dammit 13.2 inches. This could be achieved with a narrow-bezelled 14" adjusted to give a borderless display, and would be well within the capabilities of a 15" monitor. And yes, it would be pretty coarse-looking, but, guess what... that's actually what old-school computer monitors were really like. VGA looked amazing when you first saw it, because you'd never seen anything that GOOD before, even if it's really clunky and blocky by modern standards. It was still good enough, on a mid-90s laptop I picked up for a laughably low price in the early noughties, to work just fine for basic word-processor hacking, so long as I ran it full-screened with a minimal toolbar, and it could even stand up to some spreadsheet and PowerPoint preparation work and PDF display, though those all did require pretty frequent use of the zoom in/out hotkeys. (Well, OK, slight cheat there, as I almost immediately upgraded it from Win 3.1 to Win 95 and hacked the Plus! pack font antialiasing engine into it, but at such low resolutions that only made a minor difference, and it still switched off completely for small font sizes and low zoom factors, as it was more for prettification than readability.)
But if you think that's bad, just remember the Apple standard was essentially akin to zooming out to 75% on that Windows machine, playing with the monitor controls to reduce the area covered by about 8% on each axis, then physically masking off 20% of the horizontal and nearly 30% of the vertical area of that reduced image. Oh, and the screen is VGA monochrome, not colour. As in, 1 bit black and white, not even the 64 shades of grey you might get from using a colour-to-mono greyscale adaptor cable. Really, the fact that they made the fonts look a bit bigger in order to make them easier to read AND give you a better idea, for the majority of the time you were editing, of how large things would "feel" on the printed page when read at a normal distance (as you were sat further away from the screen than you would hold a printout), by taking the opportunity to make full use of the greater pixel resolution and physical monitor size, is a pretty small consideration vs just how small, cramped, and grainy (and colourless) things would feel if you went from there to a Mac Classic.
(Yes, by then Apple were already making Macintoshes with larger, higher-resolution screens, either as standard or as options, but on the whole they were no larger or higher-resolution than a typical PC VGA unless you spent SERIOUS money, and a lot of them were in fact somewhere in between VGA and the Classic, which was itself still in production. And in any case, if you were using one of those for document editing, what were you most likely going to do - leave it at 100% zoom and suffer blocky fonts, with a wide unused border on each side of the page, just so you could see a few more lines of text at once, or increase the zoom level until the text roughly filled the full width of the page, to maybe 125, 133 or 150%, giving exactly the same effect?) — Preceding unsigned comment added by 193.63.174.254 (talk) 20:03, 14 March 2017 (UTC)
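A sketch of the early-90s CRT arithmetic used in the comment above: a nominal 14" monitor with roughly a 12.5" visible 4:3 image at 640x480, and what 16 pixels measure on it. The visible-diagonal figure is the commenter's estimate, not a specification:

visible_diag_in = 12.5
width_in = visible_diag_in * 4 / 5      # 4:3 frame -> 10 inches wide
height_in = visible_diag_in * 3 / 5     # 7.5 inches high
real_ppi = 640 / width_in               # 64 ppi in real terms
font_px = 16

print(width_in, height_in, real_ppi)    # 10.0 7.5 64.0
print(font_px / real_ppi)               # 0.25 inch, i.e. 18/72 of an inch
print(visible_diag_in * 19 / 18)        # ~13.2": the diagonal needed for a full 19/72"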
Besides, if you were going to do a serious amount of text hacking at the time, and especially if you were working in an office on a company machine, you were probably going to use a Hercules card or some compatible clone, which would give you 720 pixels across (though only 350 vertical, admittedly, and in one-bit mono) and thus roughly a 12.5% wider view of the page than VGA itself (and more than 40% wider than the Mac - more than making up for the 33% magnification), on a monitor that was typically about halfway in size between the two other standards (and didn't even have any concept of dot pitch). The one I used for a while, after being given an old 286 office PC to mess with by a family friend, was 12". Though I'm not going to trouble with working it out now, I wouldn't be surprised to find that the actual output would then have appeared more or less 100% true to life... and at the time it was far easier to just tweak the analogue monitor sizing controls to make the image match a printed page you wanted to compare against, rather than agonising over whether 97% or 96% zoom was closer.
And, well, Microsoft weren't dumb, they knew their market, and indeed did enough of the same work themselves. People with VGA colour were probably home users, schools, or dabbling with full-page design, so total accuracy wasn't so important vs making it look big and bold and flashy, or suited to DTP. Those who were preparing text documents as a job, if they weren't still using trusty old 80x25 textmode, more than likely had Hercules. The driver is right there as part of the defaults provided on the Windows 3.1 installation discs, along with EGA and VGA (and the mysterious 720x512, 16 colour "Video 7" mode which to this day I've never found out the origin of, or managed to find a card it works with). The reduction in simultaneously visible lines of text whilst in the middle of a copy-generation, transcribing, dictation or linear editing task wouldn't have been too much of an issue; at 16 pixels high, on a 350 line screen, you can still see probably 15 lines at once between the title bar, menu, toolbar and status line, and that's almost as many as I can see in this edit box right now. If you wanted to get a feel for the whole page layout, then you could zoom out to whatever arbitrary scale allowed it, and not being able to make out anything more than the largest headlines didn't really matter, as you could zoom back in to make any actual text edits. Or, yknow, get your wallet out and buy a high-rez full-page portrait monitor and matching video card ;)
Everything that came after that point, with the extension into SVGA, XGA, SXGA, widescreen, increasing monitor sizes and capabilities, LCDs, etc is all moot, because this happened years prior to all those things, and was a design decision taken by a company (director?) who didn't have a crystal ball, but did know how to make best use of their (his?) chosen platform's unique capabilities and advantages over their main rival. 193.63.174.254 (talk) 19:56, 14 March 2017 (UTC)[reply]

Merge tag removed


I have removed a merge tag from Oct 2014 from Dots per inch#Computer monitor DPI standards. The merger proposal had no mirrored tag at the proposed target Pixel density, and this article already refers to Pixel density in a hatnote. This is without prejudice to any other editor proposing the merge again per WP:MERGE. Shhhnotsoloud (talk) 15:56, 12 May 2017 (UTC)[reply]

Typo in caption to "blue balls" figure


The caption refers to a grid of 60x60 and then describes this as giving 36 points. It seems obvious to me that the text was intended to read 6x6, not 60x60, so I have edited the text with that change. — Preceding unsigned comment added by 67.210.40.116 (talk) 15:20, 9 June 2019 (UTC)[reply]

DPI against PPI

  • DPI (Dots Per Inch) is a unit of measure that only exists in the physical world - a dot on something physical: paper, plastic, metal, etc. It can NEVER exist in the digital world.
  • PPI (Pixels Per Inch) is a unit of measure that only exists in the digital world. It can never exist in the physical world.

In my opinion those two units are used in such a way that a reader will believe that each of them can exist in both worlds, e.g. "Dots per inch (DPI, or dpi[1]) is a measure of spatial printing, video or image scanner dot density".
DPI cannot exist in "video or image scanner".
I haven't read the whole article (I'm a graphics worker here and at Commons), but I just want to point this out so the article can be considered for review. --always ping me-- Goran tek-en (talk) 11:55, 13 November 2021 (UTC)[reply]

