What do you call a device that converts printed material such as text and pictures into a form the computer can use?

The computer keyboard is used to enter text information into the computer, as when you type the contents of a report. The keyboard can also be used to type commands directing the computer to perform certain actions. Commands are typically chosen from an on-screen menu using a mouse, but there are often keyboard shortcuts for giving these same commands.

In addition to the keys of the main keyboard (used for typing text), keyboards usually also have a numeric keypad (for entering numerical data efficiently), a bank of editing keys (used in text editing operations), and a row of function keys along the top (to easily invoke certain program functions). Laptop computers, which don’t have room for large keyboards, often include a “fn” key so that other keys can perform double duty (such as having a numeric keypad function embedded within the main keyboard keys).

Improper use or positioning of a keyboard can lead to repetitive-stress injuries. Some ergonomic keyboards are designed with angled arrangements of keys and with built-in wrist rests that can minimize your risk of RSIs.

Most keyboards attach to the PC via a PS/2 connector or USB port (newer). Older Macintosh computers used an ADB connector, but for several years now all Mac keyboards have connected using USB.

Pointing Devices

The graphical user interfaces (GUIs) in use today require some kind of device for positioning the on-screen cursor. Typical pointing devices are: mouse, trackball, touch pad, trackpoint, graphics tablet, joystick, and touch screen.

Pointing devices, such as a mouse, connect to the PC via a serial port (old), PS/2 mouse port (newer), or USB port (newest). Older Macs used ADB to connect their mice, but all recent Macs use USB (usually to a USB port right on the keyboard).

Image scanner

A flatbed scanner. Documents or images are placed face-down beneath the cover (shown closed here).

An image scanner—often abbreviated to just scanner—is a device that optically scans images, printed text, handwriting or an object and converts it to a digital image. Commonly used in offices are variations of the desktop flatbed scanner where the document is placed on a glass window for scanning. Hand-held scanners, where the device is moved by hand, have evolved from text scanning "wands" to 3D scanners used for industrial design, reverse engineering, test and measurement, orthotics, gaming and other applications. Mechanically driven scanners that move the document are typically used for large-format documents, where a flatbed design would be impractical.

Modern scanners typically use a charge-coupled device (CCD) or a contact image sensor (CIS) as the image sensor, whereas drum scanners, developed earlier and still used for the highest possible image quality, use a photomultiplier tube (PMT) as the image sensor. A rotary scanner, used for high-speed document scanning, is a type of drum scanner that uses a CCD array instead of a photomultiplier. Non-contact planetary scanners essentially photograph delicate books and documents. All these scanners produce two-dimensional images of subjects that are usually flat, but sometimes solid; 3D scanners produce information on the three-dimensional structure of solid objects.

Digital cameras can be used for the same purposes as dedicated scanners. When compared to a true scanner, a camera image is subject to a degree of distortion, reflections, shadows, low contrast, and blur due to camera shake (reduced in cameras with image stabilization). Resolution is sufficient for less demanding applications. Digital cameras offer advantages of speed, portability and non-contact digitizing of thick documents without damaging the book spine. In 2010 scanning technologies were combining 3D scanners with digital cameras to create full-color, photo-realistic 3D models of objects.[1]

Scans are usually transferred to the computer the unit is attached to. Some scanners are also able to store scans on standalone flash media (e.g. memory cards and USB sticks).[2]

In biomedical research, detection devices for DNA microarrays are called scanners as well. These are high-resolution systems (up to 1 µm/pixel), similar to microscopes, with detection done via a CCD or a photomultiplier tube.

History of scanners

Pantelegraph

Caselli's pantelegraph mechanism

Belinograph BEP2V wirephoto machine by Edouard Bélin, 1930

Modern scanners are considered the successors of early telephotography and fax input devices.

The pantelegraph (Italian: pantelegrafo; French: pantélégraphe) was an early form of facsimile machine transmitting over normal telegraph lines developed by Giovanni Caselli, used commercially in the 1860s, that was the first such device to enter practical service. It used electromagnets to drive and synchronize movement of pendulums at the source and the distant location, to scan and reproduce images. It could transmit handwriting, signatures, or drawings within an area of up to 150 × 100 mm.

Édouard Belin's Belinograph of 1913, which scanned using a photocell and transmitted over ordinary phone lines, formed the basis for the AT&T Wirephoto service. In Europe, similar services were called a Belino. Used by news agencies from the 1920s to the mid-1990s, these machines consisted of a rotating drum with a single photodetector running at a standard speed of 60 or 120 rpm (later models up to 240 rpm). They sent a linear analog AM signal through standard telephone voice lines to receivers, which synchronously printed the proportional intensity on special paper. Color photos were sent as three separate RGB-filtered images in succession, but only for special events because of transmission costs.

Types

Drum

The first image scanner developed for use with a computer was a drum scanner. It was built in 1957 at the US National Bureau of Standards by a team led by Russell A. Kirsch. The first image ever scanned on this machine was a 5 cm square photograph of Kirsch's then-three-month-old son, Walden. The black and white image had a resolution of 176 pixels on a side.[3]

Drum scanners capture image information with photomultiplier tubes (PMT), rather than the charge-coupled device (CCD) arrays found in flatbed scanners and inexpensive film scanners. "Reflective and transmissive originals are mounted on an acrylic cylinder, the scanner drum, which rotates at high speed while it passes the object being scanned in front of precision optics that deliver image information to the PMTs. Modern color drum scanners use three matched PMTs, which read red, blue, and green light, respectively. Light from the original artwork is split into separate red, blue, and green beams in the optical bench of the scanner with dichroic filters."[4] Photomultipliers offer superior dynamic range, and for this reason drum scanners can extract more detail from very dark shadow areas of a transparency than flatbed scanners using CCD sensors. The smaller dynamic range of the CCD sensors, versus photomultiplier tubes, can lead to loss of shadow detail, especially when scanning very dense transparency film.[5] While mechanics vary by manufacturer, most drum scanners pass light from halogen lamps through a focusing system to illuminate both reflective and transmissive originals.

The drum scanner gets its name from the clear acrylic cylinder, the drum, on which the original artwork is mounted for scanning. Depending on size, it is possible to mount originals up to 20 by 28 inches (510 mm × 710 mm), but maximum size varies by manufacturer. "One of the unique features of drum scanners is the ability to control sample area and aperture size independently. The sample size is the area that the scanner encoder reads to create an individual pixel. The aperture is the actual opening that allows light into the optical bench of the scanner. The ability to control aperture and sample size separately are particularly useful for smoothing film grain when scanning black-and-white and color negative originals."[4]

While drum scanners are capable of scanning both reflective and transmissive artwork, a good-quality flatbed scanner can produce good scans from reflective artwork. As a result, drum scanners are rarely used to scan prints now that high-quality, inexpensive flatbed scanners are readily available. Film, however, is where drum scanners continue to be the tool of choice for high-end applications. Because film can be wet-mounted to the scanner drum, which enhances sharpness and masks dust and scratches, and because of the exceptional sensitivity of the PMTs, drum scanners are capable of capturing very subtle details in film originals.

The situation as of 2014 was that only a few companies continued to manufacture and service drum scanners. While prices of both new and used units dropped from the start of the 21st century, they were still much more costly than CCD flatbed and film scanners. Image quality produced by flatbed scanners had improved to the degree that the best ones were suitable for many graphic-arts operations, and they replaced drum scanners in many cases as they were less expensive and faster. However, drum scanners with their superior resolution (up to 24,000 ppi), color gradation, and value structure continued to be used for scanning images to be enlarged, and for museum-quality archiving of photographs and print production of high-quality books and magazine advertisements. As second-hand drum scanners became more plentiful and less costly, many fine-art photographers acquired them.

Flatbed

This type of scanner is sometimes called a reflective scanner because it works by shining white light onto the object to be scanned and reading the intensity and color of the light reflected from it, usually a line at a time. Flatbed scanners are designed for scanning prints or other flat, opaque materials; some have transparency adapters available, but in most cases these are not well suited to scanning film.[6]

CCD scanner

"A flatbed scanner is usually composed of a glass pane (or platen), under which there is a bright light (often xenon, LED or cold cathode fluorescent) which illuminates the pane, and a moving optical array in CCD scanning. CCD-type scanners typically contain three rows (arrays) of sensors with red, green, and blue filters."[7]

CIS scanner

Scanner unit with CIS. A: assembled, B: disassembled; 1: housing, 2: light conductor, 3: lenses, 4: chip with two RGB-LEDs, 5: CIS

Contact image sensor (CIS) scanning consists of a moving set of red, green and blue LEDs strobed for illumination and a connected monochromatic photodiode array under a rod lens array for light collection. "Images to be scanned are placed face down on the glass, an opaque cover is lowered over it to exclude ambient light, and the sensor array and light source move across the pane, reading the entire area. An image is therefore visible to the detector only because of the light it reflects. Transparent images do not work in this way, and require special accessories that illuminate them from the upper side. Many scanners offer this as an option."[7]

Film

DSLR camera and slide scanner

This type of scanner is sometimes called a slide or transparency scanner. It works by passing a narrowly focused beam of light through the film and reading the intensity and color of the light that emerges.[6] "Usually, uncut film strips of up to six frames, or four mounted slides, are inserted in a carrier, which is moved by a stepper motor across a lens and CCD sensor inside the scanner. Some models are mainly used for same-size scans. Film scanners vary a great deal in price and quality."[8] The lowest-cost dedicated film scanners sell for less than $50 and may be sufficient for modest needs; from there, quality and features step up through models priced into five figures. "The specifics vary by brand and model and the end results are greatly determined by the level of sophistication of the scanner's optical system and, equally important, the sophistication of the scanning software."[9]

Roller scanner

Scanners are available that pull a flat sheet over the scanning element between rotating rollers. They can only handle single sheets up to a specified width (typically about 210 mm, the width of many printed letters and documents), but can be very compact, requiring just a pair of narrow rollers between which the document is passed. Some are portable, powered by batteries and with their own storage, transferring stored scans to a computer later over USB or another interface.

3D scanner

3D scanners collect data on the three-dimensional shape and appearance of an object.

Planetary scanner

Planetary scanners scan a delicate object without physical contact.

Hand

Hand scanners are moved over the subject to be imaged by hand. There are two different types: document and 3D scanners.

Hand document scanner

A hand scanner with its interface module.

Hand-held document scanners are manual devices that are dragged across the surface of the image to be scanned by hand. Scanning documents in this manner requires a steady hand, as an uneven scanning rate produces distorted images; an indicator light on the scanner warns if motion is too fast. They typically have a "start" button, which is held by the user for the duration of the scan; some switches to set the optical resolution; and a roller, which generates a clock pulse for synchronization with the computer. "Older hand scanners were monochrome, and produced light from an array of green LEDs to illuminate the image";[8] later ones scan in monochrome or color, as desired. A hand scanner may have a small window through which the document being scanned can be viewed. In the early 1990s many hand scanners had a proprietary interface module specific to a particular type of computer, such as an Atari ST or Commodore Amiga; since its introduction, USB has become the interface most commonly used. As hand scanners are much narrower than most normal document or book sizes, software (or the end user) must combine several narrow "strips" of scanned document to produce the finished article.

Inexpensive portable battery-powered "glide-over" hand scanners, typically capable of scanning an area as wide as a normal letter and much longer, remained available as of 2014.

Hand 3D scanner

Handheld 3D scanners are used in industrial design, reverse engineering, inspection and analysis, digital manufacturing and medical applications. "To compensate for the uneven motion of the human hand, most 3D scanning systems rely on the placement of reference markers, typically adhesive reflective tabs that the scanner uses to align elements and mark positions in space."[8]

Portable

Image scanners are usually used in conjunction with a computer, which controls the scanner and stores scans. Small portable scanners, either roller-fed or "glide-over" hand-operated, battery-powered and with their own storage, are available for use away from a computer; stored scans can be transferred later. Many can scan both small documents, such as business cards and till receipts, and letter-sized documents.

Keyboard document scanner

Example of the Imaging Keyboard-Scanner

A document scanner embedded inside a computer keyboard is available when needed yet takes up no extra desk space.

Smartphone scanner app

The higher-resolution cameras fitted to some smartphones can produce reasonable-quality document scans: the user photographs the document with the phone's camera and post-processes it with a scanning app, a range of which are available for most phone operating systems. Such apps whiten the background of a page, correct perspective distortion so that a rectangular document regains its shape, convert to black-and-white, and so on. Many such apps can scan multiple-page documents with successive camera exposures and output them either as a single file or as multiple page files. Some can save documents directly to online storage locations, such as Dropbox and Evernote, send them via email, or fax them via email-to-fax gateways.
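The perspective-correction step these apps perform can be sketched as fitting a homography, a 3×3 projective transform that maps the four photographed corners of the page onto an upright rectangle. Below is a minimal sketch using NumPy's SVD to solve the standard direct linear transform (DLT) equations; the function names and corner coordinates are illustrative assumptions, not any particular app's API:

```python
import numpy as np

def fit_homography(src, dst):
    """Solve for the 3x3 projective transform (homography) mapping the four
    src corners onto the four dst corners, via the standard DLT equations."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography is the null vector of this 8x9 system: the right
    # singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    return vt[-1].reshape(3, 3)

def warp_point(h, p):
    """Apply homography h to a 2D point, with homogeneous division."""
    x, y, w = h @ np.array([p[0], p[1], 1.0])
    return (x / w, y / w)

# Corners of a skewed page in a photo, mapped to an upright A4-proportioned
# rectangle (the coordinates are made-up examples).
page = [(120, 80), (980, 130), (1010, 1400), (90, 1350)]
upright = [(0, 0), (827, 0), (827, 1169), (0, 1169)]
H = fit_homography(page, upright)
```

A real app would then resample every pixel of the photo through the inverse of `H`; with four exact correspondences the fitted transform maps each corner onto its target exactly.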

Smartphone scanner apps can be broadly divided into three categories:

  1. Document scanning apps, primarily designed to handle documents and output PDF (and sometimes JPEG) files;
  2. Photo scanning apps, which output JPEG files and have editing functions suited to photos rather than documents;
  3. Barcode and QR code scanning apps, which search the internet for information associated with the scanned code.[10]

Scan quality

Color scanners typically read RGB (red-green-blue) data from the array. This data is then processed with a proprietary algorithm to correct for different exposure conditions, and sent to the computer via the device's input/output interface (usually USB; older units used SCSI or a bidirectional parallel port).

Color depth varies depending on the scanning array characteristics, but is usually at least 24 bits. High-quality models have 36–48 bits of color depth.

Another qualifying parameter for a scanner is its resolution, measured in pixels per inch (ppi), sometimes more accurately referred to as samples per inch (spi). Instead of the scanner's true optical resolution (the only meaningful parameter), manufacturers like to cite the interpolated resolution, which is much higher thanks to software interpolation. As of 2009, a high-end flatbed scanner could scan up to 5400 ppi, and drum scanners had an optical resolution of between 3,000 and 24,000 ppi.

"Effective resolution" is the true resolution of a scanner, and is determined using a resolution test chart. The effective resolution of most consumer flatbed scanners is considerably lower than the manufacturer's stated optical resolution. For example, the Epson V750 Pro is rated by its manufacturer at 4800 dpi and 6400 dpi (dual lens),[11] but one test concluded: "According to this we get a resolution of only about 2300 dpi - that's just 40% of the claimed resolution!"[12] Its dynamic range is claimed to be 4.0 Dmax, but "Regarding the density range of the Epson Perfection V750 Pro, which is indicated as 4.0, one must say that here it doesn't reach the high-quality [of] film scanners either."[12]

Manufacturers often claim interpolated resolutions as high as 19,200 ppi; but such numbers carry little meaningful value, because the number of possible interpolated pixels is unlimited and doing so does not increase the level of captured detail.

The size of the file created increases with the square of the resolution; doubling the resolution quadruples the file size. A resolution must be chosen that is within the capabilities of the equipment, preserves sufficient detail, and does not produce a file of excessive size. The file size can be reduced for a given resolution by using "lossy" compression methods such as JPEG, at some cost in quality. If the best possible quality is required lossless compression should be used; reduced-quality files of smaller size can be produced from such an image when required (e.g., image designed to be printed on a full page, and a much smaller file to be displayed as part of a fast-loading web page).
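The square-law relationship between resolution and file size is easy to check with a few lines of arithmetic (a sketch; the page size and resolutions here are arbitrary examples):

```python
def scan_size_bytes(width_in, height_in, ppi, bytes_per_pixel=3):
    """Uncompressed scan size: pixel count times bytes per pixel
    (3 bytes per pixel for a 24-bit RGB image)."""
    return round(width_in * ppi) * round(height_in * ppi) * bytes_per_pixel

base = scan_size_bytes(8.5, 11, 300)    # a letter-size page at 300 ppi
double = scan_size_bytes(8.5, 11, 600)  # the same page at 600 ppi
print(double / base)  # -> 4.0: doubling the resolution quadruples the size
```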

Purity can be diminished by scanner noise, optical flare, poor analog to digital conversion, scratches, dust, Newton's rings, out of focus sensors, improper scanner operation, and poor software. Drum scanners are said to produce the purest digital representations of the film, followed by high end film scanners that use the larger Kodak Tri-Linear sensors.

The third important parameter for a scanner is its density range (dynamic range), or Drange (see Densitometry). A high density range means that the scanner is able to record both shadow detail and highlight detail in one scan. The density of film is measured on a base-10 log scale and varies between 0.0 (transparent) and 5.0, about 16 stops.[13] Density range is the portion of the 0-to-5 scale a film occupies, and Dmin and Dmax denote the least dense and most dense measurements on a negative or positive film. The density range of negative film is up to 3.6d,[13] while slide film's dynamic range is 2.4d.[13] Color negative density range after processing is 2.0d, thanks to compression of the 12 stops into a small density range. Dmax falls in the shadows on slide film and in the highlights on negative film. Some slide films can have a Dmax close to 4.0d with proper exposure, and so can black-and-white negative film.
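The conversion between density and photographic stops follows directly from the base-10 log scale: one stop is a factor of 2 in light, so a density d spans d / log10(2) stops. A quick check of the figures above:

```python
import math

def density_to_stops(d):
    """Optical density is log10 of attenuation; one photographic stop is a
    factor of 2, so a density range d spans d / log10(2) stops."""
    return d / math.log10(2)

print(round(density_to_stops(5.0), 1))  # -> 16.6, the "about 16 stops" of the 0-5.0 scale
print(round(density_to_stops(3.6), 1))  # -> 12.0 stops for negative film's 3.6d range
```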

Consumer-level flatbed photo scanners have a dynamic range in the 2.0–3.0 range, which can be inadequate for scanning all types of photographic film, as Dmax can be and often is between 3.0d and 4.0d with traditional black-and-white film. Color film compresses its 12 stops of a possible 16 stops (film latitude) into just 2.0d of space via the process of dye coupling and removal of all silver from the emulsion. (Kodak Vision3 has 18 stops.) So, color negative film scans the easiest of all film types on the widest range of scanners. Because traditional black-and-white film retains the image-creating silver after processing, its density range can be almost twice that of color film. This makes scanning traditional black-and-white film more difficult; it requires a scanner with at least a 3.6d dynamic range, and also a Dmax between 4.0d and 5.0d. High-end (photo lab) flatbed scanners can reach a dynamic range of 3.7 and a Dmax around 4.0d. Dedicated film scanners[14] have a dynamic range between 3.0d and 4.0d.[13] Office document scanners can have a dynamic range of less than 2.0d.[13] Drum scanners have a dynamic range of 3.6–4.5.

By combining full-color imagery with 3D models, modern hand-held scanners are able to completely reproduce objects electronically. The addition of 3D color printers enables accurate miniaturization of these objects, with applications across many industries and professions.

For scanner apps, the scan quality is highly dependent on the quality of the phone camera and on the framing chosen by the user of the app.[15]

Computer connection

A photographic print being scanned into a computer at the photo desk of the Detroit News in the early 1990s.

Scans must virtually always be transferred from the scanner to a computer or information storage system for further processing or storage. There are two basic issues: (1) how the scanner is physically connected to the computer and (2) how the application retrieves the information from the scanner.

Direct physical connection to a computer

The file size of a scan can be up to about 100 megabytes for a 600 dpi, 23 × 28 cm (9 × 11 in, slightly larger than A4) uncompressed 24-bit image. Scanned files must be transferred and stored. Scanners can generate this volume of data in a matter of seconds, making a fast connection desirable.
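The 100-megabyte figure can be verified directly from the pixel count, using the 9 × 11 in dimensions given above:

```python
# 600 dpi scan of a 9 x 11 in page, 24-bit color (3 bytes per pixel):
width_px = 9 * 600    # 5400 pixels
height_px = 11 * 600  # 6600 pixels
size_bytes = width_px * height_px * 3
print(size_bytes)          # -> 106920000 bytes
print(size_bytes / 2**20)  # -> ~102 MiB, i.e. "up to about 100 megabytes"
```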

Scanners communicate to their host computer using one of the following physical interfaces, listing roughly from slow to fast:

  • Parallel port - Connecting through a parallel port is the slowest common transfer method. Early scanners had parallel port connections that could not transfer data faster than 70 kilobytes/second. The primary advantages of the parallel port connection were economy and ease of setup: it avoided adding an interface card to the computer.
  • GPIB - General Purpose Interface Bus. Certain drum scanners, like the Howtek D4000, featured both a SCSI and a GPIB interface. The latter conforms to the IEEE-488 standard, introduced in the mid-1970s. The GPIB interface has been used by only a few scanner manufacturers, mostly serving the DOS/Windows environment. For Apple Macintosh systems, National Instruments provided a NuBus GPIB interface card.
  • Small Computer System Interface (SCSI), rarely used since the early 21st century, supported only by computers with a SCSI interface, either on a card or built-in. During the evolution of the SCSI standard, speeds increased. Widely available and easily set up USB and Firewire largely supplanted SCSI.
  • Universal Serial Bus (USB) scanners can transfer data quickly. The early USB 1.1 standard could transfer data at 1.5 megabytes per second (slower than SCSI), but the later USB 2.0/3.0 standards can transfer at more than 20/60 megabytes per second in practice.
  • FireWire, or IEEE-1394, is an interface of comparable speed to USB 2.0. Possible FireWire speeds are 100, 200, 400, and 800 megabits per second, but devices may not support all speeds.
  • Proprietary interfaces were used on some early scanners that used a proprietary interface card rather than a standard interface.

Indirect (network) connection to a computer

During the early 1990s, professional flatbed scanners became available over local computer networks. This proved useful to publishers, print shops, and the like. The functionality largely fell out of use as flatbed scanners became cheap enough to make sharing unnecessary.

From 2000 all-in-one multi-purpose devices became available which were suitable for both small offices and consumers, with printing, scanning, copying, and fax capability in a single apparatus which can be made available to all members of a workgroup.

Battery-powered portable scanners store scans on internal memory; they can later be transferred to a computer either by direct connection, typically USB, or in some cases a memory card may be removed from the scanner and plugged into the computer.

Applications Programming Interface

A paint application such as GIMP or Adobe Photoshop must communicate with the scanner. Because there are many different scanners speaking many different protocols, application programming interfaces (APIs) were developed to simplify programming. An API presents a uniform interface to the scanner, so the application does not need to know the specific details of the scanner in order to access it. For example, Adobe Photoshop supports the TWAIN standard; in theory, Photoshop can therefore acquire an image from any scanner that has a TWAIN driver.

In practice, there are often problems with an application communicating with a scanner. Either the application or the scanner manufacturer (or both) may have faults in their implementation of the API.

Typically, the API is implemented as a dynamically linked library. Each scanner manufacturer provides software that translates the API procedure calls into primitive commands that are issued to a hardware controller (such as the SCSI, USB, or FireWire controller). The manufacturer's part of the API is commonly called a device driver, but that designation is not strictly accurate: the API does not run in kernel mode and does not directly access the device. Rather the scanner API library translates application requests into hardware requests.

Common scanner software API interfaces:

SANE (Scanner Access Now Easy) is a free/open-source API for accessing scanners. Originally developed for Unix and Linux operating systems, it has been ported to OS/2, Mac OS X, and Microsoft Windows. Unlike TWAIN, SANE does not handle the user interface. This allows batch scans and transparent network access without any special support from the device driver.

TWAIN is used by most scanners. Originally used for low-end and home-use equipment, it is now widely used for large-volume scanning.

ISIS (Image and Scanner Interface Specification), created by Pixel Translations, which still uses SCSI-II for performance reasons, is used by large, departmental-scale machines.

WIA (Windows Image Acquisition) is an API provided by Microsoft for use on Microsoft Windows.

Bundled applications

Although no software beyond a scanning utility is required for a scanner to work, many scanners come bundled with software. Typically, in addition to the scanning utility, some type of image-editing application (such as Adobe Photoshop) and optical character recognition (OCR) software are supplied. OCR software converts graphical images of text into standard text that can be edited using common word-processing and text-editing software; accuracy is rarely perfect.

Output data

Some scanners, especially those designed for scanning printed documents, only work in black-and-white but most modern scanners work in color. For the latter, the scanned result is a non-compressed RGB image, which can be transferred to a computer's memory. The color output of different scanners is not the same due to the spectral response of their sensing elements, the nature of their light source and the correction applied by the scanning software. While most image sensors have a linear response, the output values are usually gamma compressed. Some scanners compress and clean up the image using embedded firmware. Once on the computer, the image can be processed with a raster graphics program (such as Adobe Photoshop or the GIMP) and saved on a storage device (such as a hard disk).

Images are usually stored on a hard disk. Pictures are normally stored in image formats such as uncompressed Bitmap, "non-lossy" (lossless) compressed TIFF and PNG, and "lossy" compressed JPEG. Documents are best stored in TIFF or PDF format; JPEG is particularly unsuitable for text. Optical character recognition (OCR) software allows a scanned image of text to be converted into editable text with reasonable accuracy, so long as the text is cleanly printed and in a typeface and size that can be read by the software. OCR capability may be integrated into the scanning software, or the scanned image file can be processed with a separate OCR program.

Document processing

Document scanner

Document imaging requirements differ from those of image scanning. These requirements include scanning speed, automated paper feed, and the ability to automatically scan both the front and the back of a document. Image scanning, on the other hand, typically requires the ability to handle fragile and/or three-dimensional objects and to scan at a much higher resolution.

Document scanners have document feeders, usually larger than those sometimes found on copiers or all-purpose scanners. Scans are made at high speed, from 20 up to 280[16] or 420[17] pages per minute, often in grayscale, although many scanners support color. Many scanners can scan both sides of double-sided originals (duplex operation). Sophisticated document scanners have firmware or software that cleans up scans of text as they are produced, eliminating accidental marks and sharpening type; this would be unacceptable for photographic work, where marks cannot reliably be distinguished from desired fine detail. Files created are compressed as they are made.

The resolution used is usually from 150 to 300 dpi, although the hardware may be capable of 600[17] or higher resolution; this produces images of text good enough to read and for optical character recognition (OCR), without the higher demands on storage space required by higher-resolution images.
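The storage trade-off is easy to quantify. A rough sketch, using a hypothetical 8.5 × 11 inch page and uncompressed sizes (real files are usually compressed):

```python
# Rough uncompressed sizes for a scanned 8.5 x 11 inch page.
# Illustrative arithmetic only; real scan files are usually compressed.

def page_bytes(dpi, bits_per_pixel):
    """Uncompressed size in bytes of an 8.5 x 11 inch page at a given dpi."""
    width_px = int(8.5 * dpi)
    height_px = int(11 * dpi)
    return width_px * height_px * bits_per_pixel // 8

for dpi in (150, 300, 600):
    gray = page_bytes(dpi, 8)    # 8-bit grayscale
    rgb = page_bytes(dpi, 24)    # 24-bit color
    print(f"{dpi} dpi: {gray / 1e6:.1f} MB gray, {rgb / 1e6:.1f} MB color")
```

Doubling the resolution quadruples the storage, which is why 150–300 dpi is usually enough for readable, OCR-quality text.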

The Ministry of Culture, Sports and Tourism of the Republic of Korea issued an interpretation in June 2011 stating that it is a violation of copyright law for a third party who is neither the copyright holder nor the book owner to scan a book. Therefore, in South Korea, book owners visit a 'scan room' to scan books themselves.

Document scans are often processed using OCR technology to create editable and searchable files. Most scanners use ISIS or TWAIN device drivers to scan documents into TIFF format so that the scanned pages can be fed into a document management system that will handle the archiving and retrieval of the scanned pages. Lossy JPEG compression, which is very efficient for pictures, is undesirable for text documents, as slanted straight edges take on a jagged appearance, and solid black (or other color) text on a light background compresses well with lossless compression formats.
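The compression behaviour described above can be illustrated with a toy experiment: a synthetic "text-like" strip (long white runs with occasional black strokes) deflates dramatically under lossless compression, while noise-like photographic data barely compresses. This is only a sketch using Python's built-in zlib, not what a production scanner does:

```python
import random
import zlib

random.seed(0)
n = 100_000

# "Text-like" pixels: long runs of white (255) with periodic black strokes (0)
text_like = bytes(0 if 40 <= i % 100 < 45 else 255 for i in range(n))
# "Photo-like" pixels: random values, which compress poorly
noise_like = bytes(random.randrange(256) for _ in range(n))

ratio_text = len(zlib.compress(text_like)) / n
ratio_noise = len(zlib.compress(noise_like)) / n
print(f"text-like: {ratio_text:.3f}  noise-like: {ratio_noise:.3f}")
```

The text-like data compresses to a small fraction of its original size, while the noise-like data stays near 100%, which is why lossless formats suit documents and lossy JPEG suits photographs.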

While paper feeding and scanning can be done automatically and quickly, preparation and indexing are necessary and require much work by humans. Preparation involves manually inspecting the papers to be scanned and making sure that they are in order, unfolded, without staples or anything else that might jam the scanner. Additionally, some industries such as legal and medical may require documents to have Bates Numbering or some other mark giving a document identification number and date/time of the document scan.

Indexing involves associating relevant keywords to files so that they can be retrieved by content. This process can sometimes be automated to some extent, but it often requires manual labour performed by data-entry clerks. One common practice is the use of barcode-recognition technology: during preparation, barcode sheets with folder names or index information are inserted into the document files, folders, and document groups. Using automatic batch scanning, the documents are saved into appropriate folders, and an index is created for integration into document-management systems.

A specialized form of document scanning is book scanning. Technical difficulties arise from the books usually being bound and sometimes fragile and irreplaceable, but some manufacturers have developed specialized machinery to deal with this. Often special robotic mechanisms are used to automate the page turning and scanning process.

Document camera scanners

sceyeX document camera.

Another category of document scanner is the document camera. Capturing images with a document camera differs from flatbed and automatic document feeder (ADF) scanning in that no moving parts are required to scan the object. Conventionally, either the illumination/reflector rod inside the scanner must be moved over the document (as in a flatbed scanner), or the document must be passed over the rod (as in feeder scanners) in order to produce a scan of the whole image. Document cameras capture the whole document or object in one step, usually instantly. Typically, documents are placed on a flat surface, usually the office desk, underneath the capture area of the document camera. Capturing the whole surface at once has the benefit of a faster reaction time in the scanning workflow. After being captured, the images are usually processed through software which may enhance the image and perform tasks such as automatically rotating, cropping and straightening them.[18]

The documents or objects being scanned do not need to make contact with the document camera, which increases the flexibility of the types of documents that can be scanned. Objects which have previously been difficult to scan on conventional scanners can now be handled with one device. This includes in particular documents of varying sizes and shapes that are stapled, in folders, or bent/crumpled, and that might jam a feed scanner. Other objects include books, magazines, receipts, letters, tickets, etc. The absence of moving parts can also remove the need for maintenance, a consideration in the total cost of ownership, which includes the continuing operational costs of scanners.

Increased reaction time whilst scanning also has benefits in the realm of context-scanning. ADF scanners, whilst very fast and very good at batch scanning, also require pre- and post- processing of the documents. Document cameras can be integrated directly into a Workflow or process, for example a teller at a bank. The document is scanned directly in the context of the customer, in which it is to be placed or used. Reaction time is an advantage in these situations. Document cameras usually also require a small amount of space and are often portable.[19]

Whilst scanning with document cameras may have a quick reaction time, large batches of even, unstapled documents are scanned more efficiently with an ADF scanner. There are also challenges facing this kind of technology regarding external factors (such as lighting) which may influence the scan results. How well these issues are resolved depends strongly on the sophistication of the product.

Infrared cleaning

Infrared cleaning is a technique used to remove the effects of dust and scratches on images scanned from film; many modern scanners incorporate this feature. It works by scanning the film with infrared light; the dyes in typical color film emulsions are transparent to infrared light, but dust and scratches are not, and block infrared; scanner software can use the visible and infrared information to detect scratches and process the image to greatly reduce their visibility, considering their position, size, shape, and surroundings.
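A minimal sketch of the idea in code, assuming grayscale arrays and a global-median fill (real scanner software works per colour channel and interpolates from the local neighbourhood; the threshold here is a placeholder):

```python
import numpy as np

def infrared_clean(visible, infrared, threshold=128):
    """Mask out dust and scratches flagged by the infrared channel.

    visible:  2D grayscale array of the visible-light scan
    infrared: 2D array of the IR scan; film dyes pass infrared,
              so low IR values mark dust or scratches blocking light.
    Flagged pixels are filled with the median of the defect-free area;
    real scanner software interpolates from nearby pixels instead.
    """
    defect = infrared < threshold
    cleaned = visible.copy()
    cleaned[defect] = np.median(visible[~defect])
    return cleaned, defect
```

The key point is only that the IR channel supplies the defect mask; how the masked pixels are filled in is where commercial implementations differ.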

Scanner manufacturers usually have their own name attached to this technique. For example, Epson, Minolta, Nikon, Konica Minolta, Microtek, and others use Digital ICE, while Canon uses its own system FARE (Film Automatic Retouching and Enhancement system).[20] Plustek uses LaserSoft Imaging iSRD. Some independent software developers design infrared cleaning tools.

Other uses

Flatbed scanners have been used as digital backs for large-format cameras to create high-resolution digital images of static subjects.[21] A modified flatbed scanner has been used for documentation and quantification of thin layer chromatograms detected by fluorescence quenching on silica gel layers containing an ultraviolet (UV) indicator.[22] 'ChromImage' is allegedly the first commercial flatbed scanner densitometer. It enables acquisition of TLC plate images and quantification of chromatograms by use of Galaxie-TLC software.[23] Other than being turned into densitometers, flatbed scanners were also turned into colorimeters using different methods.[24] Trichromatic Color Analyser is allegedly the first distributable system using a flatbed scanner as a tristimulus colorimetric device.

See also

  • Barcode reader
  • Book scanning
  • Cintel telecine equipment
  • Display resolution
  • Gamma correction
  • Imaging
  • Telecine

References

  1. ^ Meierhold, N., Spehr, M., Schilling, A., Gumhold, S. and Maas, H.G. (2010). Automatic feature matching between digital images and 2D representations of a 3D laser scanner point cloud, Proceedings of the ISPRS Commission V Mid-Term Symposium Close Range Image Measurement Techniques, Newcastle upon Tyne, UK, 2010, pp. 446–451.
  2. ^ "Scan to a Flash Drive or Memory Card From a PIXMA MP560". support.usa.canon.com. Canon Knowledge Base. Retrieved 22 April 2022.
  3. ^ "NIST Tech Beat - May 24, 2007". nist.gov. Archived from the original on July 28, 2016.
  4. ^ a b Pushkar O.I., (2011), Information systems and technologies. Summary of lectures. /O.I. Pushkar, K.S. Sibilyev. – Kharkiv: Publishing House of KhNUE, p.38
  5. ^ Sachs, J. (2001-02-01). "Scanners and how to use them" (PDF). Digital Light & Color. Retrieved 2015-11-08.
  6. ^ a b Sachs, J. (2001-02-01). "Digital Image Basics" (PDF). Digital Light & Color. Archived from the original (PDF) on 2015-11-20. Retrieved 2015-11-19.
  7. ^ a b Pushkar O.I., (2011), Information systems and technologies. Summary of lectures. /O.I. Pushkar, K.S. Sibilyev. – Kharkiv: Publishing House of KhNUE, p.39
  8. ^ a b c Dubey, N.B. (2009), Office Management: Developing Skills for Smooth Functioning, Global India Publications, 312 pp.
  9. ^ Weitz, A. (2015-11-06). "Film Scanners: A Buying Guide". Explora - B&H Photo Video. Retrieved 2015-11-19.
  10. ^ "Scan Anything and Let Your Phone Do the Rest". MIT Technology Review.
  11. ^ "Epson Perfection V750-M Pro Scanner". epson.com. Archived from the original on 2015-09-24.
  12. ^ a b "Test report film-flatbed-scanner Epson Perfection V750 Pro transparency unit: experiences report; image quality, scanning slides, medium formats". filmscanner.info.
  13. ^ a b c d e "Density Range, Maximum Density, Image Quality Criterion Scanner Explanation, Signification Object Contrast Aperture Stop". filmscanner.info.
  14. ^ "Filmscanner-Rangliste Diascanner-Vergleich: Scanner-Tests mit Leistungsdaten, Vorteile, Nachteile, Technischen Daten". filmscanner.info.
  15. ^ Labs, The Grizzly. "What is the DPI of my scans? - The Grizzly Labs". help.thegrizzlylabs.com. Retrieved 2017-12-08.
  16. ^ "KV-S8147-CV High Volume Production Scanner 140 ppm / 280 ipm with PremierOCR / PremierCOMPRESSION Software Bundle". business.panasonic.com. Retrieved 2017-09-24.
  17. ^ a b Quayle, Mike. "i5850 Scanner information and accessories - Kodak Alaris Information Management". www.alarisworld.com. Retrieved 2017-09-24.
  18. ^ "sceye® - an innovative document scanner for the professional desktop". Kodak. Archived from the original on 18 May 2013. Retrieved 6 March 2013.
  19. ^ "Why should you choose sceye?". SilverCreations Ag. Retrieved 1 March 2013.
  20. ^ "Film Automatic Retouching and Enhancement". Canon. Archived from the original on 2010-10-23. Retrieved 2007-05-02.
  21. ^ [1][2] The Scanner Photography Project
  22. ^ Campbell, A., Chejlava, M.J and Sherma, J. (2003), Use of a Modified Flatbed Scanner for Documentation and Quantification of Thin Layer Chromatograms Detected by Fluorescence Quenching, Journal of Planar Chromatography, 16, 244
  23. ^ "Chromimage". AR2I. 2013-10-20. Retrieved 2015-11-03.
  24. ^ Joyce Farrell, Doron Sherman, Brian W. (1994). How to turn your scanner into a colorimeter, Proc. of IS&T 10th Int. Congress on Adv. in Non-Impact Printing Technol, pp579-581.

  • Scanner at Curlie
  • Photocopy, open-source software to apply a photocopier effect to scanned images.
  • "Is Drum Scanning Really Alive and Well?" from Digital Output by Jim Rich
  • "Can a Fine-Art Large-Format Photographer Find Happiness With a $30,000 Scanner?" by Bill Glickman

Scanning of an object or environment to collect data on its shape

Making a 3D-model of a Viking belt buckle using a hand held VIUscan 3D laser scanner.

3D scanning is the process of analyzing a real-world object or environment to collect data on its shape and possibly its appearance (e.g. color). The collected data can then be used to construct digital 3D models.

A 3D scanner can be based on many different technologies, each with its own limitations, advantages and costs. Many limitations remain in the kinds of objects that can be digitised; for example, optical technologies may encounter difficulties with dark, shiny, reflective or transparent objects. However, industrial computed tomography scanning, structured-light 3D scanners, LiDAR and time-of-flight 3D scanners can be used to construct digital 3D models without destructive testing.

Collected 3D data is useful for a wide variety of applications. These devices are used extensively by the entertainment industry in the production of movies and video games, including virtual reality. Other common applications of this technology include augmented reality,[1] motion capture,[2][3] gesture recognition,[4] robotic mapping,[5] industrial design, orthotics and prosthetics,[6] reverse engineering and prototyping, quality control/inspection and the digitization of cultural artifacts.[7]

Functionality

The purpose of a 3D scanner is usually to create a 3D model. This 3D model consists of a polygon mesh or point cloud of geometric samples on the surface of the subject. These points can then be used to extrapolate the shape of the subject (a process called reconstruction). If colour information is collected at each point, then the colours or textures on the surface of the subject can also be determined.

3D scanners share several traits with cameras. Like most cameras, they have a cone-like field of view, and like cameras, they can only collect information about surfaces that are not obscured. While a camera collects colour information about surfaces within its field of view, a 3D scanner collects distance information about surfaces within its field of view. The "picture" produced by a 3D scanner describes the distance to a surface at each point in the picture. This allows the three dimensional position of each point in the picture to be identified.
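The conversion from a range-image sample to a 3D point can be sketched as follows, assuming a simple pinhole camera with a known field of view (the resolution and FOV defaults here are made up; real scanners use a fully calibrated model):

```python
import math

def backproject(row, col, distance, fov_deg=60.0, width=640, height=480):
    """Turn one range-image sample into a 3D point (camera looks along +Z).

    (row, col) is the pixel position and distance the measured range
    along the viewing ray.  Assumes a pinhole camera with the given
    horizontal field of view; the defaults are illustrative only.
    """
    # Focal length in pixels, derived from the horizontal field of view
    f = (width / 2) / math.tan(math.radians(fov_deg) / 2)
    # Viewing-ray direction for this pixel
    x = (col - width / 2) / f
    y = (row - height / 2) / f
    norm = math.sqrt(x * x + y * y + 1.0)
    # Scale the unit ray by the measured distance
    return (distance * x / norm, distance * y / norm, distance / norm)
```

For the centre pixel the ray points straight ahead, so a 2 m range maps to the point (0, 0, 2).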

In some situations, a single scan will not produce a complete model of the subject. Multiple scans from different directions are usually needed to obtain information about all sides of the subject. These scans have to be brought into a common reference system, a process that is usually called alignment or registration, and then merged to create a complete 3D model. This whole process, going from the single range map to the whole model, is usually known as the 3D scanning pipeline.[8][9][10][11][12]
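The registration step boils down to applying a rigid transform (rotation plus translation) to each scan so that all scans share one reference frame. A toy sketch that assumes the transform is already known (real pipelines estimate it first, e.g. with the ICP algorithm):

```python
import math

def register(points, angle_z, translation):
    """Apply a known rigid transform: rotation about Z, then translation.

    points:      list of (x, y, z) samples from one scan
    angle_z:     rotation angle about the Z axis, in radians
    translation: (tx, ty, tz) offset into the common reference frame
    A stand-in for the alignment step; real pipelines must *estimate*
    this transform before applying it.
    """
    c, s = math.cos(angle_z), math.sin(angle_z)
    tx, ty, tz = translation
    return [(c * x - s * y + tx, s * x + c * y + ty, z + tz)
            for x, y, z in points]
```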

Technology

There are a variety of technologies for digitally acquiring the shape of a 3D object. The techniques work with most or all sensor types including optical, acoustic, laser scanning,[13] radar, thermal,[14] and seismic.[15][16] A well established classification[17] divides them into two types: contact and non-contact. Non-contact solutions can be further divided into two main categories, active and passive. There are a variety of technologies that fall under each of these categories.

Contact

A Coordinate Measuring Machine (CMM) with scanning head.

3D scanning of a fin whale skeleton in the Natural History Museum of Slovenia (August 2013)

Contact 3D scanners work by physically probing (touching) the part and recording the position of the sensor as the probe moves around the part.

There are two main types of contact 3D scanners:

  • Coordinate measuring machines (CMMs), which traditionally have three perpendicular moving axes with a touch probe mounted on the Z axis. As the touch probe moves around the part, sensors on each axis record the position to generate XYZ coordinates. Modern CMMs are 5-axis systems, with the two extra axes provided by pivoting sensor heads. CMMs are the most accurate form of 3D measurement, achieving micron precision. After accuracy, the greatest advantage of a CMM is that it can be run in autonomous (CNC) mode or used as a manual probing system. The disadvantages of CMMs are their upfront cost and the technical knowledge required to operate them.
  • Articulated arms, which generally have multiple segments with polar sensors on each joint. As with a CMM, sensors record their positions as the articulated arm moves around the part, and the location of the end of the arm is calculated from the wrist rotation angle and the hinge angle of each joint. While not usually as accurate as CMMs, articulated arms still achieve high accuracy and are cheaper and slightly easier to use. They do not usually have CNC options.

Both modern CMMs and Articulated Arms can also be fitted with non-contact laser scanners instead of touch probes.

Non-contact active

Active scanners emit some kind of radiation or light and detect its reflection, or radiation passing through the object, in order to probe an object or environment. Possible types of emissions used include light, ultrasound and X-rays.

Time-of-flight

This lidar scanner may be used to scan buildings, rock formations, etc., to produce a 3D model. The lidar can aim its laser beam in a wide range: its head rotates horizontally, a mirror flips vertically. The laser beam is used to measure the distance to the first object on its path.

The time-of-flight 3D laser scanner is an active scanner that uses laser light to probe the subject. At the heart of this type of scanner is a time-of-flight laser range finder. The laser range finder finds the distance of a surface by timing the round-trip time of a pulse of light. A laser is used to emit a pulse of light and the amount of time before the reflected light is seen by a detector is measured. Since the speed of light c is known, the round-trip time determines the travel distance of the light, which is twice the distance between the scanner and the surface. If t is the round-trip time, then the distance is equal to c·t/2. The accuracy of a time-of-flight 3D laser scanner depends on how precisely the time t can be measured: approximately 3.3 picoseconds is the time taken for light to travel 1 millimetre.
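The distance formula can be written directly in code; the numbers below are illustrative:

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_time):
    """Scanner-to-surface distance from the round-trip time: d = c*t/2."""
    return C * round_trip_time / 2

# A surface 100 m away returns the pulse after about 667 nanoseconds:
t_100m = 2 * 100 / C
# Light needs about 3.34 picoseconds to travel one millimetre, so
# millimetre accuracy demands picosecond-scale timing:
t_mm = 1e-3 / C
```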

The laser range finder only detects the distance of one point in its direction of view. Thus, the scanner scans its entire field of view one point at a time by changing the range finder's direction of view to scan different points. The view direction of the laser range finder can be changed either by rotating the range finder itself, or by using a system of rotating mirrors. The latter method is commonly used because mirrors are much lighter and can thus be rotated much faster and with greater accuracy. Typical time-of-flight 3D laser scanners can measure the distance of 10,000~100,000 points every second.

Time-of-flight devices are also available in a 2D configuration. This is referred to as a time-of-flight camera.[18]

Triangulation

Principle of a laser triangulation sensor. Two object positions are shown.

Triangulation-based 3D laser scanners are also active scanners that use laser light to probe the environment. In contrast to a time-of-flight 3D laser scanner, a triangulation laser scanner shines a laser on the subject and uses a camera to look for the location of the laser dot. Depending on how far away the laser strikes a surface, the laser dot appears at different places in the camera's field of view. This technique is called triangulation because the laser dot, the camera and the laser emitter form a triangle. The length of one side of the triangle, the distance between the camera and the laser emitter, is known. The angle of the laser emitter corner is also known. The angle of the camera corner can be determined by looking at the location of the laser dot in the camera's field of view. These three pieces of information fully determine the shape and size of the triangle and give the location of the laser dot corner of the triangle.[19] In most cases a laser stripe, instead of a single laser dot, is swept across the object to speed up the acquisition process. The National Research Council of Canada was among the first institutes to develop triangulation-based laser scanning technology, in 1978.[20]
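The triangle geometry reduces to the law of sines on the emitter-camera-dot triangle. A minimal sketch (angles in radians; a real scanner calibrates these from its optics):

```python
import math

def laser_dot_distance(baseline, laser_angle, camera_angle):
    """Distance from the laser emitter to the laser dot (law of sines).

    baseline:     known camera-to-emitter distance
    laser_angle:  angle at the emitter corner, in radians
    camera_angle: angle at the camera corner, in radians, derived from
                  where the dot lands in the camera's field of view
    """
    dot_angle = math.pi - laser_angle - camera_angle  # angles sum to pi
    return baseline * math.sin(camera_angle) / math.sin(dot_angle)
```

For example, with a 10 cm baseline and both known angles at 60 degrees the triangle is equilateral, so the dot is 10 cm from the emitter.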

Strengths and weaknesses

Time-of-flight and triangulation range finders each have strengths and weaknesses that make them suitable for different situations. The advantage of time-of-flight range finders is that they are capable of operating over very long distances, on the order of kilometres. These scanners are thus suitable for scanning large structures like buildings or geographic features. The disadvantage of time-of-flight range finders is their accuracy. Due to the high speed of light, timing the round-trip time is difficult and the accuracy of the distance measurement is relatively low, on the order of millimetres.

Triangulation range finders are exactly the opposite. They have a limited range of some meters, but their accuracy is relatively high. The accuracy of triangulation range finders is on the order of tens of micrometers.

Time-of-flight scanners' accuracy can be lost when the laser hits the edge of an object because the information that is sent back to the scanner is from two different locations for one laser pulse. The coordinate relative to the scanner's position for a point that has hit the edge of an object will be calculated based on an average and therefore will put the point in the wrong place. When using a high resolution scan on an object the chances of the beam hitting an edge are increased and the resulting data will show noise just behind the edges of the object. Scanners with a smaller beam width will help to solve this problem but will be limited by range as the beam width will increase over distance. Software can also help by determining that the first object to be hit by the laser beam should cancel out the second.

At a rate of 10,000 sample points per second, low resolution scans can take less than a second, but high resolution scans, requiring millions of samples, can take minutes for some time-of-flight scanners. The problem this creates is distortion from motion. Since each point is sampled at a different time, any motion in the subject or the scanner will distort the collected data. Thus, it is usually necessary to mount both the subject and the scanner on stable platforms and minimise vibration. Using these scanners to scan objects in motion is very difficult.

Recently, there has been research on compensating for distortion from small amounts of vibration[21] and distortions due to motion and/or rotation.[22]

Short-range laser scanners usually cannot cover a depth of field of more than 1 meter.[23] When scanning in one position for any length of time, slight movement can occur in the scanner position due to changes in temperature. If the scanner is set on a tripod and there is strong sunlight on one side of the scanner, that side of the tripod will expand and slowly distort the scan data from one side to the other. Some laser scanners have level compensators built in to counteract any movement of the scanner during the scan process.

Conoscopic holography

In a conoscopic system, a laser beam is projected onto the surface and the immediate reflection along the same ray path is then passed through a conoscopic crystal and projected onto a CCD. The result is a diffraction pattern that can be frequency-analyzed to determine the distance to the measured surface. The main advantage of conoscopic holography is that only a single ray path is needed for measuring, giving the opportunity to measure, for instance, the depth of a finely drilled hole.[24]

Hand-held laser scanners

Hand-held laser scanners create a 3D image through the triangulation mechanism described above: a laser dot or line is projected onto an object from a hand-held device and a sensor (typically a charge-coupled device or position sensitive device) measures the distance to the surface. Data is collected in relation to an internal coordinate system and therefore to collect data where the scanner is in motion the position of the scanner must be determined. The position can be determined by the scanner using reference features on the surface being scanned (typically adhesive reflective tabs, but natural features have been also used in research work)[25][26] or by using an external tracking method. External tracking often takes the form of a laser tracker (to provide the sensor position) with integrated camera (to determine the orientation of the scanner) or a photogrammetric solution using 3 or more cameras providing the complete six degrees of freedom of the scanner. Both techniques tend to use infrared light-emitting diodes attached to the scanner which are seen by the camera(s) through filters providing resilience to ambient lighting.[27]

Data is collected by a computer and recorded as data points within three-dimensional space; with processing, these can be converted into a triangulated mesh and then a computer-aided design model, often as non-uniform rational B-spline surfaces. Hand-held laser scanners can combine this data with passive, visible-light sensors — which capture surface textures and colors — to build (or "reverse engineer") a full 3D model.

Structured light

Structured-light 3D scanners project a pattern of light on the subject and look at the deformation of the pattern on the subject. The pattern is projected onto the subject using either an LCD projector or other stable light source. A camera, offset slightly from the pattern projector, looks at the shape of the pattern and calculates the distance of every point in the field of view.

Structured-light scanning is still a very active area of research, with many research papers published each year. Perfect maps have also been proven useful as structured light patterns that solve the correspondence problem and allow for error detection and error correction (see Morano, R., et al., "Structured Light Using Pseudorandom Codes", IEEE Transactions on Pattern Analysis and Machine Intelligence).[24]

The advantage of structured-light 3D scanners is speed and precision. Instead of scanning one point at a time, structured light scanners scan multiple points or the entire field of view at once. Scanning an entire field of view in a fraction of a second reduces or eliminates the problem of distortion from motion. Some existing systems are capable of scanning moving objects in real-time.

A real-time scanner using digital fringe projection and phase-shifting technique (certain kinds of structured light methods) was developed, to capture, reconstruct, and render high-density details of dynamically deformable objects (such as facial expressions) at 40 frames per second.[28] Recently, another scanner has been developed. Different patterns can be applied to this system, and the frame rate for capturing and data processing achieves 120 frames per second. It can also scan isolated surfaces, for example two moving hands.[29] By utilising the binary defocusing technique, speed breakthroughs have been made that could reach hundreds [30] to thousands of frames per second.[31]

Modulated light

Modulated light 3D scanners shine a continually changing light at the subject. Usually the light source simply cycles its amplitude in a sinusoidal pattern. A camera detects the reflected light and the amount the pattern is shifted by determines the distance the light travelled. Modulated light also allows the scanner to ignore light from sources other than a laser, so there is no interference.
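For sinusoidal amplitude modulation, the measured phase shift maps to distance as d = c·Δφ/(4πf), unambiguous only up to c/(2f). A brief sketch with an illustrative modulation frequency:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def phase_distance(phase_shift, mod_freq):
    """Distance from the phase shift of amplitude-modulated light.

    phase_shift: measured shift in radians (0 .. 2*pi)
    mod_freq:    modulation frequency in Hz
    d = c * phase / (4 * pi * f); readings wrap around every c / (2 * f),
    so real systems combine several modulation frequencies.
    """
    return C * phase_shift / (4 * math.pi * mod_freq)
```

At a 10 MHz modulation frequency a half-cycle phase shift (π radians) corresponds to roughly 7.5 m, and the unambiguous range is about 15 m.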

Volumetric techniques

Medical

Computed tomography (CT) is a medical imaging method which generates a three-dimensional image of the inside of an object from a large series of two-dimensional X-ray images. Similarly, magnetic resonance imaging (MRI) is a medical imaging technique that provides much greater contrast between the different soft tissues of the body than CT does, making it especially useful in neurological (brain), musculoskeletal, cardiovascular, and oncological (cancer) imaging. These techniques produce a discrete 3D volumetric representation that can be directly visualised, manipulated or converted to a traditional 3D surface by means of isosurface extraction algorithms.

Industrial

Although most common in medicine, industrial computed tomography, microtomography and MRI are also used in other fields for acquiring a digital representation of an object and its interior, such as non destructive materials testing, reverse engineering, or studying biological and paleontological specimens.

Non-contact passive

Passive 3D imaging solutions do not emit any kind of radiation themselves, but instead rely on detecting reflected ambient radiation. Most solutions of this type detect visible light because it is readily available ambient radiation. Other types of radiation, such as infrared, could also be used. Passive methods can be very cheap, because in most cases they do not need particular hardware beyond simple digital cameras.

  • Stereoscopic systems usually employ two video cameras, slightly apart, looking at the same scene. By analysing the slight differences between the images seen by each camera, it is possible to determine the distance at each point in the images. This method is based on the same principles driving human stereoscopic vision[1].
  • Photometric systems usually use a single camera, but take multiple images under varying lighting conditions. These techniques attempt to invert the image formation model in order to recover the surface orientation at each pixel.
  • Silhouette techniques use outlines created from a sequence of photographs around a three-dimensional object against a well contrasted background. These silhouettes are extruded and intersected to form the visual hull approximation of the object. With these approaches some concavities of an object (like the interior of a bowl) cannot be detected.
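For the stereoscopic case, once the same point has been matched in both images, depth follows from the disparity: Z = f·B/d. A minimal sketch with illustrative numbers:

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Depth of a matched point from a stereo pair: Z = f * B / d.

    disparity_px: horizontal pixel shift of the point between the views
    focal_px:     focal length expressed in pixels
    baseline_m:   separation between the two cameras in metres
    """
    return focal_px * baseline_m / disparity_px
```

For example, a 10-pixel disparity with a 500-pixel focal length and a 10 cm baseline puts the point 5 m away; smaller disparities mean greater depth, which is why stereo accuracy falls off with distance.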

Photogrammetric non-contact passive methods

Images taken from multiple perspectives such as a fixed camera array can be taken of a subject for a photogrammetric reconstruction pipeline to generate a 3D mesh or point cloud.

Photogrammetry provides reliable information about 3D shapes of physical objects based on analysis of photographic images. The resulting 3D data is typically provided as a 3D point cloud, 3D mesh or 3D points.[32] Modern photogrammetry software applications automatically analyze a large number of digital images for 3D reconstruction, however manual interaction may be required if the software cannot automatically determine the 3D positions of the camera in the images which is an essential step in the reconstruction pipeline. Various software packages are available including PhotoModeler, Geodetic Systems, Autodesk ReCap, RealityCapture and Agisoft Metashape (see comparison of photogrammetry software).

  • Close range photogrammetry typically uses a handheld camera such as a DSLR with a fixed focal length lens to capture images of objects for 3D reconstruction.[33] Subjects include smaller objects such as a building facade, vehicles, sculptures, rocks, and shoes.
  • Camera Arrays can be used to generate 3D point clouds or meshes of live objects such as people or pets by synchronizing multiple cameras to photograph a subject from multiple perspectives at the same time for 3D object reconstruction.[34]
  • Wide angle photogrammetry can be used to capture the interior of buildings or enclosed spaces using a wide angle lens camera such as a 360 camera.
  • Aerial photogrammetry uses aerial images acquired by satellite, commercial aircraft or UAV drone to collect images of buildings, structures and terrain for 3D reconstruction into a point cloud or mesh.
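At the heart of every photogrammetric pipeline above is triangulation: recovering a 3D point from its matched projections in two or more calibrated views. A minimal NumPy sketch of the standard direct linear transform (the camera matrices and point below are invented toy values, not the output of any particular package):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from its projections in two calibrated views
    via the direct linear transform (DLT).

    P1, P2 : 3x4 camera projection matrices
    x1, x2 : (u, v) image coordinates of the same feature in each view
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)       # null vector of A is the point
    X = Vt[-1]
    return X[:3] / X[3]               # homogeneous -> Euclidean

# Two toy cameras: one at the origin, one shifted 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))    # recovers [0.5, 0.2, 4.0]
```

With noisy real images, many such points are triangulated from feature matches and then refined jointly by bundle adjustment; this sketch shows only the noiseless two-view core.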

Acquisition from acquired sensor data

Semi-automatic building extraction from lidar data and high-resolution images is also a possibility. Again, this approach allows modelling without physically moving towards the location or object.[35] From airborne lidar data, a digital surface model (DSM) can be generated, and the objects higher than the ground are then automatically detected from the DSM. Based on general knowledge about buildings, geometric characteristics such as size, height and shape information are then used to separate the buildings from other objects. The extracted building outlines are then simplified using an orthogonal algorithm to obtain better cartographic quality. Watershed analysis can be conducted to extract the ridgelines of building roofs. The ridgelines as well as slope information are used to classify the buildings per type. The buildings are then reconstructed using three parametric building models (flat, gabled, hipped).[36]
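The DSM-based detection step described above can be sketched in a few lines: subtract the bare-earth terrain model from the surface model and threshold the result, then apply a simple size check. The grid, heights and thresholds below are invented toy values:

```python
import numpy as np

# Toy 8x8 digital surface model (metres): flat ground at 100 m
# elevation with one 6 m tall "building" block (invented values).
dsm = np.full((8, 8), 100.0)
dsm[2:5, 3:7] += 6.0                  # building roof heights
dtm = np.full_like(dsm, 100.0)        # bare-earth terrain model

height_above_ground = dsm - dtm
mask = height_above_ground > 2.5      # objects higher than the ground

# Geometric check from the text (size): discard blobs that are too
# small to be buildings, here simply by total pixel count.
min_building_pixels = 6
building_detected = int(mask.sum()) >= min_building_pixels
print("candidate pixels:", int(mask.sum()))   # 12
print("building detected:", building_detected)
```

A real pipeline would additionally label connected components, filter by shape, and fit the parametric roof models the text mentions; the threshold-and-size test is only the first stage.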

Acquisition from on-site sensors

Lidar and other terrestrial laser scanning technology[37] offer the fastest, automated way to collect height or distance information. The use of lidar for height measurement of buildings is becoming very promising.[38] Commercial applications of both airborne lidar and ground laser scanning technology have proven to be fast and accurate methods for building height extraction. The building extraction task is needed to determine building locations, ground elevation, orientations, building size, rooftop heights, etc. Most buildings are described in sufficient detail in terms of general polyhedra, i.e., their boundaries can be represented by a set of planar surfaces and straight lines. Further processing, such as expressing building footprints as polygons, is used for storing the data in GIS databases.

Using laser scans and images taken from ground level and a bird's-eye perspective, Fruh and Zakhor present an approach to automatically create textured 3D city models. This approach involves registering and merging the detailed facade models with a complementary airborne model. The airborne modeling process generates a half-meter resolution model with a bird's-eye view of the entire area, containing terrain profile and building tops. Ground-based modeling process results in a detailed model of the building facades. Using the DSM obtained from airborne laser scans, they localize the acquisition vehicle and register the ground-based facades to the airborne model by means of Monte Carlo localization (MCL). Finally, the two models are merged with different resolutions to obtain a 3D model.

Using an airborne laser altimeter, Haala, Brenner and Anders combined height data with the existing ground plans of buildings. The ground plans of buildings had already been acquired either in analog form from maps and plans or digitally in a 2D GIS. The project was done in order to enable automatic data capture through the integration of these different types of information. Virtual reality city models were then generated in the project by texture processing, e.g. by mapping of terrestrial images. The project demonstrated the feasibility of rapid acquisition of 3D urban GIS. Ground plans are another very important source of information for 3D building reconstruction. Compared to the results of automatic procedures, these ground plans proved more reliable, since they contain aggregated information which has been made explicit by human interpretation. For this reason, ground plans can considerably reduce costs in a reconstruction project. An example of existing ground plan data usable in building reconstruction is the Digital Cadastral map, which provides information on the distribution of property, including the borders of all agricultural areas and the ground plans of existing buildings. Additional information, such as street names and the usage of buildings (e.g. garage, residential building, office block, industrial building, church), is provided in the form of text symbols. At the moment, the Digital Cadastral map is built up as a database covering an area, mainly composed by digitizing preexisting maps or plans.

Cost

  • Terrestrial laser scan devices (pulse or phase devices) + processing software generally start at a price of €150,000. Some less precise devices (as the Trimble VX) cost around €75,000.
  • Terrestrial lidar systems cost around €300,000.
  • Systems using regular still cameras mounted on RC helicopters (Photogrammetry) are also possible, and cost around €25,000. Systems that use still cameras with balloons are even cheaper (around €2,500), but require additional manual processing. As the manual processing takes around 1 month of labor for every day of taking pictures, this is still an expensive solution in the long run.
  • Obtaining satellite images is also an expensive endeavor. High-resolution stereo images (0.5 m resolution) cost around €11,000. Imaging satellites include QuickBird and Ikonos. High-resolution monoscopic images cost around €5,500. Somewhat lower-resolution images (e.g. from the CORONA satellite, with a 2 m resolution) cost around €1,000 per pair of images. Note that Google Earth images are too low in resolution to make an accurate 3D model.[39]

Reconstruction

From point clouds

The point clouds produced by 3D scanners and 3D imaging can be used directly for measurement and visualisation in the architecture and construction world.

From models

Most applications, however, instead use polygonal 3D models, NURBS surface models, or editable feature-based CAD models (aka solid models).

  • Polygon mesh models: In a polygonal representation of a shape, a curved surface is modeled as many small faceted flat surfaces (think of a sphere modeled as a disco ball). Polygon models, also called mesh models, are useful for visualisation and for some CAM (i.e., machining), but are generally "heavy" (i.e., very large data sets) and relatively un-editable in this form. Reconstruction to a polygonal model involves finding and connecting adjacent points with straight lines in order to create a continuous surface. Many applications, both free and nonfree, are available for this purpose (e.g. GigaMesh, MeshLab, PointCab, kubit PointCloud for AutoCAD, Reconstructor, imagemodel, PolyWorks, Rapidform, Geomagic, Imageware, Rhino 3D, etc.).
  • Surface models: The next level of sophistication in modeling involves using a quilt of curved surface patches to model the shape. These might be NURBS, TSplines or other curved representations of curved topology. Using NURBS, the spherical shape becomes a true mathematical sphere. Some applications offer patch layout by hand but the best in class offer both automated patch layout and manual layout. These patches have the advantage of being lighter and more manipulable when exported to CAD. Surface models are somewhat editable, but only in a sculptural sense of pushing and pulling to deform the surface. This representation lends itself well to modelling organic and artistic shapes. Providers of surface modellers include Rapidform, Geomagic, Rhino 3D, Maya, T Splines etc.
  • Solid CAD models: From an engineering/manufacturing perspective, the ultimate representation of a digitised shape is the editable, parametric CAD model. In CAD, the sphere is described by parametric features which are easily edited by changing a value (e.g., centre point and radius).
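The contrast drawn above between a faceted mesh and a true mathematical sphere can be made concrete by tessellating a unit sphere and summing the flat facet areas; the finer the "disco ball", the closer the mesh area gets to the exact 4πr². A toy NumPy sketch, not tied to any scanning package:

```python
import numpy as np

def mesh_sphere_area(radius, n_lat, n_lon):
    """Surface area of a sphere approximated by a lat/long triangle
    mesh (the 'disco ball'): the sum of the areas of the flat facets."""
    theta = np.linspace(0.0, np.pi, n_lat + 1)        # polar angle
    phi = np.linspace(0.0, 2.0 * np.pi, n_lon + 1)    # azimuth
    t, p = np.meshgrid(theta, phi, indexing="ij")
    pts = radius * np.stack([np.sin(t) * np.cos(p),
                             np.sin(t) * np.sin(p),
                             np.cos(t)], axis=-1)
    area = 0.0
    for i in range(n_lat):
        for j in range(n_lon):
            a, b = pts[i, j], pts[i, j + 1]
            c, d = pts[i + 1, j + 1], pts[i + 1, j]
            # split each quad patch into two flat triangles
            area += 0.5 * np.linalg.norm(np.cross(b - a, d - a))
            area += 0.5 * np.linalg.norm(np.cross(b - c, d - c))
    return area

exact = 4.0 * np.pi                   # the true mathematical sphere, r = 1
for n in (8, 32, 128):
    approx = mesh_sphere_area(1.0, n, 2 * n)
    print(f"{n:4d} bands: mesh area {approx:.4f} vs exact {exact:.4f}")
```

The inscribed mesh always underestimates the true area and only converges as the facet count (and data size) grows, which is exactly why mesh models are "heavy" while the parametric sphere is described by a centre and a radius alone.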

These CAD models describe not simply the envelope or shape of the object; they also embody the "design intent" (i.e., critical features and their relationship to other features). An example of design intent not evident in the shape alone might be a brake drum's lug bolts, which must be concentric with the hole in the centre of the drum. This knowledge would drive the sequence and method of creating the CAD model; a designer with an awareness of this relationship would not design the lug bolts referenced to the outside diameter, but instead to the centre. A modeler creating a CAD model will want to include both shape and design intent in the complete CAD model.

Vendors offer different approaches to getting to the parametric CAD model. Some export the NURBS surfaces and leave it to the CAD designer to complete the model in CAD (e.g., Geomagic, Imageware, Rhino 3D). Others use the scan data to create an editable and verifiable feature based model that is imported into CAD with full feature tree intact, yielding a complete, native CAD model, capturing both shape and design intent (e.g. Geomagic, Rapidform). For instance, the market offers various plug-ins for established CAD-programs, such as SolidWorks. Xtract3D, DezignWorks and Geomagic for SolidWorks allow manipulating a 3D scan directly inside SolidWorks. Still other CAD applications are robust enough to manipulate limited points or polygon models within the CAD environment (e.g., CATIA, AutoCAD, Revit).

From a set of 2D slices

3D reconstruction of the brain and eyeballs from CT-scanned DICOM images. In this image, areas with the density of bone or air were made transparent, and the slices stacked up in an approximate free-space alignment. The outer ring of material around the brain consists of the soft tissues of skin and muscle on the outside of the skull. A black box encloses the slices to provide the black background. Since these are simply 2D images stacked up, when viewed on edge the slices disappear since they have effectively zero thickness. Each DICOM scan represents about 5 mm of material averaged into a thin slice.

CT, industrial CT, MRI, or micro-CT scanners do not produce point clouds but a set of 2D slices (each termed a "tomogram") which are then 'stacked together' to produce a 3D representation. There are several ways to do this depending on the output required:

  • Volume rendering: Different parts of an object usually have different threshold values or greyscale densities. From this, a 3-dimensional model can be constructed and displayed on screen. Multiple models can be constructed from various thresholds, allowing different colours to represent each component of the object. Volume rendering is usually only used for visualisation of the scanned object.
  • Image segmentation: Where different structures have similar threshold/greyscale values, it can become impossible to separate them simply by adjusting volume rendering parameters. The solution is called segmentation, a manual or automatic procedure that can remove the unwanted structures from the image. Image segmentation software usually allows export of the segmented structures in CAD or STL format for further manipulation.
  • Image-based meshing: When using 3D image data for computational analysis (e.g. CFD and FEA), simply segmenting the data and meshing from CAD can become time-consuming, and virtually intractable for the complex topologies typical of image data. The solution is called image-based meshing, an automated process of generating an accurate and realistic geometrical description of the scan data.
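The "stacking" step and the threshold-based segmentation above can be sketched with NumPy; the slice count, greyscale values and threshold below are invented toy numbers, not DICOM semantics:

```python
import numpy as np

# Toy stack of 2D tomograms: 20 slices of 32x32 greyscale values with
# a bright (bone-density) cuboid embedded in soft-tissue noise.
rng = np.random.default_rng(0)
slices = [rng.normal(100.0, 5.0, (32, 32)) for _ in range(20)]
for k in range(8, 14):                    # "bone" spans slices 8..13
    slices[k][10:20, 12:22] = 400.0

volume = np.stack(slices, axis=0)         # (slice, row, col) voxel grid
voxel_size_mm = (5.0, 1.0, 1.0)           # slice axis is the thickest

# Segmentation by a simple greyscale threshold, as described above.
bone = volume > 250.0
bone_volume_mm3 = bone.sum() * np.prod(voxel_size_mm)
print("bone voxels:", int(bone.sum()))            # 600
print("bone volume:", bone_volume_mm3, "mm^3")    # 3000.0 mm^3
```

When structures overlap in greyscale, this global threshold fails, which is precisely the case where the manual or automatic segmentation tools described above take over.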

From laser scans

Laser scanning describes the general method to sample or scan a surface using laser technology. Several areas of application exist that mainly differ in the power of the lasers that are used, and in the results of the scanning process. Low laser power is used when the scanned surface doesn't have to be influenced, e.g. when it only has to be digitised. Confocal or 3D laser scanning are methods to get information about the scanned surface. Another low-power application uses structured light projection systems for solar cell flatness metrology,[40] enabling stress calculation for in excess of 2000 wafers per hour.[41]

The laser power used for laser scanning equipment in industrial applications is typically less than 1 W, usually on the order of 200 mW or less, but sometimes more.

From photographs

3D data acquisition and object reconstruction can be performed using stereo image pairs. Stereo photogrammetry, or photogrammetry based on a block of overlapped images, is the primary approach for 3D mapping and object reconstruction using 2D images. Close-range photogrammetry has also matured to the level where cameras or digital cameras can be used to capture close-look images of objects, e.g., buildings, and reconstruct them using the very same theory as aerial photogrammetry. An example of software which could do this is Vexcel FotoG 5.[42][43] This software has now been replaced by Vexcel GeoSynth.[44] Another similar software program is Microsoft Photosynth.[45][46]
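For a rectified stereo pair, the core geometry reduces to a single formula: a feature's depth is Z = f·B/d, where f is the focal length in pixels, B the baseline between the cameras and d the disparity between the two images. A minimal sketch (the rig numbers are invented for illustration):

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Depth of a feature seen in a rectified stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return f_px * baseline_m / disparity_px

# A feature shifted 40 px between the views of a rig with a 0.5 m
# baseline and an 800 px focal length lies 10 m from the cameras.
print(depth_from_disparity(800.0, 0.5, 40.0))   # 10.0
```

The inverse relationship between depth and disparity is why stereo accuracy degrades with distance: far objects produce small disparities that are hard to measure reliably.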

A semi-automatic method for acquiring 3D topologically structured data from 2D aerial stereo images has been presented by Sisi Zlatanova.[47] The process involves the manual digitizing of a number of points necessary for automatically reconstructing the 3D objects. Each reconstructed object is validated by superimposition of its wire frame graphics in the stereo model. The topologically structured 3D data is stored in a database and is also used for visualization of the objects. Notable software used for 3D data acquisition using 2D images includes Agisoft Metashape,[48] RealityCapture,[49] and ENSAIS Engineering College TIPHON (Traitement d'Image et PHOtogrammétrie Numérique).[50]

A method for semi-automatic building extraction, together with a concept for storing building models alongside terrain and other topographic data in a topographical information system, has been developed by Franz Rottensteiner. His approach was based on the integration of building parameter estimation into the photogrammetry process, applying a hybrid modelling scheme. Buildings are decomposed into a set of simple primitives that are reconstructed individually and then combined by Boolean operators. The internal data structures of both the primitives and the compound building models are based on boundary representation methods.[51][52]

Zeng's approach to surface reconstruction uses multiple images. A central idea is to explore the integration of both 3D stereo data and 2D calibrated images. This approach is motivated by the fact that only robust and accurate feature points that survived the geometry scrutiny of multiple images are reconstructed in space. The density insufficiency and the inevitable holes in the stereo data should then be filled in by using information from multiple images. The idea is thus to first construct small surface patches from stereo points, then to progressively propagate only reliable patches in their neighborhood from images into the whole surface using a best-first strategy. The problem thus reduces to searching for an optimal local surface patch going through a given set of stereo points from images.

Multi-spectral images are also used for 3D building detection. The first and last pulse data and the normalized difference vegetation index are used in the process.[53]

New measurement techniques are also employed to obtain measurements of and between objects from single images, using the projection or the shadow, as well as their combination. This technology is gaining attention given its fast processing time and far lower cost than stereo measurements.[citation needed]

Applications

Space experiments

3D scanning technology has been used to scan space rocks for the European Space Agency.[54][55]

Construction industry and civil engineering

  • Robotic control: e.g. a laser scanner may function as the "eye" of a robot.[56][57]
  • As-built drawings of bridges, industrial plants, and monuments
  • Documentation of historical sites[58]
  • Site modelling and layout
  • Quality control
  • Quantity surveys
  • Payload monitoring [59]
  • Freeway redesign
  • Establishing a benchmark of pre-existing shape/state in order to detect structural changes resulting from exposure to extreme loadings such as earthquake, vessel/truck impact or fire.
  • Create GIS (geographic information system) maps[60] and geomatics.
  • Subsurface laser scanning in mines and karst voids.[61]
  • Forensic documentation[62]

Design process

  • Increasing accuracy working with complex parts and shapes,
  • Coordinating product design using parts from multiple sources,
  • Updating old CAD scans with those from more current technology,
  • Replacing missing or older parts,
  • Creating cost savings by allowing as-built design services, for example in automotive manufacturing plants,
  • "Bringing the plant to the engineers" with web shared scans, and
  • Saving travel costs.

Entertainment

3D scanners are used by the entertainment industry to create digital 3D models for movies, video games and leisure purposes.[63] They are heavily utilized in virtual cinematography. In cases where a real-world equivalent of a model exists, it is much faster to scan the real-world object than to manually create a model using 3D modeling software. Frequently, artists sculpt physical models of what they want and scan them into digital form rather than directly creating digital models on a computer.

3D photography

3D selfie in 1:20 scale printed by Shapeways using gypsum-based printing, created by Madurodam miniature park from 2D pictures taken at its Fantasitron photo booth.

Fantasitron 3D photo booth at Madurodam

3D scanners are increasingly being combined with cameras to represent 3D objects in an accurate manner.[64] Since 2010, companies have been emerging that create 3D portraits of people (3D figurines or 3D selfies).

An augmented reality menu for the Madrid restaurant chain 80 Degrees[65]

Law enforcement

3D laser scanning is used by law enforcement agencies around the world. 3D models are used for on-site documentation of:[66]

  • Crime scenes
  • Bullet trajectories
  • Bloodstain pattern analysis
  • Accident reconstruction
  • Bombings
  • Plane crashes, and more

Reverse engineering

Reverse engineering of a mechanical component requires a precise digital model of the object to be reproduced. Rather than a set of points, a precise digital model can be represented by a polygon mesh, a set of flat or curved NURBS surfaces, or, ideally for mechanical components, a CAD solid model. A 3D scanner can be used to digitise free-form or gradually changing shaped components as well as prismatic geometries, whereas a coordinate measuring machine is usually used only to determine simple dimensions of a highly prismatic model. These data points are then processed to create a usable digital model, usually using specialized reverse engineering software.

Real estate

Land or buildings can be scanned into a 3D model, which allows buyers to tour and inspect the property remotely, from anywhere, without having to be present at the property.[67] There is already at least one company providing 3D-scanned virtual real estate tours.[68] A typical virtual tour would consist of a dollhouse view,[69] an inside view, and a floor plan.

Virtual/remote tourism

The environment at a place of interest can be captured and converted into a 3D model. This model can then be explored by the public, either through a VR interface or a traditional "2D" interface. This allows the user to explore locations which are inconvenient for travel.[70] A group of history students at Vancouver iTech Preparatory Middle School created a Virtual Museum by 3D Scanning more than 100 artifacts.[71]

Cultural heritage

There have been many research projects undertaken via the scanning of historical sites and artifacts both for documentation and analysis purposes.[72]

The combined use of 3D scanning and 3D printing technologies allows the replication of real objects without the use of traditional plaster casting techniques, which in many cases can be too invasive to perform on precious or delicate cultural heritage artifacts.[73] In an example of a typical application scenario, a gargoyle model was digitally acquired using a 3D scanner and the produced 3D data was processed using MeshLab. The resulting digital 3D model was fed to a rapid prototyping machine to create a real resin replica of the original object.

Creation of 3D models for museums and archaeological artifacts.[74][75][76]

Michelangelo

In 1999, two different research groups started scanning Michelangelo's statues. Stanford University, with a group led by Marc Levoy,[77] used a custom laser triangulation scanner built by Cyberware to scan Michelangelo's statues in Florence, notably the David, the Prigioni and the four statues in the Medici Chapel. The scans produced a data point density of one sample per 0.25 mm, detailed enough to see Michelangelo's chisel marks. These detailed scans produced a large amount of data (up to 32 gigabytes), and processing the data from the scans took 5 months. In approximately the same period, a research group from IBM, led by H. Rushmeier and F. Bernardini, scanned the Pietà of Florence, acquiring both geometric and colour details. The digital model resulting from the Stanford scanning campaign was used extensively in the subsequent 2004 restoration of the statue.[78]

Monticello

In 2002, David Luebke et al. scanned Thomas Jefferson's Monticello.[79] A commercial time-of-flight laser scanner, the DeltaSphere 3000, was used. The scanner data was later combined with colour data from digital photographs to create the Virtual Monticello and the Jefferson's Cabinet exhibits shown in the New Orleans Museum of Art in 2003. The Virtual Monticello exhibit simulated a window looking into Jefferson's Library. The exhibit consisted of a rear-projection display on a wall and a pair of stereo glasses for the viewer. The glasses, combined with polarised projectors, provided a 3D effect. Position-tracking hardware on the glasses allowed the display to adapt as the viewer moved around, creating the illusion that the display was actually a hole in the wall looking into Jefferson's Library. The Jefferson's Cabinet exhibit was a barrier stereogram (essentially a non-active hologram that appears different from different angles) of Jefferson's Cabinet.

Cuneiform tablets

The first 3D models of cuneiform tablets were acquired in Germany in 2000.[80] In 2003, the so-called Digital Hammurabi project acquired cuneiform tablets with a laser triangulation scanner using a regular grid pattern with a resolution of 0.025 mm (0.00098 in).[81] With the use of high-resolution 3D scanners by Heidelberg University for tablet acquisition in 2009, development of the GigaMesh Software Framework began, in order to visualize and extract cuneiform characters from 3D models.[82] It was used to process ca. 2,000 3D-digitized tablets of the Hilprecht Collection in Jena to create an Open Access benchmark dataset[83] and an annotated collection[84] of 3D models of tablets freely available under CC BY licenses.[85]

Kasubi Tombs

A 2009 CyArk 3D scanning project at Uganda's historic Kasubi Tombs, a UNESCO World Heritage Site, using a Leica HDS 4500, produced detailed architectural models of Muzibu Azaala Mpanga, the main building at the complex and tomb of the Kabakas (Kings) of Uganda. A fire on March 16, 2010, burned down much of the Muzibu Azaala Mpanga structure, and reconstruction work is likely to lean heavily upon the dataset produced by the 3D scan mission.[86]

"Plastico di Roma antica"

In 2005, Gabriele Guidi et al. scanned the "Plastico di Roma antica",[87] a model of Rome created in the last century. Neither the triangulation method nor the time-of-flight method satisfied the requirements of this project, because the item to be scanned was both large and contained small details. They found, though, that a modulated light scanner was able to provide both the ability to scan an object the size of the model and the accuracy that was needed. The modulated light scanner was supplemented by a triangulation scanner, which was used to scan some parts of the model.

Other projects

The 3D Encounters Project at the Petrie Museum of Egyptian Archaeology aims to use 3D laser scanning to create a high-quality 3D image library of artefacts and enable digital travelling exhibitions of fragile Egyptian artefacts. English Heritage has investigated the use of 3D laser scanning for a wide range of applications to gain archaeological and condition data, and the National Conservation Centre in Liverpool has also produced 3D laser scans on commission, including portable object and in situ scans of archaeological sites.[88] The Smithsonian Institution has a project called Smithsonian X 3D, notable for the breadth of types of 3D objects they are attempting to scan. These range from small objects such as insects and flowers, to human-sized objects such as Amelia Earhart's flight suit, to room-sized objects such as the gunboat Philadelphia, to historic sites such as Liang Bua in Indonesia. Also of note, the data from these scans is being made available to the public for free and is downloadable in several data formats.

Medical CAD/CAM

3D scanners are used to capture the 3D shape of a patient in orthotics and dentistry. They are gradually supplanting tedious plaster casting. CAD/CAM software is then used to design and manufacture the orthosis, prosthesis or dental implants.

Many chairside dental CAD/CAM systems and dental laboratory CAD/CAM systems use 3D scanner technologies to capture the 3D surface of a dental preparation (either in vivo or in vitro), in order to produce a restoration digitally using CAD software and ultimately produce the final restoration using a CAM technology (such as a CNC milling machine or 3D printer). The chairside systems are designed to facilitate the 3D scanning of a preparation in vivo and produce the restoration (such as a crown, onlay, inlay or veneer).

Creation of 3D models for Anatomy and Biology education[89][90] and cadaver models for educational neurosurgical simulations.[91]

Quality assurance and industrial metrology

The digitalisation of real-world objects is of vital importance in various application domains. This method is especially applied in industrial quality assurance to measure geometric dimensional accuracy. Industrial processes such as assembly are complex, highly automated and typically based on CAD (computer-aided design) data. The problem is that the same degree of automation is also required for quality assurance. It is, for example, a very complex task to assemble a modern car, since it consists of many parts that must fit together at the very end of the production line. The optimal performance of this process is guaranteed by quality assurance systems. In particular, the geometry of the metal parts must be checked in order to ensure that they have the correct dimensions, fit together and finally work reliably.

Within highly automated processes, the resulting geometric measures are transferred to machines that manufacture the desired objects. Due to mechanical uncertainties and abrasions, the result may differ from its digital nominal. In order to automatically capture and evaluate these deviations, the manufactured part must be digitised as well. For this purpose, 3D scanners are applied to generate point samples from the object's surface which are finally compared against the nominal data.[92]

The process of comparing 3D data against a CAD model is referred to as CAD-Compare, and can be a useful technique for applications such as determining wear patterns on moulds and tooling, determining accuracy of the final build, analysing gap and flush, or analysing highly complex sculpted surfaces. At present, laser triangulation scanners, structured light and contact scanning are the predominant technologies employed for industrial purposes, with contact scanning remaining the slowest but overall most accurate option. Nevertheless, 3D scanning technology offers distinct advantages compared to traditional touch-probe measurements. White-light or laser scanners accurately digitize objects all around, capturing fine details and freeform surfaces without reference points or spray. The entire surface is covered at record speed without the risk of damaging the part. Graphic comparison charts illustrate geometric deviations at the full-object level, providing deeper insights into potential causes.[93][94]
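The CAD-Compare idea — the signed deviation of each scanned point from the nominal CAD surface — is easy to sketch for an analytic nominal shape such as a sphere. The "scan" below is synthetic noise around an invented 10 mm nominal sphere; real tools compare against full CAD geometry:

```python
import numpy as np

def cad_compare_sphere(points, centre, radius):
    """Signed deviation of scanned points from a nominal sphere:
    positive = material outside the CAD surface, negative = inside."""
    return np.linalg.norm(points - centre, axis=1) - radius

# Synthetic 'scan' of a nominally 10 mm sphere with ~0.05 mm noise.
rng = np.random.default_rng(1)
dirs = rng.normal(size=(1000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
scan = 10.0 * dirs + rng.normal(0.0, 0.05, size=(1000, 3))

dev = cad_compare_sphere(scan, np.zeros(3), 10.0)
print(f"mean deviation {dev.mean():+.4f} mm, max |dev| {np.abs(dev).max():.4f} mm")
tolerance_mm = 0.3
print("part within tolerance:", bool(np.all(np.abs(dev) < tolerance_mm)))
```

The per-point deviations are what the graphic comparison charts mentioned above visualise as a colour map over the part's surface.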

Circumvention of shipping costs and international import/export tariffs

3D scanning can be used in conjunction with 3D printing technology to virtually teleport certain objects across distances without the need to ship them, and in some cases without incurring import/export tariffs. For example, a plastic object can be 3D-scanned in the United States and the files sent to a 3D printing facility in Germany where the object is replicated, effectively teleporting the object across the globe. In the future, as 3D scanning and 3D printing technologies become more and more prevalent, governments around the world will need to reconsider and rewrite trade agreements and international laws.

Object reconstruction

After the data has been collected, the acquired (and sometimes already processed) data from images or sensors needs to be reconstructed. This may be done in the same program or, in some cases, the 3D data needs to be exported and imported into another program for further refining and/or to add additional data. Such additional data could include GPS location data, among others. After the reconstruction, the data might be directly implemented into a local (GIS) map[95][96] or a worldwide map such as Google Earth.

Software

Several software packages are used in which the acquired (and sometimes already processed) data from images or sensors is imported. Notable software packages include:[97]

  • Qlone
  • 3DF Zephyr
  • Canoma
  • Leica Photogrammetry Suite
  • MeshLab
  • MountainsMap SEM (microscopy applications only)
  • PhotoModeler
  • SketchUp
  • tomviz

See also

  • 3D computer graphics software
  • 3D printing
  • 3D reconstruction
  • 3D selfie
  • Angle-sensitive pixel
  • Depth map
  • Digitization
  • Epipolar geometry
  • Full body scanner
  • Image reconstruction
  • Light-field camera
  • Photogrammetry
  • Range imaging
  • Remote sensing
  • Structured-light 3D scanner
  • Thingiverse

References

  1. ^ Izadi, Shahram, et al. "KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera." Proceedings of the 24th annual ACM symposium on User interface software and technology. ACM, 2011.
  2. ^ Moeslund, Thomas B., and Erik Granum. "A survey of computer vision-based human motion capture." Computer vision and image understanding 81.3 (2001): 231-268.
  3. ^ Wand, Michael et al. "Efficient reconstruction of nonrigid shape and motion from real-time 3D scanner data." ACM Trans. Graph. 28 (2009): 15:1-15:15.
  4. ^ Biswas, Kanad K., and Saurav Kumar Basu. "Gesture recognition using Microsoft kinect®." Automation, Robotics and Applications (ICARA), 2011 5th International Conference on. IEEE, 2011.
  5. ^ Kim, Pileun, Jingdao Chen, and Yong K. Cho. "SLAM-driven robotic mapping and registration of 3D point clouds." Automation in Construction 89 (2018): 38-48.
  6. ^ Scott, Clare (2018-04-19). "3D Scanning and 3D Printing Allow for Production of Lifelike Facial Prosthetics". 3DPrint.com.
  7. ^ O'Neal, Bridget (2015-02-19). "CyArk 500 Challenge Gains Momentum in Preserving Cultural Heritage with Artec 3D Scanning Technology". 3DPrint.com.
  8. ^ Fausto Bernardini, Holly E. Rushmeier (2002). "The 3D Model Acquisition Pipeline" (PDF). Computer Graphics Forum. 21 (2): 149–172. doi:10.1111/1467-8659.00574. S2CID 15779281.
  9. ^ "Matter and Form - 3D Scanning Hardware & Software". matterandform.net. Retrieved 2020-04-01.
  10. ^ OR3D. "What is 3D Scanning? - Scanning Basics and Devices". OR3D. Retrieved 2020-04-01.
  11. ^ "3D scanning technologies - what is 3D scanning and how does it work?". Aniwaa. Retrieved 2020-04-01.
  12. ^ "what is 3d scanning". laserdesign.com.
  13. ^ Hammoudi, K. (2011). Contributions to the 3D city modeling: 3D polyhedral building model reconstruction from aerial images and 3D facade modeling from terrestrial 3D point cloud and images (Thesis). Université Paris-Est. CiteSeerX 10.1.1.472.8586.
  14. ^ Pinggera, P.; Breckon, T.P.; Bischof, H. (September 2012). "On Cross-Spectral Stereo Matching using Dense Gradient Features" (PDF). Proc. British Machine Vision Conference. pp. 526.1–526.12. doi:10.5244/C.26.103. ISBN 978-1-901725-46-9. Retrieved 8 April 2013.
  15. ^ "Seismic 3D data acquisition". Archived from the original on 2016-03-03. Retrieved 2021-01-24.
  16. ^ "Optical and laser remote sensing". Archived from the original on 2009-09-03. Retrieved 2009-09-09.
  17. ^ Brian Curless (November 2000). "From Range Scans to 3D Models". ACM SIGGRAPH Computer Graphics. 33 (4): 38–41. doi:10.1145/345370.345399. S2CID 442358.
  18. ^ Cui, Y., Schuon, S., Chan, D., Thrun, S., & Theobalt, C. (2010, June). 3D shape scanning with a time-of-flight camera. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on (pp. 1173-1180). IEEE.
  19. ^ Franca, J. G. D., Gazziro, M. A., Ide, A. N., & Saito, J. H. (2005, September). A 3D scanning system based on laser triangulation and variable field of view[dead link]. In Image Processing, 2005. ICIP 2005. IEEE International Conference on (Vol. 1, pp. I-425). IEEE.
  20. ^ Roy Mayer (1999). Scientific Canadian: Invention and Innovation From Canada's National Research Council. Vancouver: Raincoast Books. ISBN 978-1-55192-266-9. OCLC 41347212.
  21. ^ François Blais; Michel Picard; Guy Godin (6–9 September 2004). "Accurate 3D acquisition of freely moving objects". 2nd International Symposium on 3D Data Processing, Visualisation, and Transmission, 3DPVT 2004, Thessaloniki, Greece. Los Alamitos, CA: IEEE Computer Society. pp. 422–9. ISBN 0-7695-2223-8.
  22. ^ Salil Goel; Bharat Lohani (2014). "A Motion Correction Technique for Laser Scanning of Moving Objects". IEEE Geoscience and Remote Sensing Letters. 11 (1): 225–228. Bibcode:2014IGRSL..11..225G. doi:10.1109/LGRS.2013.2253444. S2CID 20531808.
  23. ^ "Understanding Technology: How Do 3D Scanners Work?". Virtual Technology. Retrieved 8 November 2020.
  24. ^ Sirat, G., & Psaltis, D. (1985). Conoscopic holography. Optics letters, 10(1), 4-6.
  25. ^ K. H. Strobl; E. Mair; T. Bodenmüller; S. Kielhöfer; W. Sepp; M. Suppa; D. Burschka; G. Hirzinger (2009). "The Self-Referenced DLR 3D-Modeler" (PDF). Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2009), St. Louis, MO, USA. pp. 21–28.
  26. ^ K. H. Strobl; E. Mair; G. Hirzinger (2011). "Image-Based Pose Estimation for 3-D Modeling in Rapid, Hand-Held Motion" (PDF). Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2011), Shanghai, China. pp. 2593–2600.
  27. ^ Trost, D. (1999). U.S. Patent No. 5,957,915. Washington, DC: U.S. Patent and Trademark Office.
  28. ^ Song Zhang; Peisen Huang (2006). "High-resolution, real-time 3-D shape measurement". Optical Engineering: 123601.
  29. ^ Kai Liu; Yongchang Wang; Daniel L. Lau; Qi Hao; Laurence G. Hassebrook (2010). "Dual-frequency pattern scheme for high-speed 3-D shape measurement" (PDF). Optics Express. 18 (5): 5229–5244. Bibcode:2010OExpr..18.5229L. doi:10.1364/OE.18.005229. PMID 20389536.
  30. ^ Song Zhang; Daniel van der Weide; James H. Oliver (2010). "Superfast phase-shifting method for 3-D shape measurement". Optics Express. 18 (9): 9684–9689. Bibcode:2010OExpr..18.9684Z. doi:10.1364/OE.18.009684. PMID 20588818.
  31. ^ Yajun Wang; Song Zhang (2011). "Superfast multifrequency phase-shifting technique with optimal pulse width modulation". Optics Express. 19 (6): 9684–9689. Bibcode:2011OExpr..19.5149W. doi:10.1364/OE.19.005149. PMID 21445150.
  32. ^ "Geodetic Systems, Inc". www.geodetic.com. Retrieved 2020-03-22.
  33. ^ "What Camera Should You Use for Photogrammetry?". 80.lv. 2019-07-15. Retrieved 2020-03-22.
  34. ^ "3D Scanning and Design". Gentle Giant Studios. Archived from the original on 2020-03-22. Retrieved 2020-03-22.
  35. ^ Semi-Automatic building extraction from LIDAR Data and High-Resolution Image
  36. ^ Automated Building Extraction and Reconstruction from LIDAR Data (PDF) (Report). p. 11. Retrieved 9 September 2019.
  37. ^ "Terrestrial laser scanning". Archived from the original on 2009-05-11. Retrieved 2009-09-09.
  38. ^ Haala, Norbert; Brenner, Claus; Anders, Karl-Heinrich (1998). "3D Urban GIS from Laser Altimeter and 2D Map Data" (PDF). Institute for Photogrammetry (IFP).
  39. ^ Ghent University, Department of Geography
  40. ^ "Glossary of 3d technology terms". 23 April 2018.
  41. ^ W. J. Walecki; F. Szondy; M. M. Hilali (2008). "Fast in-line surface topography metrology enabling stress calculation for solar cell manufacturing allowing throughput in excess of 2000 wafers per hour". Meas. Sci. Technol. 19 (2): 025302. doi:10.1088/0957-0233/19/2/025302.
  42. ^ Vexcel FotoG
  43. ^ "3D data acquisition". Archived from the original on 2006-10-18. Retrieved 2009-09-09.
  44. ^ "Vexcel GeoSynth". Archived from the original on 2009-10-04. Retrieved 2009-10-31.
  45. ^ "Photosynth". Archived from the original on 2017-02-05. Retrieved 2021-01-24.
  46. ^ 3D data acquisition and object reconstruction using photos
  47. ^ 3D Object Reconstruction From Aerial Stereo Images (PDF) (Thesis). Archived from the original (PDF) on 2011-07-24. Retrieved 2009-09-09.
  48. ^ "Agisoft Metashape". www.agisoft.com. Retrieved 2017-03-13.
  49. ^ "RealityCapture". www.capturingreality.com/. Retrieved 2017-03-13.
  50. ^ "3D data acquisition and modeling in a Topographic Information System" (PDF). Archived from the original (PDF) on 2011-07-19. Retrieved 2009-09-09.
  51. ^ "Franz Rottensteiner article" (PDF). Archived from the original (PDF) on 2007-12-20. Retrieved 2009-09-09.
  52. ^ Semi-automatic extraction of buildings based on hybrid adjustment using 3D surface models and management of building data in a TIS by F. Rottensteiner
  53. ^ "Multi-spectral images for 3D building detection" (PDF). Archived from the original (PDF) on 2011-07-06. Retrieved 2009-09-09.
  54. ^ "Science of tele-robotic rock collection". European Space Agency. Retrieved 2020-01-03.
  55. ^ Scanning rocks, retrieved 2021-12-08
  56. ^ Larsson, Sören; Kjellander, J.A.P. (2006). "Motion control and data capturing for laser scanning with an industrial robot". Robotics and Autonomous Systems. 54 (6): 453–460. doi:10.1016/j.robot.2006.02.002.
  57. ^ Landmark detection by a rotary laser scanner for autonomous robot navigation in sewer pipes, Matthias Dorn et al., Proceedings of the ICMIT 2003, the second International Conference on Mechatronics and Information Technology, pp. 600- 604, Jecheon, Korea, Dec. 2003
  58. ^ Remondino, Fabio. "Heritage recording and 3D modeling with photogrammetry and 3D scanning." Remote Sensing 3.6 (2011): 1104-1138.
  59. ^ Bewley, A.; et al. "Real-time volume estimation of a dragline payload" (PDF). IEEE International Conference on Robotics and Automation. 2011: 1571–1576.
  60. ^ Management Association, Information Resources (30 September 2012). Geographic Information Systems: Concepts, Methodologies, Tools, and Applications: Concepts, Methodologies, Tools, and Applications. IGI Global. ISBN 978-1-4666-2039-1.
  61. ^ Murphy, Liam. "Case Study: Old Mine Workings". Subsurface Laser Scanning Case Studies. Liam Murphy. Archived from the original on 2012-04-18. Retrieved 11 January 2012.
  62. ^ "Forensics & Public Safety". Archived from the original on 2013-05-22. Retrieved 2012-01-11.
  63. ^ "The Future of 3D Modeling". GarageFarm. 2017-05-28. Retrieved 2017-05-28.
  64. ^ Curless, B., & Seitz, S. (2000). 3D Photography. Course Notes for SIGGRAPH 2000.
  65. ^ "Códigos QR y realidad aumentada: la evolución de las cartas en los restaurantes". La Vanguardia (in Spanish). 2021-02-07. Retrieved 2021-11-23.
  66. ^ "Crime Scene Documentation".
  67. ^ Lamine Mahdjoubi; Cletus Moobela; Richard Laing (December 2013). "Providing real-estate services through the integration of 3D laser scanning and building information modelling". Computers in Industry. 64 (9): 1272. doi:10.1016/j.compind.2013.09.003.
  68. ^ "Matterport Surpasses 70 Million Global Visits and Celebrates Explosive Growth of 3D and Virtual Reality Spaces". Market Watch. Market Watch. Retrieved 19 December 2016.
  69. ^ "The VR Glossary". Retrieved 26 April 2017.
  70. ^ Daniel A. Guttentag (October 2010). "Virtual reality: Applications and implications for tourism". Tourism Management. 31 (5): 637–651. doi:10.1016/j.tourman.2009.07.003.
  71. ^ Gillespie, Katie (May 11, 2018). "Virtual reality translates into real history for iTech Prep students". The Columbian. Retrieved 2021-12-09.
  72. ^ Paolo Cignoni; Roberto Scopigno (June 2008). "Sampled 3D models for CH applications: A viable and enabling new medium or just a technological exercise?" (PDF). ACM Journal on Computing and Cultural Heritage. 1 (1): 1–23. doi:10.1145/1367080.1367082. S2CID 16510261.
  73. ^ Scopigno, R.; Cignoni, P.; Pietroni, N.; Callieri, M.; Dellepiane, M. (November 2015). "Digital Fabrication Techniques for Cultural Heritage: A Survey". Computer Graphics Forum. 36: 6–21. doi:10.1111/cgf.12781. S2CID 26690232.
  74. ^ "CAN AN INEXPENSIVE PHONE APP COMPARE TO OTHER METHODS WHEN IT COMES TO 3D DIGITIZATION OF SHIP MODELS - ProQuest". www.proquest.com. Retrieved 2021-11-23.
  75. ^ "Submit your artefact". www.imaginedmuseum.uk. Retrieved 2021-11-23.
  76. ^ "Scholarship in 3D: 3D scanning and printing at ASOR 2018". The Digital Orientalist. 2018-12-03. Retrieved 2021-11-23.
  77. ^ Marc Levoy; Kari Pulli; Brian Curless; Szymon Rusinkiewicz; David Koller; Lucas Pereira; Matt Ginzton; Sean Anderson; James Davis; Jeremy Ginsberg; Jonathan Shade; Duane Fulk (2000). "The Digital Michelangelo Project: 3D Scanning of Large Statues" (PDF). Proceedings of the 27th annual conference on Computer graphics and interactive techniques. pp. 131–144.
  78. ^ Roberto Scopigno; Susanna Bracci; Falletti, Franca; Mauro Matteini (2004). Exploring David. Diagnostic Tests and State of Conservation. Gruppo Editoriale Giunti. ISBN 978-88-09-03325-2.
  79. ^ David Luebke; Christopher Lutz; Rui Wang; Cliff Woolley (2002). "Scanning Monticello".
  80. ^ "Tontafeln 3D, Hetitologie Portal, Mainz, Germany" (in German). Retrieved 2019-06-23.
  81. ^ Kumar, Subodh; Snyder, Dean; Duncan, Donald; Cohen, Jonathan; Cooper, Jerry (6–10 October 2003). "Digital Preservation of Ancient Cuneiform Tablets Using 3D-Scanning". 4th International Conference on 3-D Digital Imaging and Modeling (3DIM), Banff, Alberta, Canada. Los Alamitos, CA, USA: IEEE Computer Society. pp. 326–333. doi:10.1109/IM.2003.1240266.
  82. ^ Mara, Hubert; Krömker, Susanne; Jakob, Stefan; Breuckmann, Bernd (2010), "GigaMesh and Gilgamesh — 3D Multiscale Integral Invariant Cuneiform Character Extraction", Proceedings of VAST International Symposium on Virtual Reality, Archaeology and Cultural Heritage, Palais du Louvre, Paris, France: Eurographics Association, pp. 131–138, doi:10.2312/VAST/VAST10/131-138, ISBN 9783905674293, ISSN 1811-864X, retrieved 2019-06-23
  83. ^ Mara, Hubert (2019-06-07), HeiCuBeDa Hilprecht – Heidelberg Cuneiform Benchmark Dataset for the Hilprecht Collection, heiDATA – institutional repository for research data of Heidelberg University, doi:10.11588/data/IE8CCN
  84. ^ Mara, Hubert (2019-06-07), HeiCu3Da Hilprecht – Heidelberg Cuneiform 3D Database - Hilprecht Collection, heidICON – Die Heidelberger Objekt- und Multimediadatenbank, doi:10.11588/heidicon.hilprecht
  85. ^ Mara, Hubert; Bogacz, Bartosz (2019), "Breaking the Code on Broken Tablets: The Learning Challenge for Annotated Cuneiform Script in Normalized 2D and 3D Datasets", Proceedings of the 15th International Conference on Document Analysis and Recognition (ICDAR), Sydney, Australia
  86. ^ Scott Cedarleaf (2010). "Royal Kasubi Tombs Destroyed in Fire". CyArk Blog. Archived from the original on 2010-03-30. Retrieved 2010-04-22.
  87. ^ Gabriele Guidi; Laura Micoli; Michele Russo; Bernard Frischer; Monica De Simone; Alessandro Spinetti; Luca Carosso (13–16 June 2005). "3D digitisation of a large model of imperial Rome". 5th international conference on 3-D digital imaging and modeling : 3DIM 2005, Ottawa, Ontario, Canada. Los Alamitos, CA: IEEE Computer Society. pp. 565–572. ISBN 0-7695-2327-7.
  88. ^ Payne, Emma Marie (2012). "Imaging Techniques in Conservation" (PDF). Journal of Conservation and Museum Studies. Ubiquity Press. 10 (2): 17–29. doi:10.5334/jcms.1021201.
  89. ^ Iwanaga, Joe; Terada, Satoshi; Kim, Hee-Jin; Tabira, Yoko; Arakawa, Takamitsu; Watanabe, Koichi; Dumont, Aaron S.; Tubbs, R. Shane (2021). "Easy three-dimensional scanning technology for anatomy education using a free cellphone app". Clinical Anatomy. 34 (6): 910–918. doi:10.1002/ca.23753. ISSN 1098-2353. PMID 33984162. S2CID 234497497.
  90. ^ Takeshita, Shunji (2021-03-19). "生物の形態観察における3Dスキャンアプリの活用". Hiroshima Journal of School Education. 27: 9–16. doi:10.15027/50609. ISSN 1341-111X.
  91. ^ Gurses, Muhammet Enes; Gungor, Abuzer; Hanalioglu, Sahin; Yaltirik, Cumhur Kaan; Postuk, Hasan Cagri; Berker, Mustafa; Türe, Uğur (2021). "Qlone®: A Simple Method to Create 360-Degree Photogrammetry-Based 3-Dimensional Model of Cadaveric Specimens". Operative Neurosurgery. 21 (6): E488–E493. doi:10.1093/ons/opab355. PMID 34662905. Retrieved 2021-10-18.
  92. ^ Christian Teutsch (2007). Model-based Analysis and Evaluation of Point Sets from Optical 3D Laser Scanners (PhD thesis).
  93. ^ "3D scanning technologies". Retrieved 2016-09-15.
  94. ^ Timeline of 3D Laser Scanners
  95. ^ "Implementing data to GIS map" (PDF). Archived from the original (PDF) on 2003-05-06. Retrieved 2009-09-09.
  96. ^ 3D data implementation to GIS maps
  97. ^ Reconstruction software
