WO2016146105A1 - Verfahren und vorrichtung zur kalibration einer kamera - Google Patents



Publication number: WO2016146105A1 (application PCT/DE2016/100112)
Authority: WIPO (PCT)
Prior art keywords: camera, screen, pixel, image, image value
Other languages: German (de), English (en), French (fr)
Inventor: Harald Hoppe
Original Assignee: Hochschule Offenburg
Application filed by Hochschule Offenburg
Publication of WO2016146105A1


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 17/00: Diagnosis, testing or measuring for television systems or their details
    • H04N 17/002: Diagnosis, testing or measuring for television cameras
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30204: Marker

Definitions

  • the present invention relates to a method and apparatus for calibrating a camera, and more particularly to non-model based calibration of cameras using a monitor.
  • Digital cameras typically use a CCD or CMOS sensor to capture an image by means of a plurality of pixels. For many applications it is not sufficient to capture only a digital image; additional information is desirable, such as the direction from which each individual camera pixel receives light. To this end the camera is calibrated, which, for example, associates an optical path with each pixel. This makes it possible to determine, for every pixel, the position of the detected point in a given image plane (for example perpendicular to the viewing direction). The calibration thus determines which points a particular camera pixel views or, equivalently, which point of the observed scene is imaged onto which camera pixel.
  • The usefulness of cameras for industrial or non-industrial (e.g. medical) image processing usually depends largely on how accurately the cameras used can be calibrated.
  • The current state of the art for calibrating cameras rests on the basic assumption that the camera to be calibrated can be described sufficiently accurately by a pinhole camera model.
  • In this model there is a unique projection center through which all the lines of sight assigned to the individual camera pixels run.
  • These models usually require that the real scene is imaged largely without distortion onto the image plane of the camera.
  • An additional model is used to describe radial and tangential lens distortion effects.
  • The overall model describing the imaging properties of a camera is based on six extrinsic parameters (three rotational and three translational degrees of freedom), which describe the transformation from the world coordinate system to the camera coordinate system, and five intrinsic parameters, which describe the perspective projection through a projection center.
  • The radial lens distortion is usually described with two to three parameters, the tangential distortion with two parameters.
  • The total model thus comprises up to 16 parameters to be calibrated, of which, however, usually only 12 are used.
  • For calibration, patterns representing points with known 3-dimensional (3-D) world coordinates are held in front of the camera, and the associated 2-dimensional (2-D) image coordinates of these points are determined.
  • Typically, three to ten images of a flat calibration pattern with approximately 50 to 250 uniquely identifiable feature points are used.
  • FIG. 8 shows a calibration pattern which is used to determine the point correspondences and represents a plurality of black dots on a white surface.
  • the calibration pattern is captured by the camera, with the camera pixels detecting either a white screen background or a black dot imaged thereon.
  • a first calibration point 501, a second calibration point 502 and a third calibration point 503 are shown in a central area.
  • The calibration points 501 to 503 define a local coordinate system with respect to which the coordinates of the centers of all circles are determined, for example by simply counting the black dots relative to the calibration points. Once the point correspondences have been determined, the corresponding models are used to determine the parameters.
  • A similar calibration method is described in DE 10 2010 031 215, where a three-dimensional object is used for calibration; the method is based on a model for the intensity distribution of the CCD chip used, and the calibration determines the model coefficients.
  • Another conventional method is described in DE 197 27 281 C1, where a three-dimensional test structure is generated by means of a hologram and is used for calibration, assuming idealized camera optics.
  • The number of point correspondences used to optimize the parameters of the underlying model is usually in the range of 100 to 1000. Compared to the number of pixels of the camera to be calibrated, which can range from 1 to 5 million or even more, this is often too few.
  • The present invention relates to a method of calibrating a camera using a screen, wherein the screen has a set of pixels and the camera uses a plurality of pixels to capture the image.
  • The method comprises the following steps: (a) displaying at least one image value in at least one pixel of the screen based on an image value assignment; (b) detecting the at least one image value by a pixel of the camera; and (c) determining the position of the at least one pixel on the screen based on the at least one captured image value and the image value assignment.
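The interplay of steps (a) to (c) can be sketched in a few lines of Python. This is only an illustration under an idealized assumption not made in the patent, namely that the screen can display enough distinct image values to give every screen pixel a unique value; the function names are hypothetical:

```python
import numpy as np

def encode_positions(width, height):
    """Hypothetical image value assignment (step (a)): give every
    screen pixel a unique image value that encodes its position."""
    xs, ys = np.meshgrid(np.arange(width), np.arange(height))
    return ys * width + xs

def decode_position(value, width):
    """Step (c): recover the screen position (x, y) from the image
    value captured by a camera pixel in step (b)."""
    return int(value) % width, int(value) // width

pattern = encode_positions(8, 6)
# A camera pixel that captured pattern[3, 5] must be looking at
# screen pixel (x=5, y=3):
assert decode_position(pattern[3, 5], 8) == (5, 3)
```

In practice far fewer image values are reliably distinguishable, which is exactly why the description resorts to periodic patterns traversed over time.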
  • the method further comprises shifting the screen or the camera in one direction by an amount such that the at least one pixel is at a different distance from the pixel of the camera than before the shifting, and repeating at least the aforementioned steps for the shifted screen.
  • The camera may have a variable focus that is changed when moving the screen relative to the camera, and the method further comprises storing an association between the camera pixel and the positions of the at least one screen pixel for the different shifted screen positions.
  • the pixels of the camera are individually calibrated. It is understood that the position determination requires that the position of the detected image value on the screen is known. This is done by the image value assignment, which determines which image value is displayed at which position on the screen.
  • The pixels themselves may be built up from screen pixels, i.e., a pixel can consist of one or more screen pixels.
  • the subject invention solves the above-mentioned technical problem by calibrating each individual camera pixel independently of all other pixels (not model-based) with the aid of a screen.
  • A commercially available screen (for example a monitor, television or other display) can be used; the only requirement is that it is able to display a corresponding calibration pattern.
  • the present invention thus includes a new, monitor-based and parameter-free method for calibrating cameras.
  • The result of the calibration may be written to a table (e.g., a lookup table) by storing, for each camera pixel, the set of points mapped onto that pixel. In the simplest case this set is a straight line for each pixel, provided the camera focus is not changed during the calibration.
  • the steps (a) to (c) defined for a particular pixel of the camera may be performed in parallel for all pixels of the camera or for a discrete set of pixels of the camera.
  • an interpolation can be used to unambiguously assign the intervening pixels as well.
  • The interpolation can also be used to achieve a subpixel-accurate mapping. The more point correspondences between screen pixels and camera pixels are determined, the more accurate the result; at the same time, however, the computational effort required to calibrate all pixels also increases.
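Interpolation between two calibrated camera pixels could look like the following sketch; linear interpolation is only one possible choice and is not prescribed by the text:

```python
def lerp(p0, p1, t):
    """Linearly interpolate between the screen positions p0 and p1
    assigned to two calibrated camera pixels (0 <= t <= 1)."""
    return tuple(a + t * (b - a) for a, b in zip(p0, p1))

# Two calibrated pixels look at screen points (10, 20) and (14, 20);
# the pixel halfway between them is assigned the midpoint:
assert lerp((10.0, 20.0), (14.0, 20.0), 0.5) == (12.0, 20.0)
```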
  • The present invention also relates to a method of calibrating a unit of a first camera and a second camera using a screen, wherein the screen has a set of pixels, the first camera uses a plurality of pixels to capture an image, the second camera uses a plurality of pixels to capture an image, and the first and second cameras are in a predetermined relation to each other.
  • the calibration comprises the steps of: (a) displaying at least one image value in at least one pixel of the screen based on an image value assignment; (b) detecting the at least one image value by a pixel of the first camera and by a pixel of the second camera; and (c) determining the position of the at least one pixel on the screen based on the at least one captured image value (BW) and the image value assignment.
  • This method is not limited to two cameras. Rather, a multiplicity of cameras can be calibrated in parallel in this way.
  • the present invention also relates to a method for determining a shift direction of a screen relative to a camera.
  • The method includes (A) calibrating the camera using a screen, the screen having a set of pixels and the camera using a plurality of pixels to capture an image, the calibrating comprising the steps of: (a) displaying at least one image value in at least one pixel of the screen based on an image value assignment; (b) detecting the at least one image value by a pixel of the camera; (c) determining the position of the at least one pixel on the screen based on the at least one captured image value and the image value assignment; (j) shifting the screen or camera along the shift direction by an amount such that the at least one pixel is at a different distance from the pixel of the camera than prior to the shift; and (g) repeating at least steps (b) and (c) for the shifted screen (120i).
  • The method further comprises: (B) rotating the shift direction or the screen so that the screen can be shifted relative to the camera along another shift direction; (C) repeating the calibration according to step (A); and (D) determining the shift direction from a comparison of the calibrations (A) and (C).
  • The present invention also relates to a method for determining refractive properties of a protective layer (e.g., a protective front glass or coating) of a screen using an uncalibrated camera, wherein the camera uses a plurality of pixels to capture an image and the screen has a set of pixels.
  • The method comprises the following steps: (A) displaying at least one image value in at least one pixel of the screen; (B) detecting the at least one image value by a pixel of the camera; (C) rotating the camera or screen about an axis of rotation by an angle; (D) detecting the at least one image value by the pixel of the camera; and (E) determining the refractive properties from the spatial displacement of the image value detected in step (B) relative to the image value detected in step (D), and from the angle.
  • the axis of rotation is in the screen plane or in the vicinity thereof.
  • During the rotation, the viewing line always remains in a plane that is perpendicular to the monitor plane.
  • The image value may be a color value or a grayscale value of a digital image captured by the camera.
  • If the screen has a very high resolution, it is possible for each pixel of the camera to receive a different image value (e.g., a different color or gray value), so that the direction in which the corresponding camera pixel is looking can be determined accurately.
  • the screen defines a unique (world) coordinate system that assigns each pixel a unique 3D coordinate.
  • Further exemplary embodiments relate to a method wherein the at least one image value is a color value or a grayscale value of a digital image, and the image value assignment assigns different image values from a plurality of image values to respectively adjacent pixels of the screen, such that in at least a part of the screen a unique assignment of image values enables localization based on the image values.
  • the calibration can be performed as a successive process in which screen portions (for example, horizontal positions and vertical positions) are successively determined. Likewise, it is not necessary that, for example, the entire horizontal extent of a screen is calibrated in one step. Instead, individual sections or periods can be determined, within which an unambiguous assignment takes place, so that, with knowledge of the position of the period on the screen, an unambiguous assignment of the pixel points to the screen points is made possible.
  • Step (a) may comprise displaying on the screen at least one period of a periodic pattern of image values, the image value being an image value from the periodic pattern, and step (b) may further comprise the steps of: (b1) changing the image value to a changed image value that is part of the periodic pattern of image values; (b2) detecting the changed image value by the camera; and (b3) determining a difference between the detected image value and the detected changed image value for the at least one pixel.
  • the step (c) may then include determining the position within a period in a period direction of the periodic pattern for the at least one pixel based on the determined difference.
  • further embodiments include a method comprising the steps of: (d) rotating the periodic pattern of image values on the screen by a predetermined angle; and (e) repeating steps (b1) through (b3) for the rotated periodic pattern.
  • If a periodic pattern whose period length extends over the entire screen can be displayed (for example, if there are enough distinguishable image values to calibrate all the pixels of the camera in the horizontal direction), such a periodic pattern can be used.
  • The period length can be selected, for example, such that a reliable detection of the image values (or of the differences between the image values) by the corresponding pixel becomes possible. If several such periods are used, a determination of the period into which the particular pixel is looking can take place in further steps. Once the period has been determined, an unambiguous point correspondence is obtained.
  • each period may be characterized by a period offset, the period offset indicating, for example, how many periods the period under consideration is shifted from the fixed point.
  • The period offset is determined, according to further embodiments, by displaying a plurality of periods of the periodic pattern in step (a), the method further comprising the steps of: (f) displaying a fixed image pattern having a predetermined position on the screen, each period of the plurality of periods having a period offset relative to the fixed image pattern; (g) acquiring the fixed image pattern by at least one particular pixel of the camera; (h) assigning the predetermined position to the at least one particular pixel; and (i) determining the period offset of the period for which the position was determined in step (c), based on the predetermined position and the position determined in step (c). This can be done, for example, by counting the periods that lie between the fixed image pattern and the period for which the position was determined in step (c).
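The final position determination combines the integer period offset from steps (f) to (i) with the position found within a period in step (c). A minimal sketch; the function name and the pixel-based period length are assumptions for illustration:

```python
def absolute_position(period_offset, pos_in_period, period_px):
    """Absolute screen coordinate: the number of whole periods between
    the fixed image pattern and the period in question, plus the
    fractional position within that period, scaled by the period
    length in screen pixels."""
    return (period_offset + pos_in_period) * period_px

# A pixel looking 30% into the 4th period of a 10-pixel-wide pattern:
assert abs(absolute_position(4, 0.3, 10) - 43.0) < 1e-9
```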
  • the predetermined position can be defined for example via a center of the image pattern.
  • The periodic pattern may be detected by only a part of the camera and can optionally be represented by a harmonic function.
  • The method may then further include shifting the periodic pattern to another part of the camera, with the part and the other part of the camera partially overlapping, and combining the resulting calibrations.
  • The calibration for the one part of the camera can be combined with the calibration for the other part of the camera so that a common calibration for the union of the two parts is obtained.
  • The distance of the screen and its position relative to the camera need not be measured explicitly. Instead, further embodiments allow the distance of the screen and its position relative to the camera to be determined. This can be done, for example, by further executing the following steps: shifting the screen or the camera by an amount such that the at least one pixel of the camera is at a different distance from the screen than before the shift; and repeating at least steps (b) and (c), i.e., detecting and determining, for the shifted screen. Optionally, some or all of the above steps may also be repeated for the shifted position.
  • the screen may be shifted multiple times, such that for each pixel, a plurality of positions are determined, each relating to different distances of the screen.
  • the number can for example be adapted to the desired calibration quality.
  • the camera either has a fixed focus which remains fixed during the detection steps or a variable focus which is changed by the camera when the screen is moved and / or rotated.
  • In this way, each camera pixel can be assigned a straight line containing all the points that are imaged onto that specific pixel.
  • With a variable focus, each pixel can be assigned a set of points that are imaged onto that specific pixel, and these points may lie arbitrarily close together. Even in this case, each pixel can be assigned a direction for a given focus setting.
  • If the relative position of the individual screen positions among each other is known, repeating the said steps makes it possible to determine unambiguously, for each pixel of the camera, where that pixel is looking. However, determining the exact relative position of the individual screen positions among each other (e.g., by measurement) is often difficult or error-prone.
  • Such a determination is achieved according to further embodiments by the following steps: rotating the screen by a predetermined angle; repeating at least steps (a) to (d) for the rotated screen position; and determining the direction of the shift of the screen based on a comparison between the result obtained in the unrotated position and the result obtained in the rotated screen position.
  • part or all of the above steps may also be repeated for the rotated position.
  • Advantageous embodiments of the present invention are based in particular on a dynamic sequence of a periodic pattern, wherein the period can be selected such that only a small number of image values has to be reliably detectable by the camera, and a successive determination of the position of the image value detected by a pixel of the camera can take place.
  • A protective front glass of the screen/display may cause refraction of light, which can lead to a distortion of the underlying pixel positions; this should be taken into account and eliminated in the end result. Therefore, in further embodiments, the screen includes a protective front glass, and the method further includes correcting for distortions caused by it.
  • Embodiments also include a storage medium having a computer program stored thereon and configured to cause a controller to execute a previously described method when running on a processor (processing unit).
  • the storage medium may be a machine-readable medium including mechanisms for storing or transmitting data in a form readable by a machine (e.g., a computer).
  • Embodiments also include an apparatus for calibrating, using a screen, a camera that uses a plurality of pixels to capture the image, the screen having a set of pixels.
  • the apparatus includes the following features: an output module configured to drive the screen so that the screen presents at least one image value in at least one pixel based on an image value assignment; an input module configured to input at least one image value captured by the camera; and a processing module configured to determine the position of the at least one pixel based on the at least one acquired image value and the image value assignment.
  • the device may, for example, be a control module with a processor running the computer program.
  • FIG. 1 shows a flow chart for a method for calibrating a camera
  • FIG. 2 shows a schematic representation of the assignment of screen dots to camera pixels
  • FIG. 3 shows an exemplary change of image values of a screen point over time
  • FIG. 7 shows a change in the screen position for determining an optical path
  • Fig. 8 is a conventional calibration pattern
  • Fig. 1 shows a method of calibrating a camera using a screen, wherein the screen has a set of pixels and the camera uses a plurality of pixels to capture the image.
  • The method comprises the steps of: displaying S110 at least one image value in at least one pixel of the screen based on an image value assignment; detecting S120 the at least one image value by a pixel of the camera; and determining S130 the position of the at least one pixel on the screen based on the at least one captured image value and the image value assignment.
  • FIG. 2 shows a representation with further details for the calibration of the camera 110 using the screen 120 according to an embodiment of the present invention.
  • The camera 110 has an image plane 113 in which a plurality of camera pixels 112 are arranged.
  • The camera pixels 112 may be arranged as an array on the image plane 113, wherein a first set of camera pixels 112 may be arranged along the horizontal direction and a second set along the vertical direction.
  • the coordinate system can be defined by the (first) screen position in the present invention.
  • the orientation of the camera towards the screen generally does not matter.
  • the coordinate system can also be chosen differently.
  • The camera 110 may be spaced from the screen 120 along the z-direction, wherein the screen 120 and/or the image plane 113 within the camera 110 may extend in the x,y-plane.
  • the x-direction may represent the horizontal direction and the y-direction the vertical direction of the recorded scene.
  • The camera 110 may further comprise an optical system 114 through which light signals 118 from the screen 120 pass in order subsequently to be projected onto the image plane 113 of the camera 110.
  • The screen 120 may be divided into a plurality of pixels 122, which may be defined, for example, such that each camera pixel 112 receives light signals from one pixel 122 of the screen 120. Each such pixel 122 may comprise one or more physical screen pixels.
  • the screen can be, for example, a monitor or display of a computer, a television or another projection device on which, for example, moving images can be displayed.
  • the number of pixels 122 on the screen need not be correlated with the number of camera pixels 112.
  • the number of pixels 122 on the screen 120 may be less than the resolution of the camera 110 (number of camera pixels 112).
  • Advantageously, a correspondingly high-resolution screen 120 is used, which has at least as many pixels 122 as there are camera pixels 112.
  • Ideally, the screen 120 has as high a resolution as possible.
  • During calibration, a direction is determined from which a given camera pixel 112 receives a light signal or, in other words, in which direction the corresponding camera pixel 112 is looking. If, for example, the light signal is projected along an optical path 118 from a first pixel 122a of the screen 120 onto a first camera pixel 112a (after passing through the optics 114), the calibration determines the position of the first pixel 122a for the first camera pixel 112a. Thus an association of the camera pixel 112a with the position of the first pixel 122a is obtained. This assignment is not made only for a first camera pixel 112a but can be made for every camera pixel 112. The calibration thus also determines that a second camera pixel 112b receives a light signal from a second pixel 122b at a particular position on the screen 120.
  • In this way it can be determined, for each individual camera pixel 112, at which pixel 122 that camera pixel is looking. It should be noted, however, that the camera pixel 112 itself has no further information regarding adjacent camera pixels or the orientation of the screen 120. In order to be able to determine the position of the pixel 122 independently of all other information, the corresponding image value displayed by the pixel 122 is determined. If, for example, all the pixels 122 of the screen 120 display a different image value (for example, a distinct gray value or color value) and the camera 110 is able to distinguish all displayed image values, all the camera pixels 112 can already be calibrated by a single acquisition of the screen 120.
  • To this end, each pixel 122 changes the displayed image value BW over time.
  • the change may, for example, be periodic, and upon passage of the period, it may be determined which image value was initially displayed on the respective pixel 122 (assuming the period is known).
  • The optical path 118 is unique provided that all optical paths intersect in one point of the optics (or lens) 114. However, even if all paths intersect in one point, this intersection point is generally not known, and straight lines cannot simply be calculated. In such a case the calibration could also be done with fewer steps. Since these conditions are often not met, the calibration takes place in several steps, which are described in more detail below (see FIG. 7).
  • FIG. 3 shows an exemplary embodiment in which different image values BW1 to BW6 are traversed at a specific pixel 122a over time t.
  • A first image value BW1 may be displayed on the specific pixel 122a at a first time t1.
  • At a second time t2 there is a change to a second image value BW2.
  • At a third time t3 a change takes place to a third image value BW3, and so on, up to a sixth time t6, at which a change to a sixth image value BW6 occurs.
  • Subsequently, the image values BW can be traversed in reverse order (for example, decreasing again) until, at the eleventh time t11, the first image value BW1 is displayed again.
  • image values BW can increase uniformly and relate, for example, to different gray levels or to color values. It is only important that the image values BW are different, it being advantageous if the changes between the image values BW are strong enough so that the camera 110 can detect the change. From the acquisition of all image values BW within the one period from the first time t1 to the eleventh time t11, it is possible to determine which image value BW was displayed at the starting time on the corresponding pixel.
  • In FIG. 4, the pixels 122 of the screen 120 are shown horizontally; in this example there are 80 pixels. These 80 pixels may, for example, be the horizontal pixels of the screen 120, or the vertical pixels, or pixels along any other direction. These 80 pixels are subdivided by way of example into 8 periods p, with each period showing 10 image values.
  • The time is shown vertically, so that the 80 pixels show image 1 at a first time t1 and image 10 at a tenth time t10.
  • Each period p shows 10 image values, starting in image 1 with the first image value BW1 (white), the second image value BW2 up to the sixth image value BW6 (black) and then falling from the fifth image value BW5 until the first image value BW1 is reached again.
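The rising-and-falling sequence of image values sketched in FIGS. 3 and 4 could be generated as follows; the triangular shape and the gray-level range are assumptions for illustration (the description also mentions sinusoidal patterns):

```python
def gray_sequence(steps, levels=256):
    """One period of displayed gray values for a screen pixel: move
    from one extreme gray value to the other in equal steps, then
    back, without repeating the end points."""
    half = steps // 2
    up = [round(i * (levels - 1) / half) for i in range(half + 1)]
    return up + up[-2:0:-1]

seq = gray_sequence(10)
# 10 images per period: one extreme at image 1, the other at image 6,
# then falling back:
assert len(seq) == 10 and seq[0] == 0 and seq[5] == 255
```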
  • the pattern is similar to the pattern shown in FIG. However, it may be different in other embodiments.
  • In this example, the specific pixel 122a shows the fifth image value BW5 at the first time t1.
  • Later the first image value BW1 (white) is displayed, and at the tenth time t10 the fourth image value BW4 is displayed.
  • In this way the period having the period width 10 is traversed once. Based on knowledge of this particular period, it is possible to determine which image value was displayed at the start time t1 on the corresponding pixel.
  • the six image values shown may refer to gray levels, for example, an 8-bit grayscale pattern being representable (i.e., having a size of 256 image values from 0 (black color) to 255 (white color)).
  • Fig. 5 shows an embodiment of the aforementioned periodic pattern. Grayscale values were again used for illustration. However, (other) color values can also be used.
  • The periodicity runs along the vertical direction; in this example more than eight periods are shown. In other embodiments, any number of periods may be used.
  • The number of periods can be chosen such that the difference between the image values of neighboring pixels within a period is large enough to be clearly detected by the corresponding camera. For example, the difference between adjacent image values should be large enough that, as the individual images are traversed (as seen in FIG. 4), the jumps between the image values BW1, BW2, ... can be detected unambiguously by the camera.
  • The number of periods can be selected depending on the specific camera (e.g., its resolution), the screen, and the external conditions. As previously mentioned, at this point it is not yet possible to determine in which period (e.g., in which of the eight periods of FIG. 4) the particular pixel 122a is looking. This can be resolved, for example, by detecting at least one further pixel for which the exact horizontal and vertical period offset on the screen is known. If this one point is known, the absolute position of its period is known. Using the known period pattern, the period offsets of all other periods relative to the known period can then be determined, so that the (absolute) positions of all screen points are known.
  • For this purpose, a pattern is shown on the screen 120 (in front of a background 220) which represents a specific image value at a pixel 124 (calibration pixel).
  • The calibration pixel may comprise a predetermined number of pixels, for example adjacent pixels.
  • the calibration pixel 124 is shown in the fourth vertical period and the sixth horizontal period.
  • When this pattern is now detected by the camera 110, only one pixel or a small set of pixels will capture the calibration pixel 124. If the calibration pixel comprises several pixels, a center 125 can be determined, which can then be assigned to a single camera pixel. Once this is accomplished, it is clear that the camera pixel that has captured the calibration pixel 124 (or its center 125) is "looking" toward the exemplary fourth vertical period and sixth horizontal period. The absolute phase (i.e., not just the phase within a period) of the pixel 124 carrying the calibration mark on the screen 120 is thus known. With the known periodic pattern, the position of every other period relative to this one known period is also known, so for all other camera pixels it is likewise known into which period they were "looking" during the previous determination.
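Determining the center 125 of a calibration mark that covers several camera pixels can be sketched as a simple centroid computation over the pixels that captured the mark; this is an illustrative assumption, the patent does not prescribe a specific method:

```python
import numpy as np

def calibration_center(mask):
    """Sub-pixel center of the calibration mark: mean coordinates of
    all camera pixels (boolean mask) that captured the mark."""
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()

mask = np.zeros((5, 5), dtype=bool)
mask[2, 1:4] = True  # the mark was seen by pixels (1,2), (2,2), (3,2)
cx, cy = calibration_center(mask)
assert (cx, cy) == (2.0, 2.0)
```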
  • The first (or any other) position of the screen 120 defines a global (calibration) coordinate system, for example such that the screen 120 lies in the x,y-plane and the z-direction represents the surface normal.
  • the camera 110 receives a light signal 118 from a particular pixel 122a on the screen 120.
  • the camera 110 detects the displayed gray level value (or any other image value) from the sweep of the periodic pattern (see FIGS. 4 and 5).
  • the period offset can be determined by using the calibration pattern as shown in FIG. 6.
  • the screen 120 is shifted (and/or rotated) along a direction z_s to a new position 120i, and the procedure described above is repeated (see FIGS. 4 to 6).
  • the translation direction z_s is not necessarily parallel to the z-axis; the z-axis may be defined as the direction parallel to the surface normal of the original screen 120 (before translation).
  • a screen point 122i is again determined for the new screen position 120i, which is projected onto the same camera pixel 112a in the camera 110.
  • a relative angle of the screen to the z-axis may be determined as follows.
  • the screen 120 is rotated by a predetermined angle and again shifted relative to the camera 110 along the rotated axis. The measurement is repeated for such a rotated coordinate system.
  • FIG. 5 shows a snapshot of a monitor image for calibrating the camera 110.
  • the camera 110 is positioned, for example, in such a way that the monitor 120 lies in the field of view of the camera 110.
  • a horizontal periodic pattern, for example in the form of a sine function, is displayed on the monitor 120.
  • this pattern is advanced in equidistant steps (for example four, up to as many as desired, preferably eight to ten) by a total of one period.
  • each pixel of the camera 110 that is currently viewing the monitor 120 now sees a gray-value oscillation shifted by a certain phase due to the displacement. This phase shift (between 0 and 2π) can be recovered from the recorded gray values.
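The back-calculation of the phase from the recorded gray values can be sketched as follows. This is our own minimal illustration, not code from the patent; it assumes an ideal N-step phase-shifting scheme with a cosine-shaped pattern, and all function and variable names are ours:

```python
import math

def recover_phase(gray_values):
    """Recover the phase (0..2*pi) of a sinusoidal gray-value
    oscillation sampled at N equidistant pattern shifts."""
    n = len(gray_values)
    # Correlate the samples with one period of sine and cosine.
    s = sum(g * math.sin(2 * math.pi * k / n) for k, g in enumerate(gray_values))
    c = sum(g * math.cos(2 * math.pi * k / n) for k, g in enumerate(gray_values))
    # For a pattern I(k) = A + B*cos(2*pi*k/N + phi), the correlations
    # give s = -B*N/2*sin(phi) and c = B*N/2*cos(phi).
    return math.atan2(-s, c) % (2 * math.pi)

# Example: a pixel observing I(k) = 128 + 100*cos(2*pi*k/8 + 1.0) over 8 steps
samples = [128 + 100 * math.cos(2 * math.pi * k / 8 + 1.0) for k in range(8)]
print(round(recover_phase(samples), 3))  # 1.0
```

With eight steps the phase is heavily overdetermined, which is why the patent prefers eight to ten steps over the minimum of four.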
  • since the monitor 120 displays several gray-value periods, it finally remains only to determine the absolute phase position. This can be done by displaying a further image (the calibration image, as shown in FIG. 6).
  • this calibration image comprises a mark of known phase position, which may for example be the center of the screen 120 or any other point of the screen; it need only be known exactly. The adjacent phases can then easily be determined.
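The determination of absolute phases from one marked point can be sketched as follows. This is our own illustrative Python, not the patent's algorithm; it assumes the phase varies by less than half a period between neighbouring pixels of an image row:

```python
import math

def unwrap_from_marker(wrapped, marker_idx, marker_period, period_len=2 * math.pi):
    """Given wrapped phases along one image row and one pixel whose
    absolute period index is known (from the calibration image),
    assign absolute phases by propagating period counts outwards."""
    n = len(wrapped)
    absolute = [0.0] * n
    absolute[marker_idx] = wrapped[marker_idx] + marker_period * period_len
    k = marker_period
    for i in range(marker_idx + 1, n):          # propagate to the right
        if wrapped[i] < wrapped[i - 1] - period_len / 2:  # wrap detected
            k += 1
        absolute[i] = wrapped[i] + k * period_len
    k = marker_period
    for i in range(marker_idx - 1, -1, -1):     # propagate to the left
        if wrapped[i] > wrapped[i + 1] + period_len / 2:
            k -= 1
        absolute[i] = wrapped[i] + k * period_len
    return absolute

# Example: a linear phase ramp wrapped into [0, 2*pi), marker at pixel 0
true = [0.07 * i for i in range(200)]
wrapped = [t % (2 * math.pi) for t in true]
print(max(abs(a - t) for a, t in zip(unwrap_from_marker(wrapped, 0, 0), true)) < 1e-9)  # True
```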
  • the steps shown can also be repeated vertically, so that a calibration with respect to the vertical direction is also possible.
  • the monitor screen position can then also be converted to mm and specified in a world coordinate system W.
  • the origin of the world coordinate system W may be at the center of the monitor 120, with the x-axis pointing to the right, the y-axis pointing downwards, and the z-axis perpendicular thereto, i.e., along a surface normal of the monitor plane (in FIG. 7, pointing to the rear).
  • the monitor 120 (or, equivalently, the camera 110) is moved on a linear guide rail by a certain amount δz_s, for example in an unknown direction.
  • the x and y values associated with the monitor pixel 122 are calculated as before, but the z coordinate is measured along the unknown displacement direction.
  • the missing transformation between the sheared coordinate system S and the world coordinate system W, which can be used to transform the calibration into a rectangular coordinate system, can be determined by recording further monitor positions that are not parallel to the calibration positions. Most simply, the monitor 120 is rotated by a certain angle about its vertical axis of symmetry and again shifted on the linear guide rail. The unknown direction z_s can then be determined from this data.
  • the calibration is no longer based on a model whose parameters are to be determined. It is simply determined, for each pixel of the camera 110 (about 1 to 5 million), which world points it sees.
  • the result of the calibration is thus a look-up table in which, for each camera pixel 112, a set of points that are mapped onto this pixel is stored. In the simplest case, these sets are straight lines, for example if the camera focus was not changed during the calibration.
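Condensing one look-up-table entry to a straight line can be sketched as follows. This is our own illustration under the fixed-focus assumption; the fitting method (least squares via SVD, using NumPy) is our choice, not prescribed by the patent:

```python
import numpy as np

def fit_sight_line(world_points):
    """Fit a 3D line (point + unit direction) through the world points
    that one camera pixel observed at the different screen positions."""
    pts = np.asarray(world_points, dtype=float)
    centroid = pts.mean(axis=0)
    # The dominant right-singular vector of the centred point cloud
    # is the least-squares line direction.
    _, _, vt = np.linalg.svd(pts - centroid)
    direction = vt[0]
    return centroid, direction / np.linalg.norm(direction)

# Example: a pixel that saw points on the ray (1, 2, 0) + t*(0, 0, 1)
pts = [(1.0, 2.0, z) for z in (0.0, 25.0, 50.0, 75.0)]
p0, d = fit_sight_line(pts)
print(bool(np.allclose(np.abs(d), [0, 0, 1])))  # True
```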
  • the number of dot correspondences used to determine the imaging characteristics of the overall system of camera, lenses, and optical elements is significantly greater in this process. For example, it can be a multiple of the number of camera pixels, that is, several millions, compared to 100 to 1000 in conventional methods based on a model. The time to determine these correspondences is of comparable magnitude.
  • FIG. 9A shows a further embodiment in which the method described above is used not to calibrate a single camera but to calibrate a unit of at least two cameras 110, 210, wherein the cameras 110, 210 are in a fixed relation to one another.
  • the fixed relation defines, for example, the relative position and orientation of the first camera 110 and the second camera 210, and is achieved by attaching the cameras 110, 210 to the guide rail 150.
  • the distance and/or relative tilt between the first camera 110 and the second camera 210 may be predetermined. Only two cameras 110, 210 are shown; however, it should be understood that the method is applicable to any number of cameras. With this method, all cameras can be automatically calibrated in the same coordinate system, as they all look at the same positions of the monitor 120 used.
  • the cameras 110, 210 simultaneously capture the image value of a monitor pixel 122, the first camera 110 with its camera pixel 112 and the second camera 210 with its camera pixel 212.
  • one of the monitor positions, in a preferred embodiment the first monitor position, defines the calibration coordinate system that is the same for all cameras 110, 210.
  • the origin of the calibration coordinate system lies in the center of the image plane of the first monitor position, the x-axis points horizontally to the right, the y-axis points vertically downward, and the z-axis is perpendicular to the monitor plane and points into it (ie along the guide rail 150).
  • if each camera were calibrated in its own calibration coordinate system (and not as a unit), the relative transformations between the individual calibration or camera coordinate systems would have to be determined after the actual calibration. According to this embodiment, this step is omitted, which is an advantage. In addition, a higher accuracy of the overall system is achieved, which can be exploited, for example, for triangulating points in space (e.g., in stereoscopic or 3D cameras).
  • the direction of the linear guide rail 150 in the calibration coordinate system, which is preferably defined by the first monitor position, is not known (see, for example, FIG. 7 or FIG. 9). While the z-axis of the calibration coordinate system is perpendicular to the image plane, the monitor 120 or the camera 110 moved by means of the linear guide rail 150 does not move exactly in the z-direction of the calibration coordinate system, but in an initially unknown direction. If the monitor pixels of the different monitor positions are always assigned the same x and y coordinates, the camera 110 or the cameras 110, 210 are first calibrated in a sheared calibration coordinate system whose z-axis is parallel to the direction of the linear guide rail 150.
  • embodiments make it possible to determine the direction of the linear guide rail 150 in the calibration coordinate system in order to transfer the lines of sight of the individual camera pixels 112, 212 from the sheared to the Cartesian calibration coordinate system.
  • after recording the actual calibration positions, the monitor is rotated by an unknown angle, then brought, in this new angular orientation and with the aid of the linear guide rail 150, into at least one, preferably a plurality of, positions, and the described calibration patterns are again displayed and recorded.
  • from the positions of the monitor pixels seen by the individual camera pixels 112, 212, which appear distorted in the sheared calibration coordinate system, the direction of the linear guide rail 150 can be unambiguously inferred by means of suitable algorithms, and thus the shear of the calibration coordinate system can be corrected.
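Once the rail direction has been determined, the shear correction itself is a simple linear map. The sketch below is our own formulation, assuming the sheared z-coordinate measures the metric displacement along the rail:

```python
import math

def unshear(points_sheared, rail_dir):
    """Map points from the sheared calibration coordinate system,
    where (x, y) are on-screen coordinates and s is the displacement
    along the guide rail, into the Cartesian calibration system."""
    norm = math.sqrt(sum(c * c for c in rail_dir))
    rx, ry, rz = (c / norm for c in rail_dir)   # unit rail direction
    return [(x + s * rx, y + s * ry, s * rz) for x, y, s in points_sheared]

# If the rail happens to coincide with the z-axis, nothing changes:
print(unshear([(1.0, 2.0, 3.0)], (0.0, 0.0, 1.0)))  # [(1.0, 2.0, 3.0)]
```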
  • FIG. 9B shows an exemplary embodiment for determining refractive properties of a screen 120.
  • an uncalibrated camera 110 captures at least one pixel of the screen 120. Subsequently, the camera 110 is rotated about an axis of rotation (by an angle a) which preferably lies in the vicinity of, or at, the image plane of the screen 120. Thereafter, the camera 110 again detects the pixel, which, due to refraction, appears spatially shifted on the screen surface. Finally, the refractive properties are determined from the observed shift of the pixel.
  • the refractive properties include, for example, a refractive index and / or a thickness of the refractive layer.
  • this exemplary embodiment therefore takes into account the fact that a protective layer (front pane or coating) can be located above the radiation-generating elements of the monitor 120, whose refractive properties cause the underlying monitor pixels to appear slightly shifted as seen by the camera 110 to be calibrated. This effect increases with the angle a at which the viewing ray of a camera pixel views the monitor 120. The shift can be corrected if the refractive properties of the protective layer, such as its thickness and refractive index, are known.
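For a plane-parallel protective layer, the shift mentioned above follows from Snell's law. The following sketch uses the standard plane-parallel-plate formula; it is our illustration, not a formula quoted from the patent:

```python
import math

def lateral_shift(t, n, a):
    """Lateral displacement of a viewing ray passing through a
    plane-parallel protective layer of thickness t and refractive
    index n at incidence angle a (in radians), per Snell's law:
    d = t * sin(a) * (1 - cos(a) / sqrt(n^2 - sin^2(a)))."""
    sin_a = math.sin(a)
    return t * sin_a * (1.0 - math.cos(a) / math.sqrt(n * n - sin_a * sin_a))

# Example: 1 mm glass cover (n = 1.5) viewed at 45 degrees
print(round(lateral_shift(1.0, 1.5, math.radians(45)), 3))  # 0.329
```

At normal incidence (a = 0) the shift vanishes, which matches the observation that the effect only becomes relevant for oblique viewing rays.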
  • the embodiment shown in FIG. 9B precisely measures the refractive properties of the protective layer of the monitor used with the aid of the uncalibrated camera.
  • the uncalibrated camera 110 is mounted on a turntable 140, the axis of rotation of which preferably lies near or at the image plane of the monitor 120 to be measured.
  • the uncalibrated camera 110 subsequently views the monitor at known angles a, with the monitor 120 showing the known horizontal and vertical patterns. From the set of monitor pixel positions that a single camera pixel has identified at the different angles in this way, the refractive properties of the protective layer can be unambiguously inferred. For example, up to 10 or 40 or more angles can be measured, the invention not being limited to a specific number of angles.
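One simple way to infer the layer properties from multi-angle observations is to fit the plane-parallel-plate model to the measured shifts. The grid search below is purely our illustrative choice; the patent only states that the properties can be unambiguously inferred by suitable algorithms:

```python
import math

def lateral_shift(t, n, a):
    """Lateral ray displacement in a plane-parallel layer (thickness t,
    refractive index n) at incidence angle a, from Snell's law."""
    sin_a = math.sin(a)
    return t * sin_a * (1.0 - math.cos(a) / math.sqrt(n * n - sin_a * sin_a))

def fit_layer(angles, shifts):
    """Grid-search the thickness t and index n that best explain the
    pixel shifts observed at the known rotation angles."""
    best = (float("inf"), None, None)
    for i in range(1, 51):                 # t = 0.1 .. 5.0 mm
        t = 0.1 * i
        for j in range(1, 101):            # n = 1.01 .. 2.00
            n = 1.0 + 0.01 * j
            err = sum((lateral_shift(t, n, a) - s) ** 2
                      for a, s in zip(angles, shifts))
            if err < best[0]:
                best = (err, t, n)
    return best[1], best[2]

# Simulated measurement: a layer with t = 1.0 mm, n = 1.50
angles = [math.radians(d) for d in (10, 20, 30, 40, 50)]
shifts = [lateral_shift(1.0, 1.5, a) for a in angles]
print(fit_layer(angles, shifts))  # (1.0, 1.5)
```

In practice one would refine the grid result with a continuous optimizer and use the measured shifts of many camera pixels, but the principle is the same.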
  • each camera pixel 112 is assigned a unique line of sight, i.e., the set of all points imaged onto the same camera pixel 112 lies exactly on a straight line. This assumption is only correct if the focus of the camera 110 is fixed and does not change between calibration and use. If the focus of the camera 110 is changed, or the camera is operated with automatic focus readjustment, this assumption no longer holds.
PCT/DE2016/100112 2015-03-16 2016-03-11 Verfahren und vorrichtung zur kalibration einer kamera WO2016146105A1 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102015103785.8 2015-03-16
DE102015103785.8A DE102015103785A1 (de) 2015-03-16 2015-03-16 Verfahren und Vorrichtung zur Kalibration einer Kamera

Publications (1)

Publication Number Publication Date
WO2016146105A1 true WO2016146105A1 (de) 2016-09-22

Family

ID=55910057

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/DE2016/100112 WO2016146105A1 (de) 2015-03-16 2016-03-11 Verfahren und vorrichtung zur kalibration einer kamera

Country Status (2)

Country Link
DE (1) DE102015103785A1 (nl)
WO (1) WO2016146105A1 (nl)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109598715A (zh) * 2018-12-05 2019-04-09 山西镭谱光电科技有限公司 基于机器视觉的物料粒度在线检测方法
CN110351549A (zh) * 2019-07-23 2019-10-18 Tcl王牌电器(惠州)有限公司 屏幕显示状态检测方法、装置、终端设备及可读存储介质
CN113115027A (zh) * 2020-01-10 2021-07-13 Aptiv技术有限公司 校准摄像头的方法和系统
US11348208B2 (en) * 2018-03-08 2022-05-31 Sony Corporation Signal processing apparatus and signal processing method

Families Citing this family (3)

Publication number Priority date Publication date Assignee Title
DE102018129814A1 (de) * 2018-11-26 2020-05-28 Technische Universität Darmstadt Verfahren zur Bestimmung von Kalibrierungsparametern einer Kamera
CN111596802B (zh) * 2020-05-26 2022-12-02 Oppo(重庆)智能科技有限公司 一种触摸屏校准方法、装置及计算机可读存储介质
CN113379835A (zh) * 2021-06-29 2021-09-10 深圳中科飞测科技股份有限公司 检测设备的校准方法、装置、设备及可读存储介质

Citations (5)

Publication number Priority date Publication date Assignee Title
DE19727281C1 (de) 1997-06-27 1998-10-22 Deutsch Zentr Luft & Raumfahrt Verfahren und Vorrichtung zur geometrischen Kalibrierung von CCD-Kameras
DE102005061931A1 (de) * 2005-12-23 2007-06-28 Bremer Institut für angewandte Strahltechnik GmbH Verfahren und Vorrichtung zur Kalibrierung einer optischen Einrichtung
US20110026014A1 (en) * 2009-07-31 2011-02-03 Lightcraft Technology, Llc Methods and systems for calibrating an adjustable lens
US20110261211A1 (en) * 2007-11-02 2011-10-27 Abertec Limited Apparatus and method for constructing a direction control map
DE102010031215B3 (de) 2010-07-12 2011-12-29 Carl Zeiss Smt Gmbh Verfahren sowie Anordnung zur Kalibrierung einer CCD-Kamera

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US7554575B2 (en) * 2005-10-28 2009-06-30 Seiko Epson Corporation Fast imaging system calibration
US8743214B2 (en) * 2011-05-11 2014-06-03 Intel Corporation Display screen for camera calibration
DE102013014475B4 (de) * 2012-08-29 2020-11-12 Technische Universität Ilmenau Verfahren zur Erkennung und Kompensation von Messabweichungen während des Betriebes einer optischen Messvorrichtung, optisches Messverfahren und optische Messvorrichtung


Cited By (6)

Publication number Priority date Publication date Assignee Title
US11348208B2 (en) * 2018-03-08 2022-05-31 Sony Corporation Signal processing apparatus and signal processing method
CN109598715A (zh) * 2018-12-05 2019-04-09 山西镭谱光电科技有限公司 基于机器视觉的物料粒度在线检测方法
CN109598715B (zh) * 2018-12-05 2023-03-24 山西镭谱光电科技有限公司 基于机器视觉的物料粒度在线检测方法
CN110351549A (zh) * 2019-07-23 2019-10-18 Tcl王牌电器(惠州)有限公司 屏幕显示状态检测方法、装置、终端设备及可读存储介质
CN110351549B (zh) * 2019-07-23 2021-11-09 Tcl王牌电器(惠州)有限公司 屏幕显示状态检测方法、装置、终端设备及可读存储介质
CN113115027A (zh) * 2020-01-10 2021-07-13 Aptiv技术有限公司 校准摄像头的方法和系统

Also Published As

Publication number Publication date
DE102015103785A1 (de) 2016-09-22

Similar Documents

Publication Publication Date Title
WO2016146105A1 (de) Verfahren und vorrichtung zur kalibration einer kamera
EP3166312B1 (de) Vorrichtung und verfahren zum justieren und/oder kalibrieren eines multi-kamera moduls sowie verwendung einer solchen vorrichtung
DE102014206309B4 (de) System und Verfahren zum Erhalten von Bildern mit Versatz zur Verwendung für verbesserte Kantenauflösung
DE102006055758B4 (de) Verfahren zur Kalibrierung von Kameras und Projektoren
EP2202994B1 (de) 3D-Kamera zur Raumüberwachung
EP2880853B1 (de) Vorrichtung und verfahren zur bestimmung der eigenlage einer bildaufnehmenden kamera
DE102013014475B4 (de) Verfahren zur Erkennung und Kompensation von Messabweichungen während des Betriebes einer optischen Messvorrichtung, optisches Messverfahren und optische Messvorrichtung
EP2156239A1 (de) Verfahren zur ausrichtung eines optischen elements auf einem bildschirm
EP3775767B1 (de) Verfahren und system zur vermessung eines objekts mittels stereoskopie
EP3136711A1 (de) Verfahren zur erzeugung und auswertung eines bilds
DE102006042311B4 (de) Dreidimensionale Vermessung von Objekten in einem erweiterten Winkelbereich
EP3104330B1 (de) Verfahren zum nachverfolgen zumindest eines objektes und verfahren zum ersetzen zumindest eines objektes durch ein virtuelles objekt in einem von einer kamera aufgenommenen bewegtbildsignal
DE112017001464B4 (de) Abstandsmessvorrichtung und Abstandsmessverfahren
DE102020201814A1 (de) Bildverarbeitungsvorrichtung
EP3822578A1 (de) Adaptiver 3d-scanner mit variablem messbereich
WO2019096339A1 (de) Verfahren zur automatischen wiederherstellung eines eingemessenen zustands eines projektionssystems
DE102004058655A1 (de) Verfahren und Anordnung zum Messen von Geometrien eines Objektes mittels eines Koordinatenmessgerätes
DE102011082280A1 (de) Bildmessvorrichtung und Bildmessverfahren
DE102005061931B4 (de) Verfahren und Vorrichtung zur Kalibrierung einer optischen Einrichtung
DE102013211286A1 (de) Verfahren zur Vermessung eines Werkstücks mit einem optischen Sensor
DE102015117276B4 (de) Verfahren und Vorrichtung zum Vermessen eines Messobjekts mit verbesserter Messgenauigkeit
EP2645331A1 (de) Verfahren zur Verifizierung der Ausrichtung eines Verkehrsüberwachungsgerätes
DE102020107965B3 (de) Verfahren zur optischen Bestimmung einer Intensitätsverteilung
EP3798570B1 (de) Verfahren zur kalibrierung eines optischen messsystems, optisches messsystem und kalibrierobjekt für ein optisches messsystem
EP3185213A1 (de) Verfahren zur erstellung einer tiefenkarte mittels einer kamera

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16720038

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 16720038

Country of ref document: EP

Kind code of ref document: A1