Automotive Design and Production

SEP 2014




The near image plane appears to the driver 2.4 m away, or, in effect, at the end of the hood. The information occupies an area of 210 x 42 mm. It is produced by a picture generating unit (PGU) that consists of a thin-film transistor (TFT) display backlit by LEDs. This is then bounced off of a curved mirror that both enlarges the information and "places" it at the end of the hood.

Then there is the second, farther layer, the "augmentation" layer. It appears to be 7.5 m in front of the driver. Stephan Cieler, who worked on the human-factors setup for the AR-HUD, says that the distance is important because beyond 7.5 m the projected information would overlap the vehicles ahead and might interfere with the driving tasks.

Digital Mirrors

Again, this is done with mirrors. Specifically, a digital micromirror device (DMD) sits at the center of the AR-HUD's PGU. DMD is technology from Texas Instruments (ti.com) similar to that which TI developed for DLP Cinema, its digital approach to showing movies; a DMD for a commercial film has up to 8.8-million microscopic mirrors. In the case of the AR-HUD, the DMD, an optical semiconductor, includes thousands of tiny mirrors that are electrostatically tilted. The mirrors are illuminated by LEDs in the three primary colors (red, green, and blue), with some mirrors reflecting the light and others allowing it to pass through (each is dedicated to a particular color). This then goes to a focusing screen, then is reflected onto a larger mirror, then on to the windshield. The dimension of the augmented viewing area is approximately 130 x 63 cm.

Making it possible for the augmentation of reality in front of the driver is the "AR-Creator," which is based on a 1.2-GHz quad-core processor.
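From the image sizes and distances the article gives, the angular field of view of each layer can be estimated with the standard full-angle formula; the numbers below come from the text, while the function itself is an illustrative sketch, not anything from Continental:

```python
import math

def fov_deg(size_m: float, distance_m: float) -> float:
    """Full angular extent (degrees) of a flat image of the given
    size viewed head-on at the given distance."""
    return math.degrees(2 * math.atan(size_m / (2 * distance_m)))

# Near (status) layer: 210 x 42 mm virtual image at 2.4 m
near_h = fov_deg(0.210, 2.4)
near_v = fov_deg(0.042, 2.4)

# Far (augmentation) layer: 130 x 63 cm virtual image at 7.5 m
far_h = fov_deg(1.30, 7.5)
far_v = fov_deg(0.63, 7.5)

print(f"near layer: {near_h:.1f} x {near_v:.1f} deg")  # ~5.0 x 1.0 deg
print(f"far layer:  {far_h:.1f} x {far_v:.1f} deg")    # ~9.9 x 4.8 deg
```

The augmentation layer thus spans roughly ten degrees horizontally, which is why it can "paint" markings across an entire lane ahead of the car.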
It takes information from the radar sensor used for adaptive cruise control, a Continental mono camera used for lane keeping and object detection, and an "eHorizon," which is essentially information generated from the navigation/GPS system (they are looking at using information from things like vehicle-to-vehicle and vehicle-to-infrastructure communication, once it becomes available). Because the vehicle is moving, they use a Kalman filter algorithm (a linear quadratic estimator) that helps determine the future state of affairs (with "future" being on the order of 80 ms).

So, all this said, what does the driver see? In the case of the adaptive cruise control (ACC) setting, the vehicle in front is marked with a green crescent below its rear fascia; then there is a series of blue trapezoids decreasing in size from the front of the car being driven to the green crescent, with each trapezoid signifying distance based on time (between 1 and 1.2 seconds). Should the driver use the accelerator to override the ACC and get too close to the vehicle ahead, the green crescent turns red.

For lane-keeping assist, there are "cats-eyes," a series of red dots that appear "on" the road, on the right or left side, depending on where the car is veering from its intended path. These are based on the road-mounted reflectors that are perhaps more familiar to drivers in Europe than in the U.S. (Supplementing the visual cues in the Kia K9 are haptic devices in the seat cushion that provide a left or right vibration.)

Then there is the navigation system. In this case, a series of blue arrows is used to show the route. Just imagine a series of shapes like this ^ in blue, stacked in the direction that one is to drive, shifting to the left or right when one is to make a turn. They call it a "fishbone."
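The 80 ms lookahead corresponds to the predict step of a Kalman filter: project the tracked state (and its uncertainty) forward in time so the graphics land where the object will be, not where it was. A minimal constant-velocity sketch, assuming a simple range/closing-speed state and illustrative noise values (the production AR-Creator fuses radar, camera, and eHorizon data with a richer model):

```python
DT = 0.080  # prediction horizon from the article: ~80 ms

def kalman_predict(x, P, q_pos=0.01, q_vel=0.1):
    """Project state x = [range (m), closing speed (m/s)] and its 2x2
    covariance P forward by DT seconds: x' = F x, P' = F P F^T + Q,
    with F = [[1, DT], [0, 1]] and assumed process noise Q."""
    pos, vel = x
    x_pred = [pos + vel * DT, vel]
    p00, p01, p10, p11 = P[0][0], P[0][1], P[1][0], P[1][1]
    P_pred = [
        [p00 + DT * (p01 + p10) + DT * DT * p11 + q_pos, p01 + DT * p11],
        [p10 + DT * p11,                                 p11 + q_vel],
    ]
    return x_pred, P_pred

# Lead vehicle 20 m ahead, closing at 1.5 m/s, unit initial covariance
x, P = [20.0, 1.5], [[1.0, 0.0], [0.0, 1.0]]
x_pred, P_pred = kalman_predict(x, P)
print(x_pred)  # gap projected 80 ms ahead: [20.12, 1.5]

# For scale, the ACC trapezoids' 1-1.2 s time gap at 25 m/s (90 km/h)
# would correspond to markers spanning roughly 25-30 m of road.
```

The growth of `P_pred` relative to `P` is the filter's way of admitting that an 80 ms extrapolation is less certain than the last measurement.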
Cieler says that when working on the interface they ran a number of tests to determine whether it might be better to have something like a solid carpet (think: yellow brick road) or solid arrows in front of the vehicle. These proved not to be as useful for the driver. The mantra used during development of the human-machine interface was, Cieler says, the phrase generally associated with Mies van der Rohe: "Less is more." The goal is not to overload the driver with too many inputs but to make the information useful and/or actionable.

Making It Smaller

Speaking of "less," one of the challenges that Eelco Spoelder says Continental faces as it moves toward the AR-HUD series-development mark in 2017 is size. A conventional HUD system requires some 4 liters (244 in³) in the instrument panel. That's about the size of a football. Spoelder says they're working toward an AR-HUD system that would be about 11 liters (671 in³). That's about the size of two soccer balls. He says that because they have more than a decade's worth of experience in the development, engineering, and manufacturing of HUD systems, he is confident they will be able to achieve the smaller size for the AR-HUD. (One of the benefits of the combiner HUD is that it is the most compact of all, requiring only about 2 liters (122 in³), which allows deployment in compact cars.)

Space notwithstanding, Spoelder is convinced that AR-HUD is a key enabler as OEMs move toward automated driving systems: "We are certain that AR-HUD technology will make it even easier for future ADAS [advanced driver assistance systems] functions and automated driving functions to gain acceptance among end customers." Why? Because they feel that even if the vehicle is partially or fully driving itself, people still want to know what's going on. And by having augmented reality in addition to the now-conventional heads-up readouts, there is greater potential awareness.
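The paired liter and cubic-inch figures above are straight unit conversions (1 in³ = 16.3871 mL), which a few lines of arithmetic confirm:

```python
ML_PER_CUBIC_INCH = 16.3871  # exact to the digits shown: 1 in^3 = 16.3871 mL

def liters_to_cubic_inches(liters: float) -> float:
    """Convert a package volume in liters to cubic inches."""
    return liters * 1000.0 / ML_PER_CUBIC_INCH

# Package volumes quoted in the article
for name, liters in [("conventional HUD", 4.0),
                     ("AR-HUD target", 11.0),
                     ("combiner HUD", 2.0)]:
    print(f"{name}: {liters} L ~= {liters_to_cubic_inches(liters):.0f} in^3")
# conventional HUD: 4.0 L ~= 244 in^3
# AR-HUD target: 11.0 L ~= 671 in^3
# combiner HUD: 2.0 L ~= 122 in^3
```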
