PhD, University of Illinois at Urbana-Champaign
M.Phil, University of Edinburgh
B.Sc. (Hons.), University of Western Ontario
Email: harlyn dot baker at hp dot com
Phone: (650) 857-2584
Cell: (650) 224-8120
Fax: (650) 852-3791
Address:
Hewlett-Packard Laboratories
1501 Page Mill Road, MS 1203
Palo Alto, CA 94304
I have worked in the field of computer vision for
about three decades. From early studies in Edinburgh (under
Donald Michie and
Robin Milner) addressing 3D stereo modeling and
recognition, through stereo developments at Urbana-Champaign (under
Dave Waltz),
through dynamic-programming stereo research at the Stanford AI Lab
(with Tom Binford)
and some dozen years at SRI, where I co-developed
Epipolar Plane Image (EPI) Analysis with
Bob Bolles and David Marimont and created the first spatiotemporal manifold
representation process, I have been an early and continuing
contributor to the analysis of multi-image data. Applying 3D image
analysis insights broadly, I have analyzed the structural
implications of fossil data (Lucy and her contemporaries) and
performed the earliest simulated surgical procedures/surface
manipulations from acquired imagery.
While at SRI, I received one of the first
government contracts to model anatomy from the NIH's Visible Human
Dataset. The EPI work from that time, widely regarded as seminal, has been
instrumental in most later developments in Light Field analysis and
holographic display representations, including RaySpace and Hogel
formulations. On leaving Interval Research, I co-founded a real-time
stereo ranging company (TYZX), then joined Hewlett-Packard
Laboratories in 2000, where I was technical lead in developing an
augmented-reality multi-participant videoconferencing technology,
designed and structured camera systems to support this (now moving
into their fourth generation), helped devise a cosmetics
recommendation service using images acquired over cell phones, and
moved more recently to the development of automultiscopic imaging and
display systems for 3D interaction and immersive experiences.
My focus throughout has been on exploiting
massively redundant imaging for communication, visualization, and
spatial understanding.