The Endeavour Project at the University of Cambridge (the project formerly known as VBRAD) is investigating the use of computer vision and related technologies in the automotive world, and in particular how we can develop a better understanding of the driver's activities, emotions, and environment. 

The aim is to allow cars and their related systems to make more intelligent decisions about how to keep the driver informed, comfortable, safe, and heading in the direction they want to go!  This involves several subprojects looking at questions such as:

  • When's the best time to deliver different types of notifications to the driver?
  • Can we make satellite navigation systems use more human-friendly instructions?
    "Turn left after the bus stop, where that yellow car is going..."
  • Is the human driver sufficiently awake and alert for their self-driving car to hand back control to them?
  • We are used to recognising some characteristics of other human drivers: we treat nearby cars differently if, for example, we think their drivers might be beginners, distracted, or aggressive. Can autonomous vehicles also recognise these characteristics and make use of them?

(This project has some factors in common with my Augmented Vehicles work, many years ago at AT&T.)