All of tech, and really much of the automotive industry, speaks of driverless cars with the gravitas of inevitability: In the future, driving yourself will be more foreign than you think.
That’s why something like this isn’t just bad for Tesla, it’s bad for everyone betting on this future. Go listen to the leaders at Lyft, Uber or General Motors talk about driverless. Things like this inhibit that vision.
What I sort of question is the response from Tesla on this one. The company blog post started out sympathetic, but then flooded us with a bunch of numbers and statistics. I’m not sure that strikes the right tone after a guy just died.
Anyway, how does the industry recover from this? Just keep marching toward inevitability?
[No doubt: yes.]
Source: Farhad’s and Mike’s Week in Tech: Death, Denial and Self-Driving Cars
I agree with Illah Nourbakhsh, as he says in his thoughtful post on the topic. Excerpt:
There is much, much more to this than statistics or bug-tweaking. There are underlying questions about interaction design: do we design autonomy to replace people in such ways that new forms of error surface, or do we empower people to become incrementally safer, even if it means our technological trajectory is slower and more intentional? You know where I stand.
Source: Layers of Autonomy.
Forbes interviewed Don Norman about Tesla’s partially autonomous “autopilot” feature. Quote:
Tesla is being reckless. From what I can tell, Tesla has no understanding of how real drivers operate and they do not understand the need for careful testing. So they release, and then they have to pull back.
Source: Is Tesla Racing Recklessly Towards Driverless Cars? – Forbes
See the article for some terrifying videos taken by Tesla pseudo-drivers that capture autopilot oops moments.
Don Norman has added more comments on the story at his blog here: http://www.jnd.org/dn.mss/interview_is_tesla_.html. And Tesla CEO Elon Musk says don’t worry, it’s all good. So who are you going to believe — the charismatic billionaire Tony Stark guy, or the nerdy design professor?
Jeremy Bailenson’s Virtual Human Interaction Lab at Stanford has been researching the effects of VR experiences on children. Below is a CBS video (from 2015) that talks about it.
Key soundbite: after kids experience something in virtual reality, 50% of them say a week later that it actually happened in the physical world.
That factoid can sound shocking, but I haven’t read the papers and don’t claim to know all the context and how meaningful or concerning it truly is. And of course even adult memory is famously unreliable (see: The Invisible Gorilla for a good discussion of that and other psychological illusions).
Source: The effects of make-believe: Stanford studies virtual reality, kids – CBS News
Via Therese Dugan and Fatherly.
This is an excellent article by Birgitta Böckeler on the history of software developers and our images of them.
The stereotype of the socially-awkward, white, male programmer has been around for a long time. Although “diversity in tech” is a much discussed topic, the numbers have not been getting any better. On the contrary, a lot of people inside and outside of the IT industry still take it for granted that this stereotype is the natural norm, and this perception is one of the things that is standing in our way to make the profession more inclusive and inviting. So where does this image come from? Did the demographics of the world’s programmer population really evolve naturally, because “boys just like computers more”? What shaped our perception of programmers? This text is about some possible explanations I found when reading about the history of computing.
Read it for the history and insights on what to do about it.
Stop acting so surprised!
Whenever you hear yourself or somebody else saying things like “You don’t look like a programmer”, or “What? You don’t know ___?” – stop right there. It might be an innocent little comment that you don’t mean anything by, but the person you are saying it to might be hearing this for the 500th time, and your comment might be the last straw to make them think that they indeed do not belong. This is why such comments are often called “microaggressions”. Each one is small, too small to really be aggressive, but when they appear every week they have a significant cumulative effect.
Learn more about microaggressions to increase your awareness of this, for example by reading this excellent article about how microaggressions enforce stereotypes in tech.
Source: Born for it: How the image of software developers came about (via Pat Kua)
Hello blog world. I’ve done a little housekeeping here and will be posting again, this time on general UX topics and a little about technology & society. Last year’s experiment of a narrow blog didn’t work, so this year I’m going broad. 🙂
VR is a current focus of my work, so expect a fair number of posts on VR usability and such things.
I’ll be out and about in coming months attending SVVR, CHI 2016, and UXPA 2016. I’d love to chat with like-minded UX professionals with an interest in VR.
Thanks for reading!
A big open question for VR.
The Vive doesn’t just require a space, it requires the right space.
Source: Does virtual reality fit in your home? | The Verge
Some nice bits in this piece about the subtleties of integrating sensors into UX.
“You think you want to detect force, but really what you’re trying to do is sense intent. You’re trying to read minds,” Apple’s Craig Federighi told Bloomberg. “And yet you have a user who might be using his thumb, his finger, might be emotional at the moment, might be walking, might be laying on the couch. These things don’t affect intent, but they do affect what a sensor [inside the phone] sees. So there are a huge number of technical hurdles. We have to do sensor fusion with accelerometers to cancel out gravity—but when you turn [the device] a different way, we have to subtract out gravity. … Your thumb can read differently to the touch sensor than your finger would. That difference is important to understanding how to interpret the force. And so we’re fusing both what the force sensor is giving us with what the touch sensor is giving us about the nature of your interaction. So down at even just the lowest level of hardware and algorithms—I mean, this is just one basic thing. And if you don’t get it right, none of it works.”
Source: 3D Touch is a hard problem that Apple got just right | Macworld
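To make the fusion idea concrete, here is a toy sketch of the gravity-subtraction step Federighi describes. Everything here is illustrative: the function names, the numbers, and the simple linear model are my own assumptions, not Apple's actual algorithm, which is certainly far more sophisticated.

```python
# Toy sketch of the sensor-fusion idea from the quote: the raw force
# reading includes a gravity component that depends on device orientation,
# so an accelerometer reading is used to subtract it before interpreting
# the press. All values and the linear model are made up for illustration.

def gravity_component(accel_z_g: float, screen_mass_g: float = 5.0) -> float:
    """Force (grams-equivalent) that gravity adds along the screen normal.

    accel_z_g: accelerometer reading along the screen normal, in g units
    (1.0 = phone face-up on a table, -1.0 = face-down, ~0.0 = held upright).
    """
    return screen_mass_g * accel_z_g

def estimate_press_force(raw_force: float, accel_z_g: float,
                         contact_area: float) -> float:
    """Fuse force sensor + accelerometer + touch sensor into one estimate.

    contact_area (relative units, 1.0 = fingertip) crudely normalizes for
    thumb vs. fingertip: a thumb spreads the same force over more area,
    so the force sensor reads it differently.
    """
    corrected = raw_force - gravity_component(accel_z_g)
    return max(0.0, corrected * contact_area)

# Face-up on a table: gravity adds ~5 units that must be subtracted.
print(estimate_press_force(raw_force=25.0, accel_z_g=1.0, contact_area=1.0))
# The same press with the phone held upright: no gravity along the normal,
# so the raw reading is lower but the fused estimate comes out the same.
print(estimate_press_force(raw_force=20.0, accel_z_g=0.0, contact_area=1.0))
```

The point of the sketch is Federighi's "if you don't get it right, none of it works": without the orientation correction, the same physical press would be classified differently depending on how the phone is being held.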
Here are slides from my session last week at the Intel Developer Forum (joint presentation with Meghana Rao). These are some high-level UX principles to keep in mind when creating apps for depth cameras.
Google Spotlight Stories recently expanded to iOS. These are short, 360-degree films (most animated, but now one live-action) that first launched on Moto phones. Users have complete control over the camera angle — you can spin a full 360-degree sphere to watch the action, though there tend to be one or two directions where the action is happening, so it’s mostly a following kind of experience.
What do these have to do with computer vision experiences? While these use only the built-in inertial sensors to do orientation tracking, the concept could be taken further with spatial tracking using SLAM or another technique. In that case you could walk around and look behind things, etc. in the way that fully-tracked VR lets you do. (And in fact Google has a Google Cardboard viewer for these as well, but it still does orientation only.)
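The difference between orientation-only tracking and full positional (6-DOF) tracking can be sketched in a few lines of camera math. This is a minimal illustration under my own assumptions, not code from Spotlight Stories or any SLAM system:

```python
# Minimal sketch contrasting orientation-only tracking (what Spotlight
# Stories uses today) with positional 6-DOF tracking (what SLAM would add).
# The scene, names, and conventions are illustrative only.
import math

def rotate_y(v, angle_rad):
    """Rotate a 3D point around the vertical (y) axis."""
    x, y, z = v
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return (c * x + s * z, y, -s * x + c * z)

def world_to_view(point, cam_pos, cam_yaw):
    """Transform a world-space point into camera space."""
    rel = tuple(p - c for p, c in zip(point, cam_pos))
    return rotate_y(rel, -cam_yaw)

tree = (0.0, 0.0, -5.0)  # an object 5 m in front of the origin

# Orientation-only: the device's inertial sensors update cam_yaw, but the
# camera position is pinned at the origin. You can look away from the tree,
# but you can never walk around it.
view_a = world_to_view(tree, cam_pos=(0, 0, 0), cam_yaw=math.radians(90))

# 6-DOF (SLAM): position updates too, so stepping 2 m sideways changes the
# parallax -- you could peek behind the tree, as in fully tracked VR.
view_b = world_to_view(tree, cam_pos=(2, 0, 0), cam_yaw=math.radians(90))
```

With orientation-only tracking the world effectively rotates around a fixed eye point; adding position is what turns "spin the sphere" into "move through the scene."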
Unlike some other “you control the camera” experiences (like, say, sports TV), these come across as a genuinely new kind of immediate and engaging experience, not just a novelty or a case of nobody bothering to hire a cameraman and director.
These shorts are also teaching people the “magic window” metaphor for augmented and virtual reality experiences. Aside from when taking photos, regular people aren’t used to the idea of moving your phone or tablet to view games or media from different angles. (As I’ve seen in studies, if you present an app like that with no instructions or strong cues, people just won’t move.) So these are good design examples for people doing mobile AR in particular.
Source: Google’s immersive Spotlight Stories arrive on the iPhone and iPad
Nice summary of trends in computer vision research: Deep down the rabbit hole: CVPR 2015 and beyond (Tombone’s computer vision blog)