Personality Retention

That is not a photograph of me. It’s not even a photograph.

The hair isn’t the style I’ll be using. And the neutral expression isn’t quite right. But other than that, you are the first people in the world to meet Peter 2.0. This is a frozen image of the avatar we’re building – based on me three years ago, before I lost any muscle.

As you can see, I’m not envisioning something that looks like a cartoon; I’m envisioning something that every instinct tells you is a real human being – ME. I’m envisioning hardware and software that can deliver that in real-time. And if initially we can’t quite reach photorealism in real-time, then at least from the very beginning we can achieve it for pre-prepared sequences.

I envision tight coupling of natural facial movements with voice synthesis, as well as AI-generated facial body-language based not only on listening to ongoing conversations and sudden noises, but also on watching what is going on, detecting and interpreting movements, and so on.

I envision giving a keynote speech where my avatar is shown on the auditorium screen (not an image of my almost-paralysed, speechless body), taking a Skype call or recording a podcast where my Peter 2.0 avatar is all that people see, or even holding a face-to-face conversation with someone who ends up interacting with my Peter 2.0 avatar software, not my Peter 1.0 wetware at all.

Unbelievably, even by the end of 2019 I will have crossed an invisible frontier: the real me, my truest persona, the only window you’ll have into who I really am, will be completely artificial. As people look at my biological body today, they are looking at a prototype.

Oh, and of course, my avatar will never age…

In fact, I envision that we eventually get the AI on all this so good that one day I’ll unexpectedly die – and no one will notice. After a couple of days, someone may ask me “Can you smell something?” and my avatar (perfectly truthfully) will answer: “Can’t smell a thing.”