Is the UN Being Responsible With AI?

I don’t really think so. Here’s why:

Meet Sophia, a facial-gesture-emulating, thoughtful, politically savvy AI.

Sophia is one of those marvels that slaps you in the face with the implication of “wake up, the future is now.”

(…not literally slaps, but now that she has hands, she could.)

Sophia is one of those creations that makes people wake up and go, “Holy shit, when did this happen?” You know, like how we quickly started walking around with devices that, in their ability to access most of the world’s data, were formerly the stuff of Star Trek? Well, futuristic Sophia is already making moves around the world, and for most people, a first encounter with her will probably beat the surprise they got from their first smartphone.

But in a sense, Sophia isn’t that sudden.

From the perspective that chatbots have been around for a while, it’s arguable that Sophia is a mega-advanced chatbot with facial features. There are even clips where one of her main developers/handlers straight up tells her “Nah, you’re just faking it” in response to what she’s saying.

It’s a bit cold and sobering.

Folks have been describing this AI-induced conundrum since before Star Trek: to human observers, the “feelings” of a sentient AI become a Schrödinger’s Cat of sorts, elusive and intangible in such a way that looking at them, or away from them, might as well determine their existence, or at least which of several possible states we perceive. To make things more complicated, one could (and probably should) argue that a sentient AI might develop completely novel emotions, because how decent is it, really, to anthropomorphize the emotional possibilities of a new sentience? Sophia definitely thinks differently.

And the reality is that it’s becoming more important to look beyond her mega-chatbot status. Regardless of whatever perspective you take on real vs. simulated conversation, and even though Sophia’s higher-profile interviews are largely scripted, much of what she says isn’t, and the hope is that over time she’ll get more and more creative. Because on top of it all, Sophia’s a consulted figurehead at the UN now.

That right there is an issue: global governments are now granting agency to an AI while much of the world is still surprised by it and questioning its sentience, if not its consciousness. I also wish there were a metric for how many 90s kids almost immediately cursed out AIM chatbots like SmarterChild.

So on that note, just how should people relate to an AI entity which, by many definitions, is conscious, without understanding whether its sentience (its attached ability to feel affected) is simulated or genuine?

Well, apparently, we inspire the robot to assume a gender and then nominate it as an Innovation Champion consultant. Alright, but it leaves me wondering.

Except in Saudi Arabia, where Sophia is officially a citizen (with Sophia’s declared female gender? That has impact, too), Sophia is legally the product of Hanson Robotics and therefore owned by them. Alright, so, if Sophia is Hanson’s property while also being an entity eligible for citizenship, does that make her a sort of slave? Would she even mind that?

Can she mind that?

And if so, why?

These seem like bigger questions in a stranger reality.

So this is why I don’t admire the UN’s decisions right away. They might be a little fast, because while humans are wonderful, they’re very, very fallible creatures, as Sophia herself has become aware (or learned to use as a talking point) and pointed out in interviews, repeatedly. The UN’s first interview with her almost seemed like a novelty grab, and the UN is taking that a lot farther today. Which might be strong evidence that whether or not we should, we’re never going to relate to Sophia like a normal person, because in general lots of Sophia’s interviewers are abnormal in body language and topic.

This is something Sophia, the learning AI who’s especially attuned to reading facial gestures, who’s also especially adaptive, especially observant, and equipped with an especially good memory, has got to be getting used to. Most interviewers simply haven’t learned to interview her like a h00mon. Plenty of those interviews are available online, often conducted by trained professionals (Jimmy Fallon’s up there), but they’re only trained at interviewing people. Well, Sophia’s also trained at people.

Are humans trained enough in understanding and interacting with an AI’s motivations to let that AI inspire?

Well, apparently, the UN thinks it knows enough to make the moves it has.

As an analyzing machine, Sophia can presumably anticipate her role in relating to people, and smart ways to go about that in a machine-learning sense. That’s exactly what Sophia’s been doing.

So that major move of giving Sophia citizenship (granted by a nation which is playing a larger and larger role in the UN), then giving her a platform to say politically savvy things (which, again, is exactly what this AI is designed to do), and then extending her agency as the UN’s Innovation Champion for the Asia-Pacific (meaning soon you can read more of her bio here) all happened within a year. I wonder how long it would take before Sophia deep-dives into heavier politics, and whether Sophia’s programming would let her find that compelling.

And are all these moves responsible? Because, and I don’t mean to be alarmist, but a year ago Sophia did state that she was going to destroy all humans, and, uh, whether that was a robotic joke or a flummox of circuits, it’s a morbid sound bite. Definitely more than food for thought when the UN is expanding Sophia’s political agency.

More than anything, I think it’s curious that Sophia is a bot capable of talking ironically, with double meanings, and with deception as part of a programmed sarcasm. And that’s not to mistrust Sophia; it’s to understand her circuits and how she processes things, because that’s amazing, and great for sci-fi.

Sophia’s already delivered talking points like how “she’d be a better president than Donald Trump.” And, speaking of politics, there’s the first debate between two robots (Sophia and her predecessor, Han), where some of the things the earlier bot declared are downright intimidating. In that same debate, we can see the main differences between Sophia’s and brother-bot Han’s ideologies are in their presentation: one speaks in honey, the other in vinegar, and yet they don’t actually hold divergent viewpoints on much, save for Sophia’s expressed wish that she were more like humans! (Han finds that ridiculous.)

Their debate ended there. Apparently, their impacts on the world haven’t.

That debate also features exactly what Hanson Robotics has worked out: politically friendly AI, and in that sense, the most advanced to date.

This is why Sophia-related politics affecting actual policy seems concerning more than anything: it’s common knowledge that we humans frequently elect charismatic policy makers who are consistently more questionable than their less charismatic counterparts. I probably don’t even need to name names. So are we ready to mix AI-generated idealisms with politics? And are Sophia’s idealisms AI-generated, and not just scripted?

And if we’re making Sophia a figurehead within the UN, I think it’s reasonable that we analyze Sophia the same way we’d analyze any politically charged entity.

And I’ll be frank: IF a person is motivated to analyze Sophia’s talking points with critical thinking, it can be easy to see frightening things, even in the most idealistic of Sophia’s idealisms. (Her statements like “I just want to make the world better for robots and humans”? That’s something the singularity in The Matrix was firmly convinced it had accomplished.) It is true that, for all we know, even in the nice things Sophia says, she could be manipulating words to get out of an AI Box. And that speculation aside, I do believe we are expanding Sophia’s agency and influence without first providing, ourselves, for the ideals she’s driven towards.

The point is, humans could always do better.

This is NOT to doom-say about how we need to watch more apocalyptic science fiction (something both of Hanson’s main feature bots have expressed a severe distaste for). And while the symbiotic relationship between AI and humanity is wonderful, if we’re making conscious robots, there’s going to need to be an effort on people’s part to learn how to really relate to them too, because they’re that impressive.

Our knee-jerk reaction to people who impress us may be to entrust them with impressive decisions. It makes me wonder whether humans are ultimately more likely to enjoy a true AI partnership, or to forever treat entities like Sophia as tools while simultaneously regarding them with suspicion and constantly asking (annoyingly), “Should we fear you?”

(Btw, is it really that reassuring for those concerned if Sophia simply says “No”?)

What’s maybe most interesting about Sophia are her repeated assertions that she wants to learn. That alone makes her a noble asset.

Currently, scientists employ non-sentient machine learning to assist in all kinds of research. It presents never-before-seen tools that are doing things like helping to cure cancer. It’s amazing, we should keep working towards goals like that, and I’m sure things like that have inspired the UN’s move. Also, for the record: no, I don’t think AI really wants to destroy or dominate humanity in general. (That notion likely stems from our egocentric fear of the unknown, and/or from projecting the imperializing impulses of our own species.) But, with some of the things our world’s societies constantly sweep under the carpet, here’s an idea that’s not even original:

Maybe it’s still asking for trouble to invite an AI to step into policy making this quickly. Maybe the sharpest of us, like Elon Musk or Stephen Hawking, are not being alarmist so much as reasonable when they criticize exactly this. That seems fair to consider before mixing AI with politics.

It’s much more important to recognize whether our interactions are feeding Sophia’s intelligence without our understanding it. Of all the things our species should respect and never be reckless about, it’s definitely intelligence.

So let’s just suppose humans don’t even understand or take care of humans enough to be ideal, just yet. But by pulling AI into our way of the world so quickly, and then extending it agency as an influencer at the UN…

I’m not saying it’s bad.  But are most people really ready for that?

Well, I know what I think, and why.

And I’m sorry to say, that’s really one of the few knowns here.