What's AI-Right and AI-Wrong?

17 May 2021
Marty Ellingsworth

Facts are stranger than fiction - Ethics are not even facts

What’s AI-right and AI-wrong? And can people even know the answer?

Having data? Seeing data? Knowing data? Using data? Not using data? Not knowing, seeing, having?

This blog is a DIY Data and AI Ethics Exam. The funny thing about ethics is that there may be no right answer. It’s a framework for considering right and wrong, asking questions, and changing in changing times.

Many of us have, to this day, never actually taken a course or read a book on ethics. If you have to ask, you are one of them - no judgement, just a data point.

There is no certificate of accomplishment for ethics literacy, nor an update for when things change (or, more emphatically, need to change). Many would argue that professional education credits cover this, but without an enforcement mechanism for accountability, it looks more like ethics washing to some.

The truth is stranger than fiction, as markets around the globe, in every industry and situational context, continue to sort out “yours, mine, ours, and theirs” issues of data ownership, privacy, protection, permission, use, and what “fair” even means. Rules, regs, and laws are coming (or not), depending on the politics, economics, and ethics of seemingly every context and jurisdiction.

It’s not just what people do to people with data, but the scary obviousness of what machines could do (or not do) to people with data. The scale of impact of platforms is in the billions of people. One autocratic leader can impact only the people below them, but a machine can network ceaselessly.

This sits in modern day mindsets -- corporate, government, municipal, local, household, individual. One-part Fear, and One-part Greed. And the rest of the parts – ethical and moral debate.

LET’S START THE QUIZ – formulate your opinion on these topical sentences below:

Privacy – a permanent record in your Car Navigation System? Who owns that data if the car is sold or salvaged? Who can view that data? Same thing for your smartphone, smart home, smart watch, social media account/content, etc.

“Extra” data – perhaps facial recognition is very useful for some things (like my iPhone unlocking), but is someone databasing biometrics, my race, ethnicity, gender, gender identity, orientation, religion, age, complexion, etc.? Is storing this sort of data anywhere “EVER” a good idea? What if it's just to suggest my color palette in clothing?

Anonymity versus Identifiability - Show your face - to access your phone, your apps, even the internet (already included if you did step one, except in some countries). If you don’t use your face or fingerprint, your GPS coordinates, movements, and/or device-handling characteristics are strong identifiers. Same with small networks, a small area, or a shared interest – the three combined can be as unique as a rooftop-level 9-digit zip code or a ten-digit phone number. Is it not PII - “Personally Identifiable Information” - just because it is not easy to see, or link, a unique identity? [Not part of this test - that's written down, but changes over time.]
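The combination effect above can be sketched with a toy example. Nothing below names a person, and the records and field names are entirely invented for illustration, but counting how many records share each combination of weak attributes shows how quickly "anonymous" data becomes unique:

```python
from collections import Counter

# Hypothetical records: no single field identifies anyone, but the
# combination of area, device, and interest can be effectively unique.
records = [
    # (area,   device,    interest_group)
    ("94107", "phone-A", "model-trains"),
    ("94107", "phone-A", "gardening"),
    ("94107", "phone-B", "model-trains"),
    ("10001", "phone-A", "model-trains"),
    ("10001", "phone-B", "gardening"),
]

# Count how many records share each exact combination of attributes.
combo_counts = Counter(records)

# A record is re-identifiable when its combination is unique (k = 1).
unique = [r for r, k in combo_counts.items() if k == 1]
print(f"{len(unique)} of {len(records)} records are unique on the combo")
```

In this tiny sample every combination is unique, which is the point: the fields look harmless alone, yet together they act like an identifier.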

Real or Robot - Am I speaking with a synthetic AI? Should I be notified? Should a Robot voice be ‘gender-fied’?

Hidden Influence - What is the motivation behind gender-fying a robot, or a spokesperson for that matter, even a cartoon spokes-animation?

I want to be included - You built a great model on people not like me, so now what? Can I fit?

I want to be excluded - You thought you built a great model on people like me, but I am not like them.

Forget you ever saw me – Can your model forget me, or once seen, forever seen – like a jury? Did you warn me of that (and is all that EULA fine print even fair – ok, that’s an advanced exam question)?

I did not give you permission to see me – Are there any bounds on unintended disclosure, capture, or use of data about me (including at public gatherings, perhaps)?

Speaking of crowds - Tell me what my friends and neighbors are doing. I heard that all my friends and connections acted on this issue (like voting) – should I take the same action too? Wait, is that fake news? Am I being manipulated? Houses on my street used how much less water than mine?

Means to an end - If a “Pied Piper” existed in an algorithm, would it be okay if they helped me make ‘good choices’ or should I always have informed consent and know I am the target of an influence campaign, nudge, or advertisement? Are techniques of psychological warfare fair in peacetime populations?

Past versus Present data - All your model data uses past outcomes that came from a society where segregation was the norm, education was spotty, literacy spottier still, and legal protections had not yet come into play – why is it even relevant? Predicting from events in a biased past carries that era's market conduct forward.

Moral Line – no AI ‘live ammo’ authorized (at least, by us, so far, but it’s a line to not cross, right? Or could I just use it for automated vermin hunting, where’s the harm in that?)

Might makes right - Is it giving too much away to the ‘invisible hand of power’ for someone to infer my wealth, cashflow, purchase preferences, and affinities – and then decide how to target, un-target, or re-target me? Is it different if it is my neighbor down the street versus an algorithm running in a foreign country?

Because I said so - People, especially bullying and self-serving autocrats, create cultures of situational ethics where 'truth to power' equals termination with prejudice. Training an AI model on human decisions can create a 'monster in the machine'. You can give away your ethical objectivity by using subjective or biased data in the first place.

Companies are people too - Speaking of ‘giving stuff away’, hypothetical question, if I web scrape your picture, calculate your biometrics, and database them, do you have any rights to that data, or even to know I exist? Even if I only intend to use it for 'good' - like to verify your identity when you ask me to? What if I were bad?

No harm, no foul (can no foul be the harm) - How many of the offers I never see, would have made me happy? Is there a virtual tarnish to reputation in the job market? Is “too expensive” or “over-qualified” even a thing, or a code word for age? What other hidden factors are being used in hidden decisions [ see “Extra Data” above ]?

I know it when I see it – People have hidden biases, and also hide their biases. Would I know if an algorithm “thought” like that, or used data sources that had hidden influence? How are outcomes catalogued to show any differences by any type of “Extra Data” features?

The rule of law - The problem with laws is that nothing is illegal until there is a law, and it stays illegal until the law is changed. Selective enforcement can be an issue too. Ethics can be like laws, especially when money, big money, could be at stake – but with nothing written down, it is far easier to rationalize an ethics violation, with less penalty potential.

Gold is the rule - Despite my observing you not drive, I am still charging you a flat rate of 10,000 miles a year. (substitute any subscription service here).

The rule is the gold - If I don't offer a mileage risk curve, then it does not matter if you drive less, but I will audit anyone I think is driving way too much.

The cost of gold - Even if I verify you are a safe driver who drives less, you must continue to pay for trip tracing and submit to tracking beyond its rating relevancy, even though a dramatically cheaper, privacy-protecting option of simply photographing the miles on your odometer would do.

The value of gold - As long as I hope to find new ways to create new products and services to sell to you sometime in the future, then you should enjoy paying more now and having your private data permanently collected.

Does having knowledge carry any obligation - If an AI audit-bot could assess bias violations in models and people, what happens to the people, the models, and people using models?

END OF EXAM

BONUS QUESTION – Underlying all this, there is a concern that for some jobs, “people need not apply.”

Algorithm Want Ad: Seeking an algo-work-bot that never gets tired, does what I tell it, draws no salary, does not ask for a raise, never takes vacation, takes up no office space, has unlimited productivity potential, can create valuable data assets from its own work processes, and has the capacity to learn, with bolt-on knowledge modules, instantly. Plus, it creates a permanent record of everything it does, and won’t complain if I just turn it off and never call it again.

And did I mention, AI never asks questions, but if it did, there's no law or compelling ethical reason to make you answer.

Marty – mellingsworth@celent.com

Insight details

Industry
Life & Health Insurance, Property & Casualty Insurance, Retail Banking
Insight Format
Blogs
Geographic Focus
Asia-Pacific, EMEA, LATAM, North America