An Ethical Horizon for Artificial Intelligence: TRULY FAIR INFORMATION PRACTICE (Part 3/4)
PART 3
One of the problems that continues to aggravate consumers of algorithmic artificial intelligence today is the lack of fairness in information practice. Many people aware of adware, spyware, and the blurred lines of malware deployed for commercial or government surveillance would say they are being used by the AI rather than party to an agreement. The non-negotiable terms-of-service contract as a business practice is overdue for market-led reform.
Businesses, as a general rule, experience some concern about competition, so consumers can reserve a certain right to negotiate. Yet for the better part of 20 years, telecommunications and tech service providers have presented the consumer with a non-negotiable one-way street while enjoying, for the most part, immunity from civil liability. The consumer can leave the table or conform to the contract in front of them.
Users who are amenable to putting more of their information on the market may be able to negotiate better deals proportional to the exchange. Notice and consent, treated as negotiable clauses, can escalate into better trades when the company needs the consumer. If companies can resolve points of negotiation over what may end up being highly intimate data points, they might get better ROI from a few highly valuable research subjects. If a company develops UX for a new TOS agreement in which consumers actually have a choice over what information they deliver to the market, that agreement will be robust and it will be fair.
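To make the idea concrete, a negotiable TOS could treat each consent grant as a line item the consumer toggles, rather than a single all-or-nothing click-through. The sketch below is purely illustrative; the field names, categories, and credit figures are assumptions, not any company's actual terms.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentClause:
    data_category: str      # e.g. "location", "purchase_history"
    shared: bool            # the consumer's choice, not the vendor's default
    monthly_credit: float   # what the vendor offers in exchange (hypothetical)

@dataclass
class NegotiatedTerms:
    clauses: list = field(default_factory=list)

    def shared_categories(self) -> list:
        """Only the categories the consumer affirmatively opted into."""
        return [c.data_category for c in self.clauses if c.shared]

    def total_credit(self) -> float:
        """Value returned to the consumer for what they chose to share."""
        return sum(c.monthly_credit for c in self.clauses if c.shared)

# A consumer declines location and biometrics but trades purchase history.
terms = NegotiatedTerms(clauses=[
    ConsentClause("location", shared=False, monthly_credit=2.00),
    ConsentClause("purchase_history", shared=True, monthly_credit=3.50),
    ConsentClause("biometrics", shared=False, monthly_credit=10.00),
])

print(terms.shared_categories())  # -> ['purchase_history']
print(terms.total_credit())       # -> 3.5
```

The point of the structure is that the opt-in state lives with the consumer's choices, so withdrawing a clause later is a data update, not a renegotiation of the whole contract.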
But what happens when one party loses interest? Do you still have a deal? In data-retentive businesses, like Google, and social media properties, like MySpace, consumers cannot meaningfully withdraw consent for the use of their intellectual property or their data. These companies refuse US consumers access to data they rightfully own, then license that data to whomever makes sense to them.
Lawyers for these companies have begun telling cloud brokers that the companies own the data, not the consumers who produce it. It seems very tough for them to comply with consumer requests for access to their own data unless there is a lawyer present. Google has already lost to Europe over the right to be forgotten and now fights the State of Illinois to beat back biometric privacy protections. AI's prospects are increasingly eyed with rightful suspicion because of the near-total loss of consumer control over the data they own that is going to market now.
Things are further complicated when the government uses the same information services. If a behavioral advertising firm is also a government contractor, the consumer faces a genuinely unfair set of circumstances to which they never authentically consented in the first place. A different set of legal mandates is set in motion. Access rights promised in the original contract become undeliverable because the data changed hands, by force or by contract, laundering the information along the way.
As information security risk to businesses increases, that risk is passed directly to the consumer. If you enter an agreement with a technology service in which you absorb more personal risk than you did three years ago, you should have some leverage in the deal-making.
One of the pain points of AI data aggregation is the lack of notice about what a company takes to market about you, and the lack of choice over whether it can take that to market when it gets personal. De-identifying, anonymizing, or encrypting information taken to market would level the marketplace, so that all parties reduce their risk and share an ethical baseline for business exchange.
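One common de-identification technique is keyed pseudonymization: direct identifiers are replaced with a keyed hash before records leave the company, so a data buyer can still link records belonging to the same person without learning who that person is. The sketch below uses Python's standard `hmac` and `hashlib`; the field names, key, and record are illustrative assumptions, and real de-identification must also address quasi-identifiers (ZIP code, birth date, and the like), which this sketch deliberately does not.

```python
import hashlib
import hmac

# Illustrative only: a real key would be generated and guarded properly.
SECRET_KEY = b"held-by-the-data-owner-never-shared"

def pseudonymize(value: str) -> str:
    """Keyed hash: stable for the same input, unlinkable without the key."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

def de_identify(record: dict, direct_identifiers: set) -> dict:
    """Replace direct identifiers with pseudonyms; pass other fields through.
    (A sketch -- quasi-identifiers are out of scope here.)"""
    return {
        k: pseudonymize(v) if k in direct_identifiers else v
        for k, v in record.items()
    }

record = {"email": "alice@example.com", "zip": "60601", "purchase": "books"}
clean = de_identify(record, direct_identifiers={"email"})

print(clean["purchase"])              # non-identifying fields pass through
print(clean["email"] != record["email"])  # the identifier is replaced
```

Because the hash is keyed, only the data owner holding `SECRET_KEY` can re-link a pseudonym to a person, which is one way to give all parties the risk reduction the paragraph above describes.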
So... What are your Terms of Consent?
###
(COMING SOON: An Ethical Horizon for Artificial Intelligence (Part 4/4): Propriety & Accountability)