While that may be a great way to sell more candy bars, it may not be so good for business strategy.
This week’s bits are a little gloomier than usual. AI is susceptible to hacking; blockchain is a little bogus; algorithms drive things toward mediocrity. The tutorials are thicker but more uplifting.
And still, there is a void where the discussion of ethics should be. From here, the ethics questions look like:
- Bias: understanding the technology’s implicit bias, managing the effect of real-time data
- Privacy: building as if people owned their data
- Interface Design: moving from clarity to conversation
- Dehumanization: the impact of being treated like an algorithm
- Shallow Models: we can’t predict the behavior of bugs, let alone people or organizations
- Accuracy/Ability to Correct: when data is ‘public’, its owners may not have adequate control
- Decision Speed: faster data requires slower thinking
- Hype-Reality Disconnect: be careful what you believe…it isn’t really intelligence
- Domain expertise: models need more expert input
- Liability: who is responsible for the mistakes?
- Causation/Correlation: AI output is probabilistic
- Data Literacy: understanding data sources and quality is an essential part of evaluating AI output
- Appropriate Use: where is the use of AI warranted? where should it be avoided?
If you see something we should be covering, let me know.
- Why Artificial Intelligence Researchers Should Be More Paranoid. Short piece from Wired on the importance of paying attention to ‘hackability’. Imagine a slowly unfolding rogue takeover of a company’s senior management ranks driven by a hack of the HR system. Imagine a competitor rearranging the performance management scores. Imagine a tool that slices small bits out of the merit increase pool.
- Style Is an Algorithm: No one is original anymore, not even you. “Amazon’s Echo Look, currently available by invitation only but also on eBay, allows you to take hands-free selfies and evaluate your fashion choices. “Now Alexa helps you look your best,” the product description promises. Stand in front of the camera, take photos of two different outfits with the Echo Look, and then select the best ones on your phone’s Echo Look app. Within about a minute, Alexa will tell you which set of clothes looks better, processed by style-analyzing algorithms and some assistance from humans.”
- The blockchain is not only crappy technology but a bad vision for the future. Read this before you jump on the ‘all credentials should be permanently housed in a blockchain application’ bandwagon.
- The U.S. is way behind other nations on workers’ readiness for jobs of the future, report says. From the LA Times. The difference between industrial winners and losers will be their emphasis on keeping the workforce current.
- The Workplace is Killing People and No One Cares. AI will start to be really useful when it is focused on measuring and reducing workplace-caused chronic illness.
- “WTH does a neural network even learn?” — A newcomer’s dilemma. Long but worthy. If you want to understand what really happens inside a neural network, patiently wade through this. It will take about 20 minutes of work.
- The Origins of Artificial Intelligence. A worthy read by a pioneer. Good reminders of how far we have yet to go. “we can still not computationally simulate the behavior of the simplest creature that has been studied at length. That is the tiny worm C. elegans, which has 959 cells total of which 302 are neurons. We know its complete connectome (and even its 56 glial cells), but still we can’t simulate how they produce much of its behaviors.”
Quote of the Week
“So what, exactly, did the algorithm “learn” about the process of dying? And what, in turn, can it teach oncologists? Here is the strange rub of such a deep learning system: It learns, but it cannot tell us why it has learned; it assigns probabilities, but it cannot easily express the reasoning behind the assignment. Like a child who learns to ride a bicycle by trial and error and, asked to articulate the rules that enable bicycle riding, simply shrugs her shoulders and sails away, the algorithm looks vacantly at us when we ask, “Why?” It is, like death, another black box.”
Curate means a variety of things: from the work of a vicar entrusted with the care of souls to that of an exhibit designer responsible for clarity and meaning. At its core, it means something about the importance of empathy in organization. HRIntelligencer is an update on the comings and goings in the Human Resource experiment with Artificial Intelligence, Digital Employees, Algorithms, Machine Learning, Big Data, and all of that stuff. We present a few critical links with some explanation. The goal is to give you a way to surf the rapidly evolving field without drowning in information. We offer a timely curation of the intersection of HR and the machines that serve it. We curate the emergence of Machine Led Decision Making in HR.