Technology At Work: New Challenges

Managers looking for a quick-fix that reduces the need for human employees are indulging in wishful thinking. And HR managers should point this out at every opportunity.


During the ongoing pandemic, it has been widely reported that companies will switch extensively to work-from-home arrangements and computer-based operations using AI, to counter the risk of infection to employees and the expense of social distancing at work. While working from home has its own limitations, discussed earlier in this column, machine-learning based AI, the most common variety, poses even bigger challenges.


A couple of years ago, an autonomous Uber car controlled by Artificial Intelligence (AI) ran over and killed a lady walking across a poorly-lit road with a bicycle loaded with plastic bags. The following detail is important. The car identified the combination of lady, bicycle and bags as an unknown object, because this combination had not been presented to the AI software when it was ‘trained’ with images and videos of what it might encounter on the roads. It did not know how to react. On coming closer, it recognised the bicycle. It had been trained to expect a bicycle to move out of the way at a certain speed, so it slowed down to avoid the collision. It had not been trained to deal with a bicycle being pulled along by a walking human being, so this speed reduction proved insufficient. Just before the collision, the car identified the human being behind the pile of bags. By then, it was too late.


Common AI software, of the machine learning sort, is ‘trained’ by presenting it with patterns of inputs that it is expected to handle, with classifications or labels, so that in the future, when working autonomously, it can recognise similar patterns and act appropriately. Depending on the intended application, these training inputs could consist of the winning and losing positions of pieces on a chessboard, X-Ray images of lungs of healthy and unhealthy people, or resumes of suitable and unsuitable candidates for a vacancy. In other words, the quality of the software depends entirely on the quantity, variety, and the labelling of the training inputs. Once trained, it can work round the clock at great speed, and not make any mistakes induced by fatigue or inattention.
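For readers curious about the mechanics, the ‘training’ described above can be sketched in a few lines of code. This is a deliberately simplified, hypothetical illustration (a one-nearest-neighbour classifier with invented numbers), not the software used by any company mentioned here. Note that the model can only echo patterns resembling what it was shown:

```python
# Minimal sketch of supervised 'training': a 1-nearest-neighbour
# classifier that recognises only what resembles its training data.
# All feature vectors and labels below are invented for illustration.

def train(examples):
    """'Training' here is simply storing labelled feature vectors."""
    return list(examples)

def classify(model, item):
    """Label a new item with the label of its closest training example."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda ex: distance(ex[0], item))[1]

# Labelled training inputs: (features, label)
training = [((0.1, 0.2), "healthy"), ((0.9, 0.8), "unhealthy")]
model = train(training)

print(classify(model, (0.15, 0.25)))  # near a known pattern -> "healthy"
print(classify(model, (0.95, 0.75)))  # near the other pattern -> "unhealthy"
```

A pattern unlike anything in `training` would still be forced into one of the two known labels, which is exactly the failure mode the Uber example illustrates.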



When human beings compile training inputs for AI, they go by the records of past situations and how they were handled. This way they transfer their biases without necessarily meaning to do so.



For instance, when resumes are sifted to predict success at work, the AI engine will rank male candidates from certain regions and linguistic groups at the top if only such people have been doing well in that organisation in the past. The best female candidates will be ranked lower because their resumes do not match the company’s current templates of success. The only way this can be corrected is by constructing hypothetical resumes of ‘successful’ candidates drawn from groups not currently present in the company, which immediately raises the complexity and cost of the training phase.


The current pandemic has exposed another weakness of machine-learning based AI trained on inputs drawn from routine everyday business. For instance, in the US, there was an abnormally high volume of demand for certain products like toilet-paper rolls, hand sanitizers and masks. The AI systems did not know what to make of these spikes, and either flagged the orders as errors or let them disrupt the predictive inventory management system. Human beings had to intervene to set matters right. Indian companies that have been adopting AI in business operations face similar challenges; a supplier of sauces and condiments here struggled to handle the demand surge at the start of the pandemic.


Yet another challenge comes from the unintended inclusion of items in the training inputs. An AI system trained to recognise cancerous lesions in lung X-Ray images was found to behave erratically, ‘clearing’ obviously ill patients. On analysis, it was found that in the X-Ray scans used to train it, most of the images of cancerous lungs had come from a particular hospital whose X-Ray machine placed a mark in the corner, identifying the company that had made the scanner. The AI software had learnt to associate that mark with cancerous lungs. If the mark was absent, the image was classified as one from a healthy lung.
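The scanner-mark episode can be shown with a toy example. The ‘one-rule’ learner and the data below are invented purely for illustration; they demonstrate only how a skewed training set lets an irrelevant feature become the deciding one:

```python
# Toy illustration of a spurious shortcut: a one-rule learner picks the
# single feature that best matches the training labels. Data is invented.

def best_single_feature(examples):
    """Return the index of the feature that best predicts the label."""
    n_features = len(examples[0][0])
    def accuracy(i):
        hits = sum(1 for feats, label in examples if feats[i] == label)
        return hits / len(examples)
    return max(range(n_features), key=accuracy)

# features = (has_scanner_mark, has_visible_lesion); label = 1 if cancerous.
# In this skewed training set the scanner mark tracks the label perfectly,
# while the genuinely medical feature is noisier.
training = [
    ((1, 1), 1), ((1, 1), 1), ((1, 0), 1),   # cancerous scans, all marked
    ((0, 0), 0), ((0, 0), 0), ((0, 1), 0),   # healthy scans, all unmarked
]

shortcut = best_single_feature(training)
print(shortcut)  # prints 0: the learner keys on the mark, not the lesion
```

Once trained this way, a healthy lung scanned on the marked machine would be ‘diagnosed’ as cancerous, and a cancerous lung from any other machine would be cleared.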


Adopting AI requires a skilled and expensive workforce and large, expensive computing resources. That in itself is a deterrent to adoption. Now that other limitations are getting exposed, the corrections will require additional effort and expense. As the abnormal consumer behaviour during the pandemic has shown, no amount of training can truly ensure that the AI software will respond appropriately to unusual situations, like a person who buys one bottle of sanitizer every month suddenly placing an order for twenty bottles.


The dependence on complex technology solutions also increases the risk of disruption by malicious hackers in many ways. One very popular route is ransomware, which denies access to the organisation’s computing resources unless money is paid to the attackers. This carries significant costs, as even a casual survey of the literature on past attacks will confirm. The costs made public are just the tip of the iceberg: most companies play down the impact of data theft and the amounts paid to ransomware attackers, to prevent loss of confidence among their consumers and investors. In just one three-day period at the end of June, NHAI was hit by ransomware in India, as was the University of California in the US. It is not known what the attack cost NHAI, but the university ended up paying more than a million dollars to free its computers.


Since AI solutions involve significant expense, companies deploying them will draw the attention of hackers, who will use AI deployment as a basis for estimating the capacity to pay a ransom. Preventing such attacks comes at a cost, because it involves hiring skilled staff and signing up expensive support from service providers. With a larger-than-before share of employees working from home, where their systems are physically accessible to others and where the Wi-Fi environment may not be as secure as inside company premises, the challenge of malware prevention multiplies manifold.


Does this mean that technology deployment will pause during or after the pandemic? Certainly not. It does mean though that managers looking for a quick-fix that reduces the need for human employees are indulging in wishful thinking. HR managers should point this out at every opportunity. There is no getting away from right hiring, fair compensation, regular training, and thoughtful care for building a community of engaged and productive employees.




1. “Our Weird Behavior During The Pandemic Is Messing With AI Models”; Will Douglas Heaven; MIT Technology Review; May 11, 2020

2. “California University Paid $1.14 Million After Ransomware Attack”; Kartikay Mehrotra; Bloomberg Cybersecurity; June 27, 2020

3. “AI Techniques In Medical Imaging May Lead To Incorrect Diagnosis”; University of Cambridge; Science Daily; May 12, 2020

4. “Uber’s Self-driving Car Saw The Woman It Killed”; A. Marshall & A. Davies; Wired; May 24, 2018

5. “Cyber Attack on NHAI Email Servers, No Data Loss”; PTI; Economic Times; June 30, 2020






Gautam Brahma is a management consultant who advises start-ups and SMEs on strategy and operations, including sales, HR and IT. He has over four decades of experience in the public, private and non-profit sectors in the telecommunications and IT industries. He has been an invited speaker at multiple industry forums and a monthly columnist on HR issues for nearly two decades. Gautam is based out of Gurgaon and can be reached at

