
As nearly everyone involved with AI development agrees, AI poses serious and potentially existential risks, both known and unknown. To mitigate those risks, tech giants have established divisions or oversight groups dedicated to anticipating potential misuse and guiding new safety features. Microsoft recently laid off the ethics and society team within its AI organization, according to reporting by Platformer. While the company still maintains an Office of Responsible AI, many have raised doubts about its efficacy.

What Caused the Layoffs?

At a time of widespread public mistrust in big tech, companies have taken steps to safeguard users and communities. Many now employ teams dedicated to responsible AI development when building new tools or features. But as layoffs have spread through the industry in recent months, such teams have been cut back or disbanded entirely, an indication that profit may come before ethics when product decisions are made.

Microsoft made an especially significant change by disbanding its entire ethics and society team during a mass layoff of 10,000 employees. Although Microsoft still has an Office of Responsible AI to draft guidelines and policies, the ethics and society staff were responsible for making sure those rules were actually observed during product design, and for maintaining a dialogue between Microsoft's AI teams and the people who use its technologies. With only seven members remaining before it was axed, sources told Platformer they felt Microsoft was more concerned with getting its AI tools into customers' hands than with addressing the potential harms those tools might cause.

Twitter also made headlines when it fired the entire team tasked with making its algorithms fairer and more transparent, a group led by Rumman Chowdhury, an esteemed algorithmic ethics specialist. These workers had reportedly felt pressure from company leadership to deliver AI solutions faster, amid frustration over delays.

At the same time, several large tech companies have shrunk the teams responsible for monitoring how customers use their AI tools. The trend is an increasing source of concern as more firms rush to develop virtual and augmented reality tools with the potential to displace a wide array of jobs; a recent report by the World Economic Forum (WEF) warned that AI-induced disruption could cause unemployment spikes worldwide, particularly among lower-income communities that cannot afford retraining for other industries.

Amazon

Layoffs are devastating experiences, and when artificial intelligence (AI) software plays a part in deciding them, the frustration only grows. HR managers have used machine learning tools for years to assess employee performance, and the prospect of AI acting as the final judge is entirely realistic, according to The Washington Post. AI can also be used to estimate attrition risk among existing workers, flagging which employees are more likely to leave than others, sometimes before any layoff notice goes out. HR leaders should double-check all available data before making a final decision.
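To make the attrition-risk idea above concrete, here is a minimal sketch of the kind of model such HR tools are built around, written in Python. It is an illustration under assumed inputs: the file employee_history.csv, the column names, and the test split are hypothetical choices for this example, not a description of any vendor's actual product.

    # A toy attrition-risk model, for illustration only. The file, columns,
    # and features below are assumptions, not taken from any real HR system.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Assumed historical data: one row per past or present employee, with a
    # label 'left_company' (1 = the employee departed).
    df = pd.read_csv("employee_history.csv")
    features = ["tenure_years", "salary_band", "last_review_score", "promotions"]

    X_train, X_test, y_train, y_test = train_test_split(
        df[features], df["left_company"], test_size=0.2, random_state=0
    )

    # A plain logistic regression stands in for whatever model a vendor ships.
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))

    # The model outputs a probability of leaving for each employee, which an
    # HR dashboard might surface as an "attrition risk" score.
    df["attrition_risk"] = model.predict_proba(df[features])[:, 1]
    print(df.sort_values("attrition_risk", ascending=False).head())

Even a toy version like this shows why the caution above matters: the score depends entirely on which historical features the employer chooses to feed it, and it says nothing about why someone might leave.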
One of the more contentious applications of AI is its potential to replace human jobs, particularly in ecommerce and fulfillment centers. According to a Goldman Sachs report, over 300 million jobs worldwide could be affected by automation and artificial intelligence. As the technology develops, it is expected to replace many entry-level jobs, such as ticket clerks, cashiers, and data-entry workers, and it could eventually encroach on middle-skill roles like customer service and even professional roles like architects and doctors.

The Microsoft and Twitter layoffs may have been driven by pressure from investors, but the cuts to AI ethics teams are particularly alarming. These teams are responsible for identifying the potential risks and harms of AI products such as OpenAI's ChatGPT and for raising them in design meetings with product teams and executives. Their disappearance through restructuring, much like Google's firing of ethical AI researcher Timnit Gebru in 2020, suggests that tech companies prioritize profits over how these technologies affect real lives. Though rumors about AI-related layoffs may be exaggerated, they demonstrate how far companies will go to meet Wall Street's demands for efficiency. Twitter and Microsoft both underwent major reorganizations that were not specifically about AI, but the cuts to trust and safety and AI ethics teams suggest those departments are now treated as cost centers.

Microsoft

Microsoft's decision to downsize its responsible AI team raises serious doubts about its commitment to developing technology in a way that is socially and environmentally responsible and sustainable. The team was responsible for assessing the potential risks of new technologies before product development began, and its ranks had already shrunk from roughly 30 people in 2020 to seven after a reorganization. Employees told Platformer they believed the decision came down to pressure from the CEO and CTO to get AI tools into customers' hands as quickly as possible.

Microsoft is racing to integrate its controversial generative AI technology, which is trained on existing content without permission, across its product portfolio. The company is also embedding newer machine learning techniques, such as large language models and deep reinforcement learning, that raise ethical concerns in many kinds of use, and it has invested in companies like OpenAI whose tools have already prompted concerns about bias and misinformation in chatbots.

Tech companies typically employ teams dedicated to ensuring that their products are secure, ethical, and user-friendly. These groups help product teams anticipate issues and make changes before problems have a real-world impact. Unfortunately, their priorities often conflict with those of teams that must ship products quickly in a competitive race against Google. Twitter recently laid off employees focused on security and privacy, while Amazon-owned Twitch cut staff working on responsible AI and trust and safety. These moves suggest that even the industry's most renowned firms are struggling to balance long-term ethical goals against short-term competition in a fast-moving market. Whatever the cause, these AI layoffs should serve as a wake-up call to everyone involved in developing this emerging technology.
The impact of these technologies could be especially damaging in countries already struggling with poverty and inequality, so it is imperative that governments, businesses, and the tech industry work together to ensure the benefits of AI reach everyone rather than only those who can afford them.

Twitter

Tech firms frequently boast of their commitment to responsible AI development while prioritizing profits. When budgets shrink, companies are tempted to cut the teams that examine the ethical and social implications of their AI work; this has happened at Facebook parent Meta, Amazon, and Twitter. Twitter eliminated its entire AI ethics team after Elon Musk took over as CEO last year, and former employees have described their shock at the decision.

Platformer reports that Microsoft's ethics and society team numbered about 30 employees before a major reorganization last year reduced it to seven people. In a meeting with the team after that reorganization, John Montgomery, Microsoft's corporate vice president for AI, said there was immense pressure from company leadership to get the newest AI models, and the versions that follow them, into customers' hands as quickly as possible. Members of the team believed leadership dismissed them because it was more intent on getting products to market than on making sure those products were ethical.

Critics are also concerned about the influx of AI-powered tools that automate human tasks; many fear these systems will replace workers and create new forms of inequality. AI can make biased decisions that lead to unfair treatment or discrimination, especially when its algorithms are trained on data that contains bias or was collected improperly, as a TIME investigation into chatbots developed by OpenAI reportedly found with respect to output biased against women and minorities. (A simple version of the kind of bias check a responsible AI team would run is sketched at the end of this article.) As these technologies advance, their effects will likely be profound. Big tech's only defense against that impact lies in responsible AI teams, but that defense becomes harder to mount as the companies that rely on such tools cut investment and staff to save money.
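Finally, to illustrate the bias check referenced above, here is a minimal sketch of a disparate-impact audit that a responsible AI team might run over a model's decisions, again in Python. The file model_decisions.csv, the column names, and the 80 percent rule-of-thumb threshold are assumptions made for the example, not a description of any company's actual review process.

    # A toy disparate-impact check, for illustration only.
    import pandas as pd

    # Assumed data: one row per decision, with a protected attribute 'group'
    # and the model's binary outcome 'approved' (1 = favorable decision).
    decisions = pd.read_csv("model_decisions.csv")

    # Favorable-outcome rate per group, and each group's rate relative to the
    # best-treated group (the disparate-impact ratio).
    rates = decisions.groupby("group")["approved"].mean()
    impact_ratio = rates / rates.max()
    print(impact_ratio)

    # A common rule of thumb flags ratios below 0.8 for human review; skewed or
    # improperly collected training data is a frequent cause of such gaps.
    flagged = impact_ratio[impact_ratio < 0.8]
    if not flagged.empty:
        print("Groups needing review:", list(flagged.index))

A check like this does not prove a model is fair, but it surfaces disparities early enough for the kind of human review that the disbanded teams existed to provide.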