60% of managers use AI to make decisions now, including whom to promote and fire – does yours?


A recent survey from Resume Builder finds that a majority of managers are using AI to make crucial decisions about their direct reports, including which employees are promoted and which are fired.

The survey polled 1,342 managers in the US, 60% of whom reported relying on AI to make decisions about their employees. Of those AI users, 78% used the technology to determine raises and 77% to award promotions, while 66% used it to decide layoffs and 64% to decide terminations.


More than 20% of managers “frequently let AI make final decisions without human input,” though most also said they would step in if AI offered a recommendation they disagreed with. 

Specifically, managers reported using AI tools for a range of tasks related to their direct reports, including making training material and employee development plans. Although 91% reported using the technology to assess their reports’ performance, Resume Builder’s survey questions did not clarify what these assessments entail. 

Nearly half (46%) of the managers surveyed were also “tasked with assessing if AI can replace their reports,” Resume Builder noted. Of those, 57% found AI could take over a position, and 43% went ahead and replaced a human role with AI. Resume Builder did not provide details on what kinds of positions managers reported replacing. 

When it comes to which AI tools are most popular among managers, those surveyed cited the usual suspects: 53% use ChatGPT most often, while 29% opt for Microsoft Copilot. Gemini had about 16% of the vote, and the remaining 3% of managers use another tool. 


The survey also noted that two-thirds of those who use AI to manage their direct reports lack formal AI training. However, given how rapidly AI tools have entered workplaces, there are no agreed-upon standards for what adequate training even is — a problem exacerbated by an ongoing lack of regulation. 

The report’s authors warned about the risks of using AI blindly.

“While AI can support data-driven insights, it lacks context, empathy, and judgment,” Resume Builder’s chief career advisor Stacie Haller said in the report. “Organizations have a responsibility to implement AI ethically to avoid legal liability, protect their culture, and maintain trust among employees.” 

Still, what “ethical” implementation means remains opaque. Resume Builder did not include any guidelines for what this looks like, nor did the survey ask managers to report their own definitions or instincts on where using AI to manage is more or less appropriate. 


“Ethical usage of AI in management would need radical transparency for the employees, giving them a voice in the decision-making of what system should be used and showing exactly why — and most importantly, how — they are evaluated,” Hilke Schellmann, AI expert and author of “The Algorithm,” told ZDNET. 

Schellmann added that employees should have a way to appeal decisions made by algorithms, especially when they can be as consequential as a layoff. “Honestly, the best way to use AI in management is to use algorithms that help employees — and are not accessible to management,” she added, “but there seems to be no appetite for that, at least in the US.”  

Ethics and controversy 

AI tools have become common in hiring and other HR functions. Resume Builder’s survey found that most managers reported being encouraged by their company to manage reports with AI, most commonly to improve efficiency, reduce overhead, and evaluate data more quickly. 


But as many critics have pointed out, these sensitive use cases are where AI’s biases can be most damaging. In 2021, New York City passed Local Law 144 to address AI bias. One of the first laws of its kind, it requires automated employment decision tools (AEDTs) to be routinely audited for bias — at least once a year while in use — and the results of that audit to be published. 
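The audits Local Law 144 requires center on “impact ratios”: each demographic group’s selection rate divided by the rate of the most-selected group. A minimal sketch of that calculation, with group names and numbers invented purely for illustration:

```python
# Sketch of the "impact ratio" metric that Local Law 144 bias audits report:
# each group's selection rate divided by the most-selected group's rate.
# Group names and rates below are invented for illustration.

def impact_ratios(selection_rates: dict[str, float]) -> dict[str, float]:
    """Ratio of each group's selection rate to the highest group's rate."""
    top = max(selection_rates.values())
    return {group: rate / top for group, rate in selection_rates.items()}

# Invented example: fraction of screened applicants the tool advanced, by group.
rates = {"group_a": 0.40, "group_b": 0.30, "group_c": 0.20}
for group, ratio in impact_ratios(rates).items():
    print(f"{group}: {ratio:.2f}")
```

A ratio well below 1.0 for any group flags the tool for closer scrutiny; the “four-fifths rule” from longstanding US employment guidance treats ratios under 0.8 as a warning sign of adverse impact.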

However, the law has been criticized for defining AEDTs too narrowly, which has let enforcement lapse and allowed many companies to sidestep compliance. 

Without explicit worker protections or mandated avenues for employees to appeal outcomes, AI use in personnel decisions is essentially at the discretion of individual companies. Regulators could address this use case by creating more robust transparency requirements and processes that companies must follow when using AI tools in ways that could affect employees. 

Privacy concerns

One 2023 paper from the Society for Human Resource Management (SHRM) notes that employees should have a right to know when and how AI is being used, ask questions, and opt out where applicable — something Local Law 144 also requires, specifically in hiring. It’s unclear how many of the managers surveyed, if any, have made their reports aware of their AI use.


The Resume Builder survey did not ask what information managers shared with AI tools about their direct reports. If managers share performance details, salaries, and other potentially sensitive data with chatbots, especially without employees’ consent, they could be creating a serious privacy problem that employees have no control over. 

How employees can advocate for themselves 

What can employees do if they are concerned about AI-generated decisions affecting the future of their role? It depends on how AI is being used on them. 


“We see AI being used for more and more surveillance of employees, starting with hourly workers to white collar ones,” Schellmann pointed out. “I would suggest that workers band together and work with their unions and write in their bargaining agreements that surveillance technology has to be disclosed and needs co-decision making with representatives of the union.”

Beyond surveillance, employees should ask their managers, where applicable, for transparency on how AI tools are being used — although norms around feedback and how managers come to conclusions even without AI tools may make that difficult to navigate. 
