[Header image: Teresa Harmon, vice president and general manager, legal news]

The policy was announced after an executive accused the newsroom of bias in its Trump administration coverage.

A new policy at Law360, the legal news service owned by LexisNexis, requires that every story pass through an AI-powered “bias” detection tool before publication.

The Law360 Union, which represents over 200 editorial staffers across the 350-person newsroom, has denounced the mandate since it went into effect in mid-May. On June 17, unit chair Hailey Konnath sent a petition to management calling for the tool to be made “completely voluntary.”

“As journalists, we should be trusted to select our own tools of the trade to do our information-gathering, reporting and editing — not pressured to use unproven technology against our will,” reads the petition, which was signed by over 90% of the union.

Law360 currently reaches over 2.8 million daily newsletter subscribers with breaking legal news and analysis. At the end of last year, the newsroom began experimenting with a suite of AI tools built in-house by LexisNexis to streamline story production. One of those tools analyzes the overall “bias” of article drafts and picks out lines of copy that should be edited to sound more “impartial.”

Use of the tool, later known as the “bias indicator,” was voluntary until May 15. That’s when editor-in-chief Anne Urda notified the Law360 Union in an email that, moving forward, the use of AI tools was mandatory for all stories, particularly for use cases like “applying a neutral voice to copy.” She also named several other mandatory use cases, including headline drafting, story tagging, and “article refinement and editing.” In an email sent to editorial staff the following day, Urda said leadership was “exploring how to increase usage” of its AI tools through the mandate, but otherwise provided no explanation for the policy change.

The announcement came a few weeks after an executive at Law360’s parent company accused the newsroom of liberal political bias in its coverage of the Trump administration. At an April town hall meeting, Teresa Harmon, vice president of legal news at LexisNexis, cited unspecified reader complaints as evidence of editorial bias. She also criticized the headline of a March 28 story — “DOGE officials arrive at SEC with unclear agenda” — as an example. In the same town hall, Harmon suggested that the still experimental bias indicator might be an effective solution to this problem, according to two employees in attendance.

It’s unclear if Harmon had any direct role in implementing the subsequent AI mandate. Urda told multiple editorial staffers that the decision came from “above her.”

Your reporting “may suggest a perspective”

On June 12, a federal judge ruled that the Trump administration’s decision to deploy the National Guard in Los Angeles in response to anti-ICE protests was illegal. Law360 reporters were on the breaking story, publishing a news article just hours after the ruling (which has since been appealed). Under Law360’s new mandate, though, the story first had to pass through the bias indicator.

Several sentences in the story were flagged as biased, including this one: “It’s the first time in 60 years that a president has mobilized a state’s National Guard without receiving a request to do so from the state’s governor.” According to the bias indicator, this sentence is “framing the action as unprecedented in a way that might subtly critique the administration.” It was best to give more context to “balance the tone.”

Another line was flagged for suggesting Judge Charles Breyer had “pushed back” against the federal government in his ruling, an opinion that had called the president’s deployment of the National Guard the act of “a monarchist.” Rather than “pushed back,” the bias indicator suggested a milder word, like “disagreed.”

The National Guard story is just one of hundreds of daily stories published by Law360 that are now expected to go through similar AI audits. I reviewed nearly a dozen examples of changes suggested on real Law360 stories. Editorial employees told me these examples were symptomatic of problems they faced in their everyday use of the bias indicator.


Often the bias indicator suggests softening critical statements and tries to flatten language that describes real world conflict or debates. One of the most common problems is a failure to differentiate between quotes and straight news copy. It frequently flags statements from experts as biased and treats quotes as evidence of partiality.

For a June 5 story covering the recent Supreme Court ruling on a workplace discrimination lawsuit, the bias indicator flagged a sentence describing experts who said the ruling came “at a key time in U.S. employment law.” The problem was that this copy “may suggest a perspective.”

For a May 29 story covering a disability and sex discrimination lawsuit filed by an anesthesiologist, the bias indicator flagged a line that said the suit spotlighted challenges with ableism and sexism in the healthcare industry. This copy was flagged because it “frames the lawsuit as a representative example of systemic issues.” Instead, the bias indicator said the story should “state the facts of the lawsuit without suggesting its broader implications.” Suffice it to say, that edit is at odds with any attempt to deliver legal analysis. In most cases, reporters chose not to accept these edits, but they were still required to go through the motions.

A recipe for innovation

Beyond problems with the bias indicator’s outputs, editorial staffers told me there’s been a lack of clarity from management about who in the editorial pipeline is responsible for using the tool and what the consequences will be for employees who refuse. Urda told union members that management is able to monitor individual employees’ usage and reserves the right to take disciplinary action.

“Forcing journalists to use a tool on threat of discipline, with few formal guidelines and under constant surveillance, is not a recipe for innovation,” said Abraham Gross, a senior reporter and co-chair of the union’s AI subcommittee.
