A popular book app’s AI has been scrapped after being deemed ‘bigoted and racist’.
Fable, a social media app for book enthusiasts, used an AI to create a Spotify-like ‘wrapped’ experience, summarising users’ reading habits throughout the year.
However, outraged readers soon complained that the feature, designed to offer a ‘playful roast’, was lashing out with racist putdowns.
One user was shocked when the app told them to ‘surface for the occasional white author’ after spending the year reading ‘Black narratives and transformative tales’.
Another was slammed by their AI summary as a ‘diversity devotee’, with the app questioning whether they were ‘ever in the mood for a straight, cis white man’s perspective’.
Meanwhile, a user who had been reading books about people with disabilities was told their choices could ‘earn an eye-roll from a sloth’.
In an update on Instagram, Fable says it is now pulling all features that use AI and introducing new safeguards into the app.
But that has not stopped users from flocking to social media with calls to delete the book app altogether.
The Fable ‘Reading Wrap’ was introduced in 2023 to provide a monthly summary of users’ reading habits.
For the 2024 end-of-year edition, the app also added an AI-generated summary which the company said would ‘celebrate everybody’s uniqueness’.
However, things took a dark turn when book influencer Tiana Trammell shared a screenshot of her wrap which criticised her for not reading enough white authors.
In a post on Threads, Ms Trammell wrote: ‘I typically enjoy my reader summaries but this particular one is not sitting well with me at all’.
As her post went viral, Fable’s official account responded saying that the issue was being looked into.
Fable wrote: ‘This never should have happened. Our reader summaries are intended to both capture your reading history and playfully roast your taste, but they should never, ever, include references to race, sexuality, religion, nationality, or other protected classes.
‘We take this very seriously, apologize deeply, and appreciate you holding us accountable.’
Yet Ms Trammell was soon joined by other users who found their end-of-year reviews insulting.
Writer Danny Groves took to social media after the AI branded him as a ‘diversity devotee’ who wasn’t interested in a ‘straight, cis white man’s perspective’.
In a post on Threads, Mr Groves wrote: ‘My reader summaries have been spicy before but never quite like the one that’s being spread. It was pretty disorienting when I first read it.
‘Now I’m reflecting on others and wondering if there was a change to the algorithm that created my recent one, or if it’s always been heading this way.’
Likewise, another user shared a screenshot of a review which said that their taste in books had ‘set the bar for my cringe meter’.
Taking to Threads to complain, the user wrote: ‘Hey @fable, I KNOW there’s worse out there, but can you also tell your AI to stop insulting the work of romance authors & not continue to put down the romance genre by calling it cringe?’
Following the posts, Chris Gallello, Fable’s head of product, addressed this issue in a video.
Although he did not specifically refer to any examples, Mr Gallello said that Fable had begun to receive complaints about ‘very bigoted racist language’.
Mr Gallello added: ‘That was shocking to us because it’s one of our most beloved features, so many people in our community love this feature and to know that this feature has generated language is a sad situation.’
Mr Gallello explained that the feature worked by feeding the AI summaries of the books a user had most recently finished and asking it to generate a few 'playful' sentences describing their reading habits.
Initially, Mr Gallello maintained that the AI features would remain in place with the addition of new safeguards.
However, in a follow-up post, the company confirmed that it would be removing all features which use AI.
That includes the reader stats feature, the 2024 reading wrap, and the ‘shelfie’ which generated a personalised bookshelf based on a reader’s eight favourite books or TV shows.
Fable also held a Town Hall meeting for the app’s users to discuss what measures could be taken to restore trust in its services.
While many applauded Fable's openness and transparency on the issue, some angry users have already taken to social media with calls to delete the app.
One outraged commenter wrote: ‘Just saw what fable did and oh my GOD immediately deleted that app wtf ew.’
‘I only installed fable today should I delete it?’ asked another.
While another added: 'I downloaded fable so I could stop using Goodreads and the very next day the AI drama came out and it was swiftly deleted.'
Likewise, Ms Trammell wrote in a post on Threads: ‘After the public statement of @fable, I’ve deactivated my account and deleted the application.
'Their commitment is to AI usage rather than ensuring the safety of their users within the platform.'
Kimberly Marsh Allee, Fable’s head of community, told MailOnline: ‘Last week, we discovered that two of our community members received AI generated reader summaries that are completely unacceptable to us as a company and do not reflect our values.
‘We take full ownership for this mistake and are deeply sorry that this happened at all. We have heard feedback from enough members of our community on the use of generative AI for certain features that we have made the decision to remove these features from our platform effective immediately.’
This is not the first time that a popular AI has been accused of racism.
In April last year, Google’s Gemini AI faced accusations of being racist towards white people after it refused to create images of Caucasian individuals.
The tool used simple text prompts to create artificial images within seconds but refused to make any images of white people as Popes, Vikings, or country music fans.
In one particularly egregious example, the AI generated an image of what appeared to be a black Nazi soldier after being asked to show a German WWII soldier.
Google was forced to temporarily shut down the image generation tool while engineers addressed the ‘completely unacceptable’ errors.
In a rare public statement at the time, Google co-founder Sergey Brin said: ‘We definitely messed up.’
Similarly, Meta’s AI image generator was slammed for its ‘racist’ biases after users found that the bot was unable to show an Asian man with a white woman.
No matter how the prompts were phrased, the bot refused to show mixed-race couples featuring Asian men.
It is believed that these errors stemmed either from a lack of mixed-race couples in the training data or from a failed attempt to over-correct against racist portrayals.