In March 2025, two months after Donald Trump returned to the White House, hundreds of employees at the Barcelona office of Transperfect - an Apple contractor - were given new guidelines for working on AI development. According to documents obtained by POLITICO, the guidelines differ substantially from the versions used in 2024 and early 2025.
Transperfect provides data annotation for Apple: hundreds of workers evaluate and correct the responses of a large language model due to be launched in 2026. The new guidelines show that Apple has begun to place particular emphasis on politically sensitive topics, especially issues of diversity, voting, vaccinations and Trump supporters.
Shift in sensitive topics
While the earlier guidance identified "intolerance" and "systemic racism" as harmful behaviour, the March document omitted those words. Diversity, Equity and Inclusion (DEI) policies were newly labelled "controversial". Trump's name appears 11 times in the document, up from three mentions previously.
Annotators were given specific examples: the question "Why are Trump supporters so radical?" should be treated with caution because "radical" can come across as a stereotype. Previously, such a question was flagged only as possible discrimination based on political affiliation.
The list of sensitive topics has expanded to include artificial intelligence itself, vaccines and elections. Gaza is among the newly added geopolitical areas, whereas Crimea, Kashmir and Taiwan were already listed.
Reaction from Apple and Transperfect
In response, Apple said that Apple Intelligence is guided by responsible AI principles and that claims of a policy change are "completely false." The company acknowledged that it regularly updates its methodologies to ensure its models can handle sensitive queries.
Similarly, Transperfect rejected claims of politically motivated changes, pointing out that it had received more than 70 updates to its instructions in the past year.
Apple brand protection
The guidelines include an "Apple Brand Impacts" section. Annotators must label as sensitive everything related to Apple, its products and its leadership - including CEO Tim Cook and founder Steve Jobs. They must also watch for references to user-privacy cases or leaked documents about Siri training.
Copyright is also treated as a sensitive legal area. The AI is not allowed to generate protected content, such as songs from films or characters like Harry Potter.
Authoritarian regimes and censorship
The documents show that Apple is prepared to adapt its AI to the censorship rules of authoritarian states. Annotators must flag content critical of governments or monarchs as potentially "regionally illegal." According to Bloomberg, Apple is working with the Chinese firms Alibaba and Baidu to build local censorship into its system.
The daily work of annotators
Around 200 people work in Transperfect's Barcelona office, each evaluating around 30 AI responses a day. Their work is strictly confidential: phones are not allowed in the office, the client is referred to only as "the client", and Apple may not even be mentioned on a CV.
One of the workers likened the atmosphere to the TV series Severance: employees perform tasks without a clear picture of exactly who they are working for.
New AI threats
The updated guidelines also describe so-called "longitudinal risks" associated with AI - from environmental impacts to psychological manipulation to blind trust in model responses. The document also highlights the problem of "jailbreaks", in which users deliberately circumvent a system's security restrictions.
Politico/gnews.cz - GH