New Delhi: In a recent, unpublicized revision to its usage policy, OpenAI has changed its stance on military applications of its technologies. The previous policy explicitly prohibited the use of OpenAI products for military and warfare purposes. That restrictive language has now been removed, and OpenAI has not denied that the change opens the door to military uses.
The Intercept was the first to spot the change, which appears to have taken effect on January 10. It's not uncommon for tech companies to quietly adjust their policy wording as their products evolve, and OpenAI is no exception. The recent public rollout of user-customizable GPTs, coupled with a vaguely articulated monetization policy, likely prompted some of the revisions.
It's important to note, however, that the removal of the no-military policy cannot be attributed to any specific new product. OpenAI's statement characterizing the change as merely making the policy "clearer" or "more readable" is contested: the removal represents a substantive and consequential shift in policy, not a mere restatement of the existing guidelines.