Sora 2 System Card
Sora 2 is our new state-of-the-art video and audio generation model. Building on the foundation of Sora, this new model introduces capabilities that have been difficult for prior video models to achieve, such as more accurate physics, sharper realism, synchronized audio, enhanced steerability, and an expanded stylistic range. The model follows user direction with high fidelity, enabling the creation of videos that are both imaginative and grounded in real-world dynamics. Sora 2 expands the toolkit for storytelling and creative expression, while also serving as a step toward models that can more accurately simulate the complexity of the physical world. Sora 2 will be available via sora.com and in a new standalone iOS Sora app, and in the future it will be available via our API.
Sora 2’s advanced capabilities require consideration of new potential risks, including nonconsensual use of likeness or misleading generations. To address these, we worked with internal red teamers to identify new challenges and inform corresponding mitigations. We’re taking an iterative approach to safety, focusing on areas where context is especially important or where risks are still emerging and are not fully understood.
Our iterative deployment includes rolling out initial access to Sora 2 via limited invitations, restricting image uploads that feature a photorealistic person as well as all video uploads, and placing stringent safeguards and moderation thresholds on content involving minors. We’ll continue to learn from how people use Sora 2 and refine the system to maintain safety while maximizing creative potential. This system card describes the model’s capabilities, potential risks, and the safety measures OpenAI has developed for a safe deployment of Sora 2.
via OpenAI News