Over the past year, Adobe has steadily expanded its generative AI capabilities. Its latest effort is an experimental music-making tool called Project Music GenAI Control, described as an early-stage generative AI platform for creating and editing music, which lets creators produce audio from text prompts. The initiative parallels AI models that turn text into other media, such as images or video.
According to Adobe, Project Music GenAI Control uses technology similar to that behind Firefly, its existing family of generative AI models. The tool can generate music from text prompts such as “powerful rock,” “happy dance,” and “sad jazz.”
Beyond generating music from scratch, users can adjust elements such as tempo and structure, loop specific sections of a composition, or extend its duration.
Nicholas Bryan, a Senior Research Scientist at Adobe Research, described Project Music GenAI Control as a collaborative tool in which generative AI acts as a co-creator, helping people craft music suited to their project’s mood, tone, and length. The tool is still under development and is not yet available to the public.