Guides
Before using these guides, make sure you have access to the Epidemic Sound Partner API. See Getting Started Prerequisites for details on obtaining access.
Music browsing
This guide will walk you through the steps to build a music browsing UI: fetching available playlists, moods, and genres with their tracks, playing a track, downloading a track, and reporting usage of a track.
List genres with their tracks
Use the genres endpoint to list the available genres.
We recommend specifying the type parameter with value featured to show only the genres featured by our curation team, and ordering by relevance so that the most popular genres appear first.
Parent genres have cover art that you can show in your interface.
Use the genre details endpoint to show the tracks for a given genre.
List playlists with their tracks
Use the collections endpoint to display playlists curated by our team of in-house experts.
Collections have attributes like title and cover art, and you can choose to return collections with or without tracks.
The response will include a maximum of 20 tracks per collection. If a collection contains more than 20 tracks, use the endpoint '/collections/{collectionId}' to get all tracks.
For better performance, especially with large collections, use excludeFields=tracks to list only collection metadata. You can then fetch tracks for specific collections separately using the collection details endpoint.
List moods with their tracks
Use the moods endpoint to allow users to browse the music catalog based on moods like happy, epic or relaxing. Moods have cover art that you can show in your interface.
We recommend specifying the type parameter with value featured to display only moods that are featured on epidemicsound.com.
Use the mood details endpoint to show the tracks for a given mood.
Search for music
The Partner API provides multiple powerful search capabilities to help users find the perfect track:
Text-based search
Use the search endpoint to search for any given query within our music library. The search endpoint is backed by a language model, so users can enter semantic search terms such as "music for a calm beach scene" or "high energy track for a workout".
The search endpoint indexes track attributes including moods, genres, artist names, song titles, and BPM. You can further refine results by:
- Filtering by specific genres and moods
- Sorting results using sort (Relevance, Date, Title) and order (asc, desc)
- Controlling pagination with offset and limit parameters (default 50, max 60)
Pagination:
Search results are paginated. The response includes a pagination object and a links object to help you navigate through results:
{
  "tracks": [...],
  "pagination": {
    "page": 1,
    "limit": 50
  },
  "links": {
    "next": "/v0/tracks/search?term=rock&limit=50&offset=50",
    "prev": null
  }
}
- pagination.page: Current page number (1-based)
- pagination.limit: Number of results per page
- links.next: URL for the next page (null if no more results)
- links.prev: URL for the previous page (null if on first page)
To fetch the next page, use the links.next URL directly, or add an offset parameter to your request (increment by limit for each page).
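The pagination flow above can be sketched as follows. This is a minimal illustration, not part of the API: `fetch` stands in for your HTTP client and must return parsed JSON shaped like the example response.

```python
def next_page_offset(pagination):
    """Offset for the next request, assuming a 1-based `page` field
    as shown in the example pagination object."""
    return pagination["page"] * pagination["limit"]

def iterate_pages(fetch, first_url):
    """Follow `links.next` until it is null, yielding tracks as we go.

    `fetch(url)` is a stand-in for your HTTP client and must return
    the parsed JSON response for that URL.
    """
    url = first_url
    while url is not None:
        response = fetch(url)
        yield from response["tracks"]
        url = response["links"]["next"]
```

Prefer following `links.next` over computing offsets yourself; it keeps your client correct if the limit changes between requests.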
Find music by Spotify reference
Let users search by typing a song they already know. The API finds Epidemic Sound tracks that sound similar to any Spotify track.
Users often think in references: "I want something that sounds like Never Gonna Give You Up". This feature lets them search that way, no music vocabulary needed.
How it works:
- User types a song name (e.g., "never gonna give")
- Search suggestions endpoint returns Spotify matches (e.g., "Never Gonna Give You Up - Rick Astley")
- User selects the Spotify suggestion
- Pass the Spotify URL to the search endpoint
- User gets Epidemic Sound tracks with a similar vibe
Example autosuggest response:
{
  "suggestions": [
    {
      "value": "https://open.spotify.com/track/4cOdK2wGLETKBW3PvgPWqT",
      "title": "Never Gonna Give You Up - Rick Astley",
      "type": "external/spotify"
    }
  ]
}
When the user selects this suggestion, use the value (the Spotify URL) as the term parameter in your search request.
Implementation tips:
- Show a Spotify icon next to Spotify suggestions so users know what they're selecting
- Debounce autosuggest calls (e.g., 300ms delay) to avoid excessive API requests
- Users can also paste Spotify URLs directly into your search field
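As a sketch of wiring a selected suggestion into a search request (helper names and the exact parameter set are ours; only term, limit, and offset come from the guide):

```python
def is_spotify_suggestion(suggestion):
    """Detect Spotify reference suggestions so the UI can badge them
    with a Spotify icon."""
    return suggestion.get("type") == "external/spotify"

def search_params_from_suggestion(suggestion, limit=50):
    """Turn a selected autosuggest entry into search-endpoint parameters.

    Both TEXT and external/spotify suggestions pass their `value` as the
    `term` parameter; only the display handling differs.
    """
    return {"term": suggestion["value"], "limit": limit, "offset": 0}
```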
Search autosuggest
Use the search suggestions endpoint to provide autocomplete suggestions as users type. The endpoint returns suggestions with a value (the search term to pass to the search endpoint) and a title (display text for your UI).
Suggestions can be of type TEXT (regular search terms) or external/spotify (Spotify track references).
Find similar tracks
Use the similar tracks endpoint to retrieve tracks that share similar characteristics with a reference track. This is useful when users want to:
- Replace a track that doesn't fit perfectly
- Discover multiple alternatives to a track they like
- Explore tracks with similar genre, mood, tempo, and overall feel
Simply provide a trackId, and the endpoint returns a list of similar tracks based on musical characteristics.
Audio-based similarity search
Use the similar sections endpoint to find track segments that match a provided audio file. This endpoint:
- Takes an uploaded audio file reference (audioUploadId)
- Accepts start and end timestamps (in milliseconds) to define the section you want to match
- Returns track sections from the Epidemic Sound library that sound similar to your audio reference
This is particularly useful for template remixing or when users have reference audio from other sources.
Play a track
For real-time playback while users browse the music library, we provide HLS (HTTP Live Streaming) endpoints.
Available streaming endpoints:
We offer two HLS endpoints depending on your implementation needs:
- Streaming endpoint - Returns the HLS manifest URL directly
- HLS with cookies endpoint - Returns the HLS manifest URL along with authentication cookies. Since HLS consists of multiple files, access is controlled using cookies that must be set for the CDN domain.
Why HLS for previewing:
- Smaller file transfers: Audio is encoded using the AAC standard, which has a smaller footprint than MP3 for similar quality
- Adaptive quality: The HLS client library automatically switches between two variant quality streams based on network speed
- Seeking support: Users can skip forward/backward during playback
Implementation:
The format consists of audio files split into smaller chunks with manifests (.m3u8 files) that reference these audio files. HLS client libraries handle the complexity automatically:
- iOS/Safari: Native HLS support via AVFoundation (no additional library needed)
- Web browsers: Use hls.js library
- Android: ExoPlayer with HLS support
Here is an example app for iOS that plays HLS streams.
Access to tracks for preview depends on your partnership agreement. See the FAQ section on content access for details.
Preview vs. Download:
Use HLS streaming for previewing tracks during browsing. When users want to add a track to their project, use the download endpoint (covered in Download a track below), which provides MP3 files.
Download a track
When a user wants to add a track to their project for editing or export, use the download endpoint to get an MP3 file.
Available qualities:
- Normal (128kbps): Sufficient for most use cases
- High (320kbps): For content requiring higher audio quality
The download links expire after 24 hours (normal quality) or 1 hour (high quality). The expiration time is included in the response.
Access to tracks for download depends on your partnership agreement. See the FAQ section on content access for details.
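Because download links expire, it is worth checking a cached link before reusing it. A minimal sketch, assuming you store the expiration time from the response as a timezone-aware datetime (the field name and storage format are up to you; only the 24-hour/1-hour lifetimes come from the guide):

```python
from datetime import datetime, timedelta, timezone

def expiry_for_quality(issued_at, quality):
    """Compute the link lifetime from the documented expirations:
    24 hours for normal (128kbps), 1 hour for high (320kbps)."""
    lifetime = timedelta(hours=24) if quality == "normal" else timedelta(hours=1)
    return issued_at + lifetime

def is_link_expired(expires_at, now=None):
    """Return True when a cached download URL is past its expiry."""
    now = now or datetime.now(timezone.utc)
    return now >= expires_at
```

Prefer the expiration time included in the download response over recomputing it; the lifetimes above are only a fallback.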
Report usage of a track
Report usage of a track to the usage endpoint when a user exports their content to social media or downloads the content file to their device.
You can specify which platform (YouTube, Twitch, Instagram, Facebook, TikTok, Twitter or “other”) they exported the file to. Use the platform “local” if the user downloads the content to their device.
This data is used for attribution and analytics purposes as well as to improve personalization.
If you prefer to report events in bulk, you can also use the bulk reporting endpoint.
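A sketch of assembling a usage report from the platforms listed above. The body field names here are illustrative; check the usage endpoint reference for the exact schema.

```python
ALLOWED_PLATFORMS = {
    "youtube", "twitch", "instagram", "facebook",
    "tiktok", "twitter", "other", "local",
}

def usage_report(track_id, platform):
    """Build a usage-report body. "local" means the user downloaded the
    content to their device; anything unrecognized falls back to "other"."""
    platform = platform.lower()
    if platform not in ALLOWED_PLATFORMS:
        platform = "other"
    return {"trackId": track_id, "platform": platform}
```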
Understanding track response fields
Track responses include several important fields you should be aware of:
- isPreviewOnly: When true, the track can only be streamed for preview but not downloaded. This depends on the user's subscription status and your partnership agreement.
- isExplicit: When true, the track contains explicit content. Consider filtering or flagging these tracks in your UI if appropriate for your audience.
- hasVocals: When true, the track contains vocals. Useful for allowing users to filter for instrumental-only tracks.
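These fields combine naturally into client-side filters. A small sketch (the filter names are ours; the track fields are from the response):

```python
def filter_tracks(tracks, instrumental_only=False,
                  hide_explicit=False, downloadable_only=False):
    """Filter a track list using the isPreviewOnly, isExplicit, and
    hasVocals fields from track responses."""
    result = []
    for track in tracks:
        if instrumental_only and track.get("hasVocals"):
            continue
        if hide_explicit and track.get("isExplicit"):
            continue
        if downloadable_only and track.get("isPreviewOnly"):
            continue
        result.append(track)
    return result
```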
Sound effects browsing
This guide will walk you through the steps to build a sound effect browsing UI, fetch available categories and play/download a sound effect.
Content Access: All partners have access to sound effects, and they are all available to download.
About sound effect metadata: Sound effect responses include basic information (id, title, length, added, images). Client-side filtering of results can only be done using the title or length fields.
List sound effect categories
The sound effects categories endpoint returns category metadata for browsing the sound effects catalog.
The categories endpoint currently returns limit + 1 items instead of the requested limit, which can cause duplicate items across pages. To work around this, slice the response array to your desired length.
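The slicing workaround can be folded into your pagination loop. A sketch, where `fetch_page(offset, limit)` stands in for your HTTP call and returns the (over-long) list of category items:

```python
def category_pages(fetch_page, limit=50):
    """Yield categories page by page, trimming the extra item the
    endpoint currently returns so page boundaries don't overlap."""
    offset = 0
    while True:
        items = fetch_page(offset, limit)[:limit]  # drop the off-by-one extra
        if not items:
            return
        yield from items
        if len(items) < limit:
            return  # short page means we reached the end
        offset += limit
```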
Understanding the category hierarchy:
Sound effects are organized in a parent-child hierarchy. Only leaf categories (categories without children) contain actual sound effects. Parent categories serve as grouping containers.
- Some categories have cover art that you can display in your interface
- Use the type parameter with value featured to only show categories curated by our team (default is all)
- The categories endpoint returns only metadata - to get the actual sound effects, use the [sound effect details endpoint](https://developers.epidemicsound.com/docs/Endpoints/get-sound-effect-category-tracks)
Note: Browsing categories and searching are mutually exclusive. When a user enters a search term, disable category browsing, and vice versa.
List the sound effects within a category
Use the sound effect details endpoint to display all sound effects within a specific category.
Remember that only leaf categories (those without children) will contain sound effects.
Search for Sound effects
Use the sound effects search endpoint to allow users to search within the sound effects library.
Sorting and pagination:
- Use sort to order results: best-match, newest, popular, length, or title
- Use order to specify direction: asc or desc
- Pagination: default limit is 50, maximum is 60 per request
Pagination works the same way as in music search: the response includes pagination and links objects. Use links.next to fetch the next page, or increment offset by limit in your next request.
Best practices:
- When using sort best-match, use order desc for optimal results
- Disable category browsing when search is active
Play or download a sound effect
Use the download sound effect endpoint to play or download a specific sound effect.
Important considerations:
Since most sound effects have small file sizes, we do not offer a separate streaming endpoint. However, some ambient sounds can be longer than 3 minutes, which may take time to download.
Recommended implementation:
- Show a loading spinner or progress indicator while the file downloads
- This provides better user experience, especially for longer ambient sound files
Advanced music features
These endpoints provide additional capabilities for building more sophisticated music experiences.
Popular segments (highlights)
Use the highlights endpoint to get the most popular section of a track. Powered by machine learning trained on billions of YouTube streams, this endpoint recommends the best time sections for your use case.
Use cases:
- Start playback from the most engaging part of the track
- Recommend the right section for short-form content (Reels, TikTok, Shorts)
The endpoint accepts up to 5 different durations per request (5-60 seconds each) and returns start/stop timestamps in milliseconds.
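Validating a highlights request against the documented limits before sending it is cheap. A sketch of that check (durations in seconds; the error messages are ours):

```python
def validate_highlight_durations(durations):
    """Check a highlights request against the documented limits:
    at most 5 durations per request, each between 5 and 60 seconds."""
    if not 1 <= len(durations) <= 5:
        raise ValueError("provide between 1 and 5 durations")
    for d in durations:
        if not 5 <= d <= 60:
            raise ValueError(f"duration {d}s is outside the 5-60s range")
    return durations
```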
Beat timestamps
Use the beats endpoint to get precise beat timestamps for a track. Unlike BPM which is a single number, beats data captures dynamic tempo changes throughout the track.
Use cases:
- Automatically cut video clips in sync with the beat
- Add snap markers aligned with beat timestamps in your editing UI
The response includes time (timestamp in milliseconds) and value (beat position in a bar, where 1 is the downbeat).
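The beat data supports snapping a cut point to the nearest beat, or to the nearest downbeat only. A sketch, assuming the response entries are ordered by time:

```python
import bisect

def snap_to_beat(beats, t_ms, downbeats_only=False):
    """Snap a timestamp (ms) to the nearest beat.

    `beats` is a list of {"time": ms, "value": beat-in-bar} entries as
    described above, where value 1 marks the downbeat.
    """
    candidates = [b["time"] for b in beats
                  if not downbeats_only or b["value"] == 1]
    i = bisect.bisect_left(candidates, t_ms)
    # The nearest beat is either just before or just after t_ms.
    choices = candidates[max(0, i - 1):i + 1]
    return min(choices, key=lambda t: abs(t - t_ms))
```

The same lookup gives you snap markers for an editing timeline: precompute the candidate list once per track and snap on drag.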
Filter tracks by multiple criteria
Use the tracks endpoint to list tracks filtered by mood, genre, and BPM range simultaneously. This provides more granular control than browsing individual genres or moods.
Batch track metadata
Use the track metadata endpoint to fetch metadata for multiple tracks at once by providing a list of track IDs. This is more efficient than making individual requests when you need details for several tracks.
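If you have more IDs than fit in one request, split them into batches. A sketch; the batch size limit of 50 is an assumption here, so confirm it against the endpoint reference:

```python
def batched_ids(track_ids, batch_size=50):
    """Split track IDs into batches for the metadata endpoint.
    The 50-per-request limit is an assumed example, not documented."""
    for i in range(0, len(track_ids), batch_size):
        yield track_ids[i:i + batch_size]
```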
Edit versions (Beta)
This feature is currently in beta. The API may change based on feedback.
The edit versions feature allows you to generate shorter versions of a track to fit a specific timeline duration. This is useful when you need a track to match a specific video length.
Important: This is different from the "Adapt" feature on the Epidemic Sound website. Edit versions only adjust the track's duration; they do not change the mood, energy, or musical characteristics of the track.
How it works
1. Start a job: Use the start track versions endpoint to request an edited version of a track with your desired duration (between 1 second and 5 minutes).
2. Check job status: The job is processed asynchronously. Use the track versions status endpoint to check when it's complete.
3. Get the result: When complete, the response includes URLs for both preview and high-quality versions of the edited track. URLs expire after 24 hours.
Note: Longer durations result in increased processing time, as latency scales with the desired duration.
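The asynchronous flow above maps to a simple polling loop. A sketch: `get_status` stands in for your call to the status endpoint, and the "completed"/"failed" status values are assumptions to illustrate the shape, not the documented schema.

```python
import time

def wait_for_edit_version(get_status, poll_interval=2.0, timeout=300.0):
    """Poll the track-versions status endpoint until the job finishes.

    `get_status()` is a stand-in for your HTTP call and should return
    the parsed JSON status response.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = get_status()
        if job.get("status") == "completed":
            return job
        if job.get("status") == "failed":
            raise RuntimeError("edit version job failed")
        time.sleep(poll_interval)
    raise TimeoutError("edit version job did not finish in time")
```

Since latency scales with the requested duration, consider scaling the timeout with it too.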
Get recommendations based on your video
Spend less time searching and more creating by using the video itself to find relevant music.
The Soundmatch feature analyzes video frames and provides a list of recommended tracks suited to the video's visual scenes. This guide will walk you through how to implement Soundmatch in your app, how to let users delete their content, and the partner requirements for using this feature.
But let’s first start with an example UI. In the example below, users can click to “Get music recommendations based on the current frame”. When the user clicks the button, you send an image from the user’s video to Epidemic Sound. Based on the content of the frame, we’ll give a list of recommended tracks in return.

Soundmatch: get music recommendations
Implementation steps
- Upload an image and receive the corresponding imageID
- Get recommendations based on imageID
Upload image
The first step is to post an image from the user’s video to the image upload endpoint. That will give you an ImageID in return that you will use in the next step to get the actual track recommendations.
We currently only support jpeg files, and the size limit for the file is 2MB.
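A pre-flight check against these constraints avoids a round trip for files that will be rejected. A sketch (the function name is ours; only the JPEG-only and 2MB rules come from the guide):

```python
def validate_soundmatch_image(data, filename):
    """Check an image against the upload constraints before posting it:
    JPEG only, at most 2 MB."""
    if not filename.lower().endswith((".jpg", ".jpeg")):
        raise ValueError("only JPEG images are supported")
    if len(data) > 2 * 1024 * 1024:
        raise ValueError("image exceeds the 2 MB limit")
    if data[:2] != b"\xff\xd8":  # JPEG files start with the SOI marker
        raise ValueError("file does not look like a JPEG")
    return True
```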
Get recommendations based on imageID
Use the ImageID from the previous step to get track recommendations by using the matching image endpoint. This will give you a list of tracks to show to your user.
Soundmatch: Delete frames
If you wish to implement the Soundmatch feature, we have a strict requirement that you offer the capability for users to delete their uploaded content. We recommend that you place this functionality in the settings of your app.
A. Delete all frames that a user uploaded
Users who can upload their content to get recommendations should also be able to easily delete all frames that they have previously uploaded.
Use the delete user images endpoint to delete all images for the specified userID.
B. Delete specific frames
We also offer the functionality to delete a specific image based on the imageID. Use the delete images endpoint using the imageID as a reference.
Soundmatch: Partner requirements
We’re very excited to partner with you to offer a better soundtracking experience for your users. However, since some of the uploaded images can potentially be considered personal data we want to make sure that we respect the users’ privacy.
If you wish to implement Soundmatch, we have the following requirements towards you as a partner:
- Update your privacy policy to reflect that you are sharing the user’s personal data with us.
- Allow users to delete their uploaded content. Please see the guide to Delete all frames that a user uploaded above.
Remix your templates
Breathe fresh life into your templates by letting users replace the default music with a song that matches their creative flair! Whether or not your templates use music from Epidemic Sound, our API uses EAR (Epidemic Audio Reference) to instantly find music that sounds similar to any part of another track.
In the example UI below, a user has started the editing process with one of your templates. When they click to Swap out the music they get a list of other tracks to try. With just a click of a button, users get access to a new world of content!
The guides below will walk you through the process to send a reference and get back track recommendations.
Templates with Epidemic Sound music
Each track in the Epidemic Sound library has a trackID. If your template has a default sound from Epidemic Sound, you can use the similar sections endpoint to get recommendations of similar track segments. Specify in the request which track you want recommendations for, and the timestamps of the section that is used in your template.
In return, you will get a list of tracks and timestamps for the sections that sound similar to the default sound.
Templates with music from another music provider
This guide will walk you through the steps to upload any music file and get music recommendations from Epidemic Sound’s library in return. Please make sure that the agreement with your music provider allows you to use their music as a reference.
Implementation steps
- Upload an audio file
- Get recommendations based on audioId
- Check if the file is already uploaded
1. Upload an audio file
The first step is to post an audio file to the audio upload endpoint. That will give you an AudioID in return that you will use in the next step to get the actual track recommendations.
We currently support the formats: mp3, mpeg, ogg and vorbis. Maximum file size is 3.5MB. You can only upload music with the partner token, since we currently do not allow end users to use this feature.
2. Get recommendations based on audioId
Use the AudioID from the previous step to get track recommendations by using the matching audio endpoint. This will give you a list of tracks to show to your user.
3. Check if the file is already uploaded
To avoid duplicate uploads we provide an additional endpoint to get the audioID of an already uploaded asset. Use the checksum endpoint to get the AudioID based on the checksum of the uploaded file.
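Before uploading, compute the file's checksum locally and look it up via the checksum endpoint. A sketch; the hash algorithm is not specified in this guide, so SHA-256 below is an assumption to confirm against the endpoint reference:

```python
import hashlib

def audio_checksum(data):
    """Checksum for the duplicate-upload lookup.

    SHA-256 is an assumed example; confirm the expected algorithm in
    the checksum endpoint documentation.
    """
    return hashlib.sha256(data).hexdigest()
```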
Waveforms
All tracks in the library come with a waveform. Waveforms can be used for the following use cases:
- From our user research we know that users look at the waveform to understand track characteristics such as builds and drops
- Use it for selection of the particular parts of the track
- Use it to find similar repetitions in the entire track
- Loop selected part of the track using the waveform

Waveform URL data
The track response contains a link to generated waveform data in JSON format. Each file consists of a single JSON object containing waveform data points and some meta-information used for its generation. Waveform files are generated from the audio files at 8-bit resolution, with approximately 1600 minimum and maximum value pairs in the resulting waveform data.
You can find more information about waveform data format here.
The shape of the waveform object:
{
  "sample_rate": 44100,
  "samples_per_pixel": 7548,
  "bits": 8,
  "length": 1601,
  "data": [-7, 6],
  "channels": 10,
  "example_rate": 10,
  "version": 1
}
}