Upload and Evaluate Calls with Adhoc Queues 

Upload call recordings and assign them to Voxjar's AI evaluator, or to your QA team, whenever you need to with Adhoc Queues.

The queue builder lets you define rules, schedules, and data sources to initiate call evaluations.

To create a new Adhoc Queue, go to Queues, click "Create a Queue", and select "Adhoc Queue".

Adhoc Queues run only once, on creation. If you want call recordings uploaded and evaluated automatically on a schedule, create an Auto Queue instead.

Launch the adhoc queue builder in voxjar for call recording assignment

Data Sources

select call recording source for adhoc queue in voxjar

Adhoc queues can be created from many data sources.

  • Existing call uploads in Voxjar
  • Integrations with your telephony platform or cloud storage
  • A list of download URLs
  • Manual uploads from your browser

Existing Interactions

You can re-evaluate interactions that have already been uploaded to Voxjar. When you choose this option, you can select which interactions to evaluate with general filters or by selecting their interaction IDs.


Integrations

Your queue can pull call recordings from one of your integrations. When you choose this option, you can select which interactions to evaluate with general filters or by importing a list of the IDs of the interactions to evaluate.

Download URLs

You can also provide a list of public URLs that Voxjar can use to download your call recordings.

You'll need to provide them in a CSV file that contains the column name "urls". You can download a CSV template from the upload page.
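For reference, a minimal script like the one below produces a CSV in the expected shape: a single "urls" column with one public download URL per row. The file name and the example URLs are placeholders, not anything Voxjar requires.

```python
import csv

# Hypothetical example rows: one public download URL per call recording.
rows = [
    {"urls": "https://example.com/recordings/call-001.mp3"},
    {"urls": "https://example.com/recordings/call-002.mp3"},
]

# Write a CSV whose only column is named "urls", as the upload page expects.
with open("call_recording_urls.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["urls"])
    writer.writeheader()
    writer.writerows(rows)
```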

option to upload a list of download urls for call recordings

Upload Files

You can also just upload the files directly from your browser. 

This is best when you have small batches of calls and enough time to wait for them to process. If you try to upload too many files, your browser might crash.

If you close the page before uploading is complete, your uploads will not finish.

option to upload a list of call recordings files

Data Collection Filters

You can use Queue filters to identify which calls should be assigned for evaluation when your data source is either Existing Interactions or an Integration.

Voxjar collects metadata from your integrations on the initial connection and on every run to keep your Queue filters up to date.

Standard Voxjar Queue filters that will always be selectable:

  1. Call duration
  2. Call direction
  3. Agents

The remaining filters are synced from your integration. If you don't see any additional filters, make sure your integration has finished syncing (you can check sync status on the integration in Settings).

If your integration is a cloud storage provider or FTP, then you'll need to make sure that your metadata fields are mapped correctly. Right now only text-based custom metadata from cloud storage or FTP will become a Queue filter. This will expand in the future.

auto queue filters for selecting call recordings

Data Collection Rules

After you've set your filters or imported your data, you create rules for the Queue. These rules help you set guardrails to prevent uploading too many calls and to ensure Voxjar is sampling a good distribution of data.

Some data sources will not support every rule.

There are four possible rules for Adhoc Queues:

  1. Choose how far back to pull calls
  2. Set call distribution
  3. Set your sample size
  4. Set a deadline for evaluation

Data Collection Window

Set how far back you want Voxjar to look for call recordings and metadata.

You usually do not want to look back farther than your schedule, or you risk duplicate call evaluations.

The farther back you look, the longer it will take Voxjar to collect data. At the moment there is a one-hour time limit for data collection. If your queue fails, it is often because the window is too large and there is too much data to parse through in that time.

data collection window for downloading call recordings for quality assurance

Data Distribution

By default Voxjar will randomly sample your dataset to distribute the call evaluations fairly. 

You can also choose to sample a set number of calls per agent.

For this to work, your integration must identify agents in the metadata. You can confirm this when setting your queue filters by clicking "select agents". (Be sure to reset your agent filter after checking.)
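The two distribution modes above can be sketched roughly as follows. This is an illustration, not Voxjar's actual implementation; the `calls` structure and its "agent" key are assumptions for the example.

```python
import random
from collections import defaultdict

def sample_calls(calls, sample_size, per_agent=None, seed=None):
    """Sketch of the two distribution modes: random sampling across
    the whole dataset, or a fixed number of calls per agent.

    calls: list of dicts, each assumed to carry an "agent" key.
    """
    rng = random.Random(seed)
    if per_agent is None:
        # Default mode: randomly sample the dataset as a whole.
        return rng.sample(calls, min(sample_size, len(calls)))
    # Per-agent mode: group by agent, then take up to per_agent from each.
    by_agent = defaultdict(list)
    for call in calls:
        by_agent[call["agent"]].append(call)
    picked = []
    for agent_calls in by_agent.values():
        picked.extend(rng.sample(agent_calls, min(per_agent, len(agent_calls))))
    # The overall sample size still caps the result.
    return picked[:sample_size]
```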

select how the call recording data set should be distributed

Sample Size

set the sample size of call recordings to be downloaded

Select a max sample size so Voxjar does not pull more calls than you want.

Your max sample size will always be respected. It sets the ultimate ceiling on data collection.

Sample size cannot currently exceed 1,000 calls.

This helps ensure that the queue completes within the one-hour time limit, protects against accidental AI credit overuse, and keeps your human QA team from being flooded with evaluation requests.

Evaluation Window

An evaluation deadline sets a target completion time window for each evaluation assigned by the queue.

This is mostly useful for manual QA. The AI evaluator is automatically queued for the soonest possible completion time and is usually done within a minute of being assigned an evaluation.

After your queue is created, you'll see how many evaluations are past due on the queue list.

Set the deadline for QA scores from human evaluators

Assign Evaluators

After setting your rules you'll be asked to assign calls to either Voxjar's AI evaluator or to your human QA team members with a scorecard of your choice.

When you're finished, the Adhoc Queue will be run immediately and you will automatically have evaluations assigned to your team or automatically scored by Voxjar's AI evaluator.

AI Evaluator

Voxjar's AI evaluator can automatically evaluate the calls from your Adhoc Queue with scalable, high quality responses.

Our AI is built around the leading large language models in the market.

That means that you do not need to write custom queries or keyword searches for the AI evaluator.

It is automatically compatible with all scorecards created in Voxjar. So you can use the same scorecards with a human team and the AI.

The AI evaluator operates on a credit system. Every 5 minutes of audio reviewed uses 1 credit. 
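The credit math is simple: one credit per five minutes (300 seconds) of audio. A sketch of the calculation; note that the source doesn't say how partial five-minute blocks are billed, so rounding up here is an assumption.

```python
import math

def credits_for_audio(duration_seconds: float) -> int:
    # 1 credit per 5 minutes (300 s) of audio reviewed.
    # Rounding partial blocks up is an assumption, not confirmed behavior.
    return math.ceil(duration_seconds / 300)
```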

ai evaluator selection

Human Evaluators

You can also assign evaluations to a human QA team.

Invite evaluators to your team from the team page.

Once invited, you can select them to review calls from your queues.

Evaluations are assigned round-robin.

If you select "All Evaluators", assignments will only go to users with the role of "Evaluator", even though the selection list shows all "Admin" and "Evaluator" users.

Evaluations will be added to the user's Work Queue for them to evaluate in a designated workflow.
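The assignment behavior described above can be sketched like this. The team structure and role names mirror the article; everything else (function name, data shapes) is illustrative only.

```python
from itertools import cycle

def assign_round_robin(evaluations, team, selected="All Evaluators"):
    """Sketch of round-robin assignment.

    team: list of {"name": ..., "role": "Admin" | "Evaluator"} dicts.
    With "All Evaluators" selected, only users whose role is
    "Evaluator" receive assignments.
    """
    if selected == "All Evaluators":
        pool = [u for u in team if u["role"] == "Evaluator"]
    else:
        pool = [u for u in team if u["name"] in selected]
    # Cycle through the pool so assignments are spread evenly.
    return {ev: user["name"] for ev, user in zip(evaluations, cycle(pool))}
```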

human QA evaluation

Scorecard Assignment

The AI evaluator requires a scorecard assignment to operate.

We suggest assigning a specific scorecard to human evaluators, too.

Any scorecard created in Voxjar is compatible with the AI evaluator and your human QA team's Work Queue workflow.


scorecard selector