How to Test Your Quality Assurance Scorecards with AI

It's important to confirm that your scorecard works well with Voxjar's AI evaluator before launching an auto QA program.

Voxjar has a built-in feedback loop in our scorecard builder, so you can be sure your scorecard works well before launching.

To test your scorecard with AI, all you need is a sample call recording.

You do not need to save your scorecard changes before testing! In fact, you probably shouldn't. Iterate, test, and confirm success before saving.

For details on how to build a scorecard, check out our scorecard building guide.

Transcripts and AI evaluations use AI credits. New accounts start with 24 free credits.

Check out AI credit pricing

Upload a Sample Call

Grab a call recording to upload for testing.

The call recording should be the same type of call that you want to evaluate with this scorecard.

E.g., a customer experience call for your CX scorecard, or a sales call for a sales scorecard.

To upload a call, go to the menu at the top right of your scorecard view and click "AI Test".

In the edit panel on the right, upload your call recording.

Voxjar will automatically transcribe this call for you and will send you an email and an in-app notification when the transcript is complete. Transcription is usually done within a few minutes.

Once transcribed, you'll be able to run a test evaluation.

[Image: how to upload a sample file for AI testing]

Run an AI Test

[Image: test a call monitoring scorecard with AI for auto QA]

After your sample call is transcribed, you can reuse it for as many tests as you need.

Open the "AI Test" menu again, click "Run Test" on your sample call, and an AI evaluation will be generated.

This usually only takes a few moments.

These test evaluations aren't saved. They are meant to help you make quick changes and iterate on your scorecard.

Note: if you leave the page while an AI evaluation is running, the evaluation will fail.

Test Individual Questions

[Image: test LLM prompts before deploying]

You can also run tests on individual questions to make sure each question is handled the way you expect.

Click on a question to open the editor, click "Test This Question" at the top right of the editor, choose your sample call, and wait for the results.

This workflow lets you iterate quickly and ensure that the AI is responding to your questions as expected.

You have full control to guide the AI evaluator. This guide will help you understand how your questions prompt the AI evaluator.

Update/Iterate Your Scorecard

Now you're set up to update your scorecard and test it with the AI evaluator before launching it.

You do not need to save your scorecard changes before running a new test, and you'll iterate much faster by not saving until you're done or need to step away.

Make changes, test, repeat.

Then when you're satisfied, click save.

Your scorecard is ready for Auto QA!