To Measure Form Conversions, Count Visits To The Thank-You Page, Not Clicks On ‘Submit’

Here’s a little tip for a/b testing your lead generation forms on landing pages, contact pages, checkout processes, etc.: count unique visits to the thank-you page as your goal, rather than clicks on the form submission button (regardless of whether it’s labelled ‘submit’ or something better).
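
To make that concrete, here’s a minimal sketch in TypeScript of what a thank-you-page goal can look like. The trackConversion() helper, the goal name and the /thank-you path are assumptions, standing in for whatever goal-recording call and URL your own a/b testing tool and site actually use.

```typescript
// Hypothetical stand-in for your a/b testing tool's goal-recording call.
function trackConversion(goal: string): void {
  console.log(`Conversion recorded for goal: ${goal}`);
}

// Record the conversion only when the visitor actually lands on the thank-you
// page, i.e. only after a successful form submission.
if (window.location.pathname === '/thank-you') {
  trackConversion('lead-form-success');
}
```

Most testing tools deduplicate repeated goal hits per visitor, which is what turns this into a count of unique thank-you page visits; check how your tool handles that before relying on it.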

Why not count clicks?

There are two reasons:

  1. First, you might fail to count some real conversions.

    How can this goal miscounting happen, technically speaking?

    Some people zip through forms with the keyboard: they tab to the submit button and press Enter instead of clicking. That submission may never register as a click in your a/b testing tool (I know, I’ve just been doing quality assurance on a contact page’s a/b test). So you’ve gotten a conversion, but it hasn’t been measured.

  2. Second, if you only count clicks, you might count false conversions as real ones: form submissions with bad data, no data, or failed attempts at submission.

    How can this conversion miscalculation happen, technically speaking?

    Some error-checking methods only validate the submitted data after the form is submitted, while the thank-you page is loading. So you get the click and measure a conversion, but if the data was bad or missing, the visitor is simply sent back to the form, with no email sent to you and no lead entered in your database.
    Similarly, if you’re a bit more advanced and do in-line error checking and user notification, visitors may not notice the error message yet still click submit. Regardless of which stage the error is caught at, before they click submit or after, you’ve got no conversion to speak of, but you’ve counted one. Both of these failure modes are sketched in code after this list.
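
Here’s a sketch, again in TypeScript with the same hypothetical trackConversion() helper (the form and button selectors are likewise made up), of how click-based counting runs into both problems above: a keyboard submission may bypass the button’s click handler entirely, while a goal fired at submission time fires before any server-side validation has had the chance to reject bad or empty data.

```typescript
// Hypothetical stand-in for your a/b testing tool's goal-recording call.
function trackConversion(goal: string): void {
  console.log(`Conversion recorded for goal: ${goal}`);
}

const form = document.querySelector<HTMLFormElement>('#lead-form');
const submitButton = document.querySelector<HTMLButtonElement>('#submit-button');

// Undercounting: a visitor who tabs through the form and submits with the
// keyboard may never trigger this click handler (depending on the browser and
// on how the testing tool binds its click tracking), so a real conversion
// goes unmeasured.
submitButton?.addEventListener('click', () => {
  trackConversion('lead-form-click');
});

// Overcounting: the 'submit' event fires before the server has validated
// anything, so bad or missing data is still counted as a conversion even
// though the visitor is bounced back to the form and no lead is recorded.
form?.addEventListener('submit', () => {
  trackConversion('lead-form-submit');
});
```

Counting the thank-you page visit instead, as in the earlier sketch, sidesteps both problems, because the visitor only reaches that page after the server has accepted the submission.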

As an aside, this is why you ALWAYS need to do quality assurance testing on your a/b tests. At the very least, it catches errors in measurement and data collection so that your analysis down the line will make sense.

For more advanced folks: you can use usability testing to avoid testing combinations that will almost certainly lose, e.g. by detecting and fixing little errors in clarity or functionality that have a big impact. I once made the mistake of skipping this step and overlooked a missing phrase. The page didn’t convert at all, while its nearly identical counterpart converted visitors into newsletter leads at a rate of 22%. Besides losing those conversions, my post-a/b-test analysis couldn’t isolate with certainty why the losing page lost, because I had a confounding variable impacting the dependent variable. (In a properly run a/b test, only the independent variable impacts the dependent variable.)
