Best Practices: Maintaining Data Quality
This document reviews how to maintain a high level of data quality in your ATS by utilizing Ashby reports, dashboards, and alerts. The options for long-term data quality are presented, along with a summary of the most common challenges and a data quality checklist.

Introduction

Ashby has a standard data quality example dashboard to help get you started. If you'd like to see what this looks like, reach out to your dedicated Success Manager or Ashby Support.

Historically, maintaining data quality within your ATS has been a major challenge due to the lack of visibility into the underlying data. Ashby's native reporting capabilities make insight into the quality of your ATS data readily available. The goal of this article is to first review some of the most common challenges in ATS data and then present the recommended reporting solutions.

Long-Term Data Quality Solutions

Long-term health and data quality is fundamentally about data visibility. Easy review of your data allows you to know when data issues arise and what they are. This, in turn, permits designing straightforward resolution steps and alerts. In the absence of data visibility, quality issues are subject to one-off discoveries at the moment you try to run a report. The outcome is a slow accumulation of ever more issues that are hard to identify and resolve, followed by a decrease in trust in the data overall. This eventually results in a data-less culture, subject to a "garbage in, garbage out" sentiment.

With Ashby you have the following tools to maintain data quality:

- Customized reports: You can build reports that look for specific compliance or SLA violations to surface problematic records immediately.
- Scheduled alerts: In addition to reviewing reports, you can configure automated, scheduled alerts that notify responsible individuals immediately or at a set cadence, helping raise awareness and maintain quality before any issues grow out of hand.
- Dashboards: Custom reports can be combined into a central data quality dashboard that gathers the highest-priority items to review.
- Home page configuration: Each user's home page can be configured with pinned content, adding critical data quality reports and dashboards to their space for easy access and review.

The above options can be used individually or in combination, depending on the scope of your needs and the size of your team. When managing a large team, ownership can be assigned on almost all reports and dashboards, permitting a distribution of responsibility. This is one way Ashby can facilitate creating an ownership culture around data quality.

Common Types of Data Quality Issues

Below is a summary of the most common data quality challenges we've seen.

Stale Candidates

You can define stale candidates according to your team's process, but a general example would be categorizing any candidate who has been in process but has had no activity for 30+ days as stale. The easiest way to select for this is by applying NOT logic to any activity-related field. Your particular selection may vary; you may use other fields of interest, or even filter down to specific stages or jobs. The key breakthrough is getting an easy selection of non-compliant records, after which you can begin grouping or refining the results to be more actionable or to reveal insights about your process.
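In Ashby, this selection is built with report filters in the product; no code is required. If you also want to sanity-check the same logic against an export of your application data, a minimal pandas sketch might look like the following. The file name and the status, stage_group, and last_activity_at columns are hypothetical placeholders for whatever your own export contains.

```python
# Minimal sketch only: assumes a CSV export of applications with
# hypothetical columns "status", "stage_group", and "last_activity_at".
import pandas as pd

STALE_DAYS = 30  # adjust to your team's definition of "stale"

applications = pd.read_csv(
    "applications_export.csv", parse_dates=["last_activity_at"]
)

cutoff = pd.Timestamp.now() - pd.Timedelta(days=STALE_DAYS)
stale = applications[
    (applications["status"] == "Active")           # still in process
    & (applications["last_activity_at"] < cutoff)  # no activity in 30+ days
]

# Group the non-compliant records by any dimension of interest
# (stage group, job, source type, recruiter) to make them actionable.
print(stale.groupby("stage_group").size().sort_values(ascending=False))
```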
Below are some examples of how to further refine your stale candidates to gain insight into what may be happening.

Stale candidates by stage group

Here you can see stale candidates by their application's current stage group. This reveals that the application review process is a significant contributor, but that there are candidates who have been inactive for 30+ days all the way down to the final rounds!

(Screenshot: stale candidates grouped by stage group.)

Stale candidates on open jobs

When grouping by open jobs, you may find that a small subset of jobs has a disproportionate number of stale candidates. One response here would be to review the hiring team, or to review capacity and load balancing.

(Screenshot: stale candidates grouped by open job.)

Stale candidates by source type

You can easily review the stale candidate selection by source type too. The plan of action may vary here, but seeing that many candidates are stale from direct inbound applications may suggest adjustments to your review process, while noting that 50+ referrals are stale may inform a higher-urgency SLA process for referrals.

(Screenshot: stale candidates grouped by source type.)

Stale candidates by recruiter

For load balancing and team management, you can review stale candidates by their assigned recruiter. This could be filtered down by stage group to identify in-process candidates, or used to identify that most stale candidates have no recruiter assigned at all (more on this latter point below).

(Screenshot: stale candidates grouped by assigned recruiter.)

More advanced reporting on stale candidates

The above examples are single slices of data, but Ashby does permit multiple groupings to give more insight. In this example you can see stale candidates by their department and stage group, revealing which departments have the greatest volume but also how far in process the stale candidates are.

(Screenshot: stale candidates grouped by department and stage group.)

Active Candidates on Closed Jobs

Another common data quality challenge is finding active candidates on jobs that have been closed. These candidates should be dispositioned, but the real goal is to understand why and when this happens. A basic filter selection will identify all active applications on closed jobs. With this selection in place, and similar to the many variations on stale candidate review, you can create reports that identify sources, owners, or patterns. In this case, the example report identifies which closed jobs have the most active applications.

(Screenshot: closed jobs with the most active applications.)

Not Scheduling Interviews in Your ATS

Some teams opt to schedule candidate interviews outside of their ATS. This is most common at the top of the funnel (initial calls, initial screens), but it can be a critical set of data to track to see whether upstream activity is feeding into downstream results. How to assess this will depend on your team's expectations, but the simplest way to get a top-level summary is to create a count over time, by week, of scheduled interviews per stage group. This way, if you see numbers that are definitively too low, you can investigate further.

(Screenshot: scheduled interviews per week by stage group.)
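As with the other examples, this count is built directly in Ashby's report builder. If you would rather spot-check it against an export of interview records, a rough pandas sketch follows; the file name and the scheduled_at and stage_group columns are hypothetical stand-ins for your own export.

```python
# Minimal sketch only: assumes a CSV export of interview events with
# hypothetical columns "scheduled_at" (datetime) and "stage_group".
import pandas as pd

interviews = pd.read_csv("interviews_export.csv", parse_dates=["scheduled_at"])

# Count scheduled interviews per week, split by stage group. Weeks with
# suspiciously low counts suggest interviews are being booked outside the ATS.
weekly = (
    interviews
    .groupby([pd.Grouper(key="scheduled_at", freq="W"), "stage_group"])
    .size()
    .unstack(fill_value=0)
)
print(weekly.tail(8))  # review the most recent eight weeks
```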
Additionally, you could use the Interview Load template report, or Segmentation reports, to look at interview records in more detail. See these examples in our report catalogues: docid\ aurrqfzh1h8qgf6rh0vqi, docid\ vowdo jjgdtwbzc4lirmc, and docid 24ronsz2xx5tozdsakiq9.

Missing Fields

Across candidates, jobs, applications, and all the other ATS-based records, there is a ton of information to manage. It's not uncommon to see candidates with no recruiter assigned, or with missing custom field values. Ashby can facilitate managing this by identifying non-compliant records easily via filters. The general recommendation here is to use filter selections for "is empty/unassigned" to identify records with no value. In this example, we select for all applications that have no source and group by the responsible recruiter to distribute responsibility for updating the application records.

(Screenshot: applications with no source, grouped by responsible recruiter.)

Common missing fields to review

The most common fields your team may want to establish data quality reporting for are the following:

- Source
- Credited to recruiter (on the job and on the candidate)
- Hiring manager
- Department / team

Your team's process may vary on whether some of these fields are present, so it may make sense to add an additional filter. For example, you may want to review data quality for hired candidates more carefully. This would involve checking offer letter details, hiring team information, and so on. Here we see a list of hires with no hiring manager on record, by department.

(Screenshot: hires with no hiring manager on record, by department.)

Ownership on Related Records

Your ATS may permit assigning an owner to related records, such as a job's recruiter or a candidate's recruiter. This ownership can often diverge, creating uncertainty.

Job's recruiter & candidate's recruiter

Ashby permits easily reporting on the fields and ownership associated with either the job or a candidate's application. At the bare minimum, your team may want to maintain awareness by reporting on both, but a diagnostic report can be made that groups all applications by their job's owner first, with a second grouping on the application's owner. Grouping by the job's recruiter and the application's recruiter allows you to see the mix of ownership. The results, after some additional filter options are added per your needs, may look like this:

(Screenshot: applications grouped by the job's recruiter and the application's recruiter.)

In this view, we see Chastity Okuneva (bright pink) has ownership over almost all applications on jobs owned by other recruiters. Conversely, Tevin (teal) is the owner of 96% of their jobs' applicants.
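The same mix can be audited outside the product if you keep an export of application data. A hedged sketch, with hypothetical job_recruiter and application_recruiter columns standing in for your own export:

```python
# Minimal sketch only: assumes a CSV export of applications with
# hypothetical columns "job_recruiter" and "application_recruiter".
import pandas as pd

applications = pd.read_csv("applications_export.csv")

# Cross-tabulate the job's recruiter against the application's recruiter.
# Off-diagonal cells are applications whose owner differs from the job's owner.
ownership = pd.crosstab(
    applications["job_recruiter"],
    applications["application_recruiter"],
    margins=True,  # add row and column totals
)
print(ownership)
```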
Opening, Requisition and Job Metadata

Your ATS may have a complex data model that permits setting fields such as departments or hiring managers on multiple distinct objects; for example, the job posting may have a department while the associated requisition has a different department. Similar to the above example with recruiter ownership on jobs and applications, you should consider this across other objects where it may apply. If you're a Lever-based customer, see our writeup on docid\ n9itxse3mud6j4kpg1ext for a detailed summary of how fields can differ across Lever objects.

Putting It All Together: Dashboards, Alerts, and Home Page Configuration

Once you've identified the critical reports of interest to your team, the recommendation is to compose your reports into respective dashboards. Some data quality issues are best dealt with immediately or at a weekly review, which is a perfect fit for Ashby's alerts. In addition, once you've set up your reports and dashboards, you'll want to ensure they are pinned to your team's home page for easy access. The references below cover all of these topics in greater detail: docid\ fckr kkhnmjz6vuu3krvw, docid\ q7a7cpfpyulmt6bd1qvym, docid\ fr6mwietf wlzag sl8xr.

The Data Quality Checklist

At a high level, the above examples can be summarized in a checklist you may want to review with your team. Based on your priorities, you may not want to address all of these at once.

- Define a general "stale candidate" condition.
- Report on stale candidates by the groupings that matter to your team (stage group, job, source type, recruiter).
- Report on active candidates on closed jobs.
- Define mandatory fields for all applications (recruiter, source, hiring manager, etc.).
- Are there stage-specific SLAs for time in process?
- Review your object ownership fields.
- Create an overall data quality dashboard (or dashboards) with the above reports.
- Pin the dashboard or individual reports to the appropriate user home pages.
- Celebrate, and rest well knowing data quality is under control with a stable process 🥳