Normalisation in next-generation sequencing (NGS) is the process of equalising the concentrations of multiple DNA libraries for multiplexing. Multiplexing helps maximise the use of expensive NGS technology, enabling parallel sequencing of hundreds, and often thousands, of libraries on a single flowcell, thereby driving down per-sample costs.

Uneven library concentrations, arising from different sample types and qualities, can lead to inconsistencies in data quality. Libraries with a high concentration are likely to be overrepresented on the flowcell, while those with a low concentration are underrepresented. Overrepresentation isn't necessarily a problem, since it increases read depth, but it does waste the run's finite data capacity. Underrepresentation can result in poor read depth and unreliable data, wasting both capacity and potentially precious library material.

From a cost standpoint, wasted capacity means additional work time re-preparing libraries, time that could be better spent on downstream analysis or preparing the next batch of libraries. From an application and outcome standpoint, analyses and decisions based on inaccurate or incomplete data will at best confuse research results or force experiments to be repeated, and at worst lead clinicians to miss key information that could point to a more appropriate treatment avenue.

Normalisation addresses these challenges by ensuring every library is represented equally and sequenced to sufficient depth.

Prior to normalisation, there are several options for quantitating library preps, varying in ease and accuracy. The quickest and most convenient methods (e.g.
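In practice, equal representation means pooling the same molar amount of each library. As an illustrative sketch (the function name and the example figures are our own, not from a specific protocol), assuming each library's molarity is already known:

```python
def equimolar_pool_volumes(molarities_nM, fmol_per_library=10.0):
    """Volume (uL) of each library to pipette into the pool so that
    every library contributes the same molar amount of template.
    For dilute solutions, 1 nM == 1 fmol/uL, so volume = amount / concentration."""
    return [fmol_per_library / m for m in molarities_nM]

# Three libraries quantitated at different molarities (hypothetical values):
volumes = equimolar_pool_volumes([10.0, 20.0, 40.0], fmol_per_library=20.0)
# -> [2.0, 1.0, 0.5] uL: the most concentrated library contributes the least volume
```

Note that the whole calculation hinges on the molarity estimates being accurate, which is exactly why the quantitation step discussed below matters.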
spectrophotometry-based) tend not to be that accurate. The most accurate methods, like quantitative PCR (qPCR), take time and precision, and rely on knowing the average fragment size in each library for dilution calculations.

One crucial factor influencing the accuracy of quantitation, and of the subsequent normalisation, is whether the quantitation method can specifically count adaptor-ligated (i.e. amplifiable) double-stranded DNA (dsDNA) molecules. These are the only molecules that will cluster on the flowcell and contribute to sequencing output. Illumina's best practice suggests using fluorometric or qPCR-based quantitation with most types of genomic DNA libraries.

Interestingly, no single method provides all the data you need with enough accuracy for normalisation. Though fluorometry and qPCR enable the most accurate quantitation, neither can estimate average fragment size, so it's often still necessary to check this by electrophoretic analysis (e.g. fragment analysers). Between fluorometry and qPCR, however, only the latter can specifically target useful adaptor-ligated molecules, by using primers complementary to the adaptor sequences. Quantifying only these viable sequencing templates gives you the best chance of normalising your libraries accurately.

Adaptor ligation efficiency can vary between individual samples and batches: it relies on enzymatic reactions that can be affected by impurities and by differences in the quality of the starting material. Quantitating with no specificity for adaptor-ligated molecules (as with fluorometry) therefore makes you more likely to overestimate the sequencing-competent library concentration and over-dilute.
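The dilution calculation mentioned above can be sketched as follows. The conversion uses the standard average mass of a dsDNA base pair (~660 g/mol); the function names and example figures are our own, for illustration only:

```python
def library_molarity_nM(conc_ng_per_ul, avg_fragment_bp):
    """Convert a mass concentration (ng/uL, e.g. from a fluorometer or
    qPCR readout) to molarity (nM), assuming dsDNA at ~660 g/mol per
    base pair. This is why average fragment size must be known."""
    return (conc_ng_per_ul * 1e6) / (660.0 * avg_fragment_bp)

def dilution_volumes(stock_nM, target_nM, final_ul):
    """C1*V1 = C2*V2: volumes of stock library and diluent needed to
    reach target_nM in a final volume of final_ul."""
    v_stock = (target_nM * final_ul) / stock_nM
    return v_stock, final_ul - v_stock
```

For example, a 400 bp library measured at 10 ng/µL works out to roughly 37.9 nM, so normalising it to 4 nM in 20 µL takes about 2.1 µL of stock plus 17.9 µL of diluent. If the 10 ng/µL figure includes unligated fragments, the true amplifiable molarity is lower and this dilution overshoots, which is the over-dilution risk described above.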
Fluorometry, being cheaper but less accurate, can substitute for qPCR only when the starting material and the end repair/adaptor ligation step of your library prep workflow are of high and consistent quality.

This is an open-access protocol distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike licence.