
Ptychography live cell imaging

An Introduction to the Microscopy Facility Toolkit

Building a microscopy facility must be a daunting task, given the vast array of imaging technologies and applications. Gone are the days of the humble compound microscope, whose predecessor illuminated the ‘cell’ as coined by Robert Hooke1. I can only imagine the excitement of such a discovery, and ponder how many times microscopy has yielded such emotion since then. Microscopy unlocks a window into a wondrous world; there is no debate around its value, only around which technology is the most useful.

It is overly simplistic to think that only a few technologies are needed to cover the gamut of application needs. As with most things, it’s a case of ‘horses for courses’: one technology is great at specific functions but not so fantastic at others. As technology marches on, the shiny new ‘must have’ attracts the researcher desperate to publish novel findings elucidated by a magical black box developed by a clever physicist. This hunger is fed by the perennial ‘publish or perish’ mantra, exacerbated by the anecdotal tendency for novel technology to gain favourable acceptance by journals. The issue of competing interests should not be overlooked.

The Arms Race

The arms race in microscopy is not driven by the researcher alone. Consider the microscope manufacturer’s desire – a fear of missing out – to add their own flair to a development they did not invent, offering a me-too product with added ‘bells and whistles’ to gain the upper hand in a sale and capitalise on brand loyalty. Super-resolution microscopy is a great example. By delivering optical images with spatial resolutions below the diffraction limit, several super-resolution fluorescence microscopy techniques – notably STED, SSIM, PALM, STORM and RESOLFT – have opened new opportunities to study biological structures with details approaching molecular structure sizes2, bending the frontier beyond what was thought impossible. Beautiful discoveries; however, these techniques still have their limitations.

A divergence is occurring: a whole new world of computational microscopy is emerging to enhance the image we no longer see directly. This concept is spawning a plethora of variations on the theme in the race to the ultimate resolution. While many are consumed by this resolution race, others are inspired not just by the individual cell, but by the population – the company it keeps – in equal measure. We can all plead guilty to seeing only the majority and overlooking the outlier: visualise a cluster in a flow cytometry experiment and consider the gating applied to the data, along with the assumptions behind those actions. The outlier may well be the cell that is going to kill the patient.

Why bother?

Perhaps it is time we considered a better way of investigating the behaviour of live cells – a method with a very large field of view to improve the statistics, and a way to view cells for extended time frames, even weeks, without perturbing, killing, or bleaching them. A gentle, nourishing environment that keeps cells happy and label-free is ideal. You may be thinking: why bother, given there are so many options for microscopes? In short, signal-to-noise ratio is limited when using standard microscopy methods. While the ubiquitous use of fluorescence may seem to have solved this, new data is emerging that, arguably and at times unknowingly, has exposed fluorescence as a major influencer of cell behaviour itself. Fluorescence alone rarely delivers such powerful insight, from single-cell tracking and metrics right through to entire populations.

How does Ptychography work? 

Ptychography is not only an unusual name but an unusual technology. It is a hybrid of sorts: a blend between an optical microscope and a light-scattering detector. Briefly, the optical microscope sends light through a sample and a scatter, or diffraction, pattern is received on the detector. A digital method – the Phasefocus virtual lens – then translates the diffraction pattern into a quantitative image. The patterns are processed by the ptychography algorithm to produce quantitative intensity and phase images of the sample. To explore this further, see the Phasefocus virtual lens3.
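The principle can be illustrated with a toy forward model (a sketch of the physics only; all sizes and values below are illustrative assumptions, and this is not the Phasefocus reconstruction algorithm): a localised probe illuminates a complex-valued specimen, and the detector records only the intensity of the far-field diffraction pattern. The reconstruction's job is to recover the phase that the detector discards.

```python
import numpy as np

# Toy forward model of ptychography (a sketch of the physics only; all
# sizes and values are illustrative, and this is not the Phasefocus
# reconstruction algorithm). A localised probe illuminates a complex
# specimen; the detector records only the intensity of the far-field
# diffraction pattern, so the phase must be recovered computationally.

n = 64
yy, xx = np.mgrid[0:n, 0:n]
r2 = (xx - n / 2) ** 2 + (yy - n / 2) ** 2

# Complex specimen: a weakly absorbing "cell" that mostly shifts phase
cell = r2 < (n / 6) ** 2
specimen = np.exp(-0.05 * cell) * np.exp(1j * 0.8 * cell)

# Localised illumination probe (a Gaussian spot)
probe = np.exp(-r2 / (2 * (n / 8) ** 2))

# Exit wave and the measured diffraction intensity (phase is lost here)
exit_wave = probe * specimen
diffraction = np.abs(np.fft.fftshift(np.fft.fft2(exit_wave))) ** 2

print(diffraction.shape)              # (64, 64)
print(np.iscomplexobj(diffraction))   # False - only intensity reaches the detector
```

A real ptychography experiment repeats this measurement at many overlapping probe positions, and an iterative algorithm uses that redundancy to recover both the intensity and the phase image.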

The result is a dramatic increase in signal-to-noise ratio, providing the opportunity to avoid fluorescent probes and enabling a very low-intensity light source that avoids cellular damage. This is a big win for the cells, and for the researcher trying to see them in an environment as natural as possible.

A Bitter Pill

It is tough to acknowledge that fluorescence, the core method employed by so many in cell biology, may be delivering flawed results. Several studies run on cells in parallel, with and without a fluorescent label, have shown profound changes to behaviours and other indicators such as proliferation, motility, and dry mass. This prompts the question: why aren’t more facilities using this technology, if only as a confirmatory application, notwithstanding the enormity of insight it offers beyond this? One could postulate it may be because users are blinded by resolution.

What am I likely to see in a facility?

Summarising the breadth of microscopes with their strengths and weaknesses is not a simple task, and in doing so assumptions must be made, along with the lens we look through. The image below gives a summary, but more importantly, it shows where Livecyte fits into the scheme of a broader microscopy facility. It is worth noting that Livecyte does not replace any of these instruments; it enhances the offering within a facility.

To gain a little insight into cells doing weird things, we encourage you to follow the Phasefocus Twitter feed: https://twitter.com/PhaseFocus1

Further to this, please contact us at ATA Scientific. We will be happy to introduce you to some systems, perhaps arrange a demonstration. Call us on +61 2 9541 3500 or send an email to pdavis@atascientific.com.au

References

  1. https://www.nationalgeographic.org/encyclopedia/cell-theory/ (accessed 1 February 2022)
  2. Godin AG, Lounis B, Cognet L. Super-resolution microscopy approaches for live cell imaging. Biophys J. 2014;107(8):1777-1784. doi:10.1016/j.bpj.2014.08.028
  3. The Virtual Lens, Phasefocus. https://www.phasefocus.com/technology (accessed 7 February 2022)

Characterising Lipid Nanoparticles for Vaccine Development

Lipid nanoparticles (LNPs), either loaded with nucleic acids or as liposomes containing an aqueous core, have received great interest from pharma as delivery vehicles for different therapeutic treatments, for many different reasons. LNPs offer improved stability and delivery efficiency by protecting drug molecules from degradation by the body’s natural immune processes. Moreover, an LNP can be specifically targeted using customised ligands attached to its surface.

The breakthrough of mRNA-based vaccines

The fast pace of progress in mRNA vaccines (e.g. for COVID-19) would not have been possible without major recent advances in RNA encapsulation and delivery methods. Recent breakthroughs with mRNA-based vaccines highlight the potential of lipid-based particles as powerful and versatile delivery vectors for vaccines and gene therapies, to treat previously untreatable diseases. Extensive basic research into RNA and lipid and polymer biochemistry has made it possible to translate mRNA vaccines into clinical trials and has led to an astonishing pace of global vaccination.

LNPs have been found to be the most effective mRNA formulation and delivery approach, functioning to protect the mRNA from degradation when injected into the patient and to promote entry of the mRNA into cells. LNPs typically consist of four components: an ionizable cationic lipid, which promotes self-assembly into virus-sized (~100 nm) particles and enables endosomal release of mRNA to the cytoplasm; lipid-linked polyethylene glycol (PEG), which increases the half-life of formulations; cholesterol, a stabilising agent; and naturally occurring phospholipids, which support the lipid bilayer structure. Inactive ingredients such as salts, sugars, and stabilising acids are added to achieve formulation stability during transport and storage.

Analytical characterisation of these nanoparticles is critical to drug design, formulation development, understanding in vivo performance, as well as quality control during formulation and manufacture. The use of ever-more structurally complex molecules warrants a growing requirement for complementary and orthogonal analytics to ensure data quality and the reliability of research. 

How does Malvern Panalytical contribute to the characterisation of lipid nanoparticles?

When developing LNP drug candidates, several critical quality attributes (CQAs) need to be addressed. The particle size, particle size distribution and concentration CQAs can be measured using Dynamic Light Scattering (DLS) and Nanoparticle Tracking Analysis (NTA), techniques that are both orthogonal and complementary. Surface charge, another CQA, can be probed using Electrophoretic Light Scattering (ELS) as a measure of colloidal stability.

Particle size and stability using light scattering techniques

Light scattering techniques are used extensively in the characterisation of lipid nanoparticle and liposome research to measure particle size, stability, zeta potential and particle concentration. The Zetasizer range of light scattering instruments can be used to optimise lipid-based formulations and process conditions, such as monitoring stability, understanding surface modification, and developing formulations. Non-invasive backscatter (NIBS) optics enable reliable measurements of concentrated, turbid samples without the need for dilution. Delivering data in a short time frame, the Zetasizer allows users to implement this technique throughout the development pipeline. 
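At the heart of a DLS size measurement is the Stokes-Einstein relation, which converts the diffusion coefficient extracted from the scattered-intensity fluctuations into a hydrodynamic diameter. A minimal sketch with illustrative values (water at 25 °C; not Zetasizer output):

```python
import math

# Sketch of the Stokes-Einstein relation behind a DLS size measurement.
# The temperature, viscosity and diffusion coefficient below are
# illustrative values, not output from a Zetasizer.

def hydrodynamic_diameter(D, T=298.15, viscosity=0.89e-3):
    """Hydrodynamic diameter (m) from diffusion coefficient D (m^2/s):
    d = k_B * T / (3 * pi * eta * D)."""
    k_B = 1.380649e-23  # Boltzmann constant, J/K
    return k_B * T / (3 * math.pi * viscosity * D)

# A particle diffusing at ~4.9e-12 m^2/s in water at 25 degC is ~100 nm
d = hydrodynamic_diameter(4.9e-12)
print(f"hydrodynamic diameter: {d * 1e9:.0f} nm")
```

Because smaller particles diffuse faster, the measured diameter falls as the diffusion coefficient rises, which is why temperature and viscosity must be known accurately for the conversion.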

Aggregation/encapsulation efficiency using Nanoparticle Tracking Analysis (NTA)

NTA allows you to visualise and size individual particles in the preparation, generating important information about nanoparticle content. For instance, the presence of larger particles could represent either non-viral cell debris from the cell culture process or aggregates of virus particles containing many individual virions. In either case, such aggregates or contaminants represent a possible problem for the manufacturer. NanoSight helps vaccine developers devise a solution.
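The principle behind NTA sizing can be sketched with a simulated track (illustrative values only, not NanoSight data): each particle's position trace is reduced to a mean squared displacement (MSD), which yields a per-particle diffusion coefficient, and hence a size via the Stokes-Einstein relation.

```python
import numpy as np

# Sketch of the per-particle sizing principle behind NTA. The track is
# simulated Brownian motion, not NanoSight data; frame rate and diffusion
# coefficient are illustrative assumptions.

rng = np.random.default_rng(0)
dt = 1 / 30.0        # frame interval, s (assumed 30 fps camera)
D_true = 4.4e-12     # m^2/s, roughly a 100 nm particle in water
n_steps = 2000

# One 2D Brownian track: per-axis steps are N(0, 2*D*dt)
steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), size=(n_steps, 2))
track = np.cumsum(steps, axis=0)

# Reduce the track to a mean squared frame-to-frame displacement,
# then invert MSD = 4*D*dt (2D diffusion) for this particle's D
msd = np.mean(np.sum(np.diff(track, axis=0) ** 2, axis=1))
D_est = msd / (4 * dt)
print(f"estimated D = {D_est:.2e} m^2/s (true {D_true:.2e})")
```

Repeating this for every tracked particle gives the particle-by-particle size distribution that lets NTA flag subpopulations of aggregates, rather than a single ensemble average.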

Biomolecular interactions using Isothermal Titration Calorimetry (ITC) and Differential Scanning Calorimetry (DSC)

Detailed characterisation of protein structure enables understanding of protein function, and is therefore among the activities central to academic and industrial research and development. ITC is used in quantitative studies of a range of biomolecular interactions; it works by directly measuring the heat that is either released or absorbed during a biomolecular binding event. By providing a complete thermodynamic profile of the molecular interaction, ITC can explain the mechanisms underlying interactions and enable more confident decision-making in hit selection and lead optimisation.
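That thermodynamic profile follows from two standard relations, dG = -RT ln(Ka) and dG = dH - TdS. A minimal sketch with illustrative, assumed numbers (not readings from a specific instrument):

```python
import math

# Sketch of the thermodynamic profile an ITC experiment provides.
# K_d and dH are illustrative, assumed numbers, not instrument output.

R = 8.314      # gas constant, J/(mol*K)
T = 298.15     # temperature, K
K_d = 1e-7     # dissociation constant, M (a 100 nM binder)
dH = -60e3     # binding enthalpy, J/mol (measured directly by ITC)

dG = R * T * math.log(K_d)   # dG = -RT ln(Ka) = RT ln(Kd)
TdS = dH - dG                # from dG = dH - T*dS
print(f"dG = {dG / 1000:.1f} kJ/mol, dH = {dH / 1000:.1f} kJ/mol, "
      f"-TdS = {-TdS / 1000:.1f} kJ/mol")
```

Here a 100 nM binder with a large favourable enthalpy is revealed to pay an entropic penalty – exactly the kind of mechanistic detail a complete profile provides beyond the affinity alone.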

DSC is used to characterise stability of a protein or other biomolecules directly in its native form and achieves this by measuring heat change associated with the molecule’s thermal denaturation. Precise and high-quality data obtained from DSC provides vital information on protein stability in process development, and in the formulation of potential therapeutic candidates. The ‘first principle’ nature and high resolution of DSC makes it a well-established technique for extended structural characterisation and stability profiling of biomolecules and viruses in solution. Due to its direct readout, broad temperature range and sensitivity to thermally-induced unfolding, DSC is also used as the gold standard technique for validation of data from higher throughput thermal stability assays.

How does RedShiftBio (MMS technology) contribute to the characterisation of lipid nanoparticles?

The nature and composition of a vaccine makes them inherently difficult to characterise. The active ingredient such as a toxoid or protein subunit is often in very low concentration. The adjuvant, such as alum, can often be in quite high concentration relative to the biologic, and its particle size can range from nanometers, to microns. Adding preservatives and antibiotics to the mix, the result is a material with such diverse properties that many typical analytical tools struggle to measure the formulated product.

Microfluidic Modulation Spectroscopy (MMS) combines mid-infrared laser spectroscopy with microfluidics to overcome many of the limitations of traditional spectroscopy-based technologies. Its ability to provide multiple-attribute data reduces or eliminates the need to perform separate measurements across different tools. MMS offers ultrasensitive, highly reproducible, automated structural measurements of protein aggregation, quantitation, stability, structure and similarity – measurements that underpin drug safety and efficacy. This novel method can be used to examine protein secondary structure for a wide range of applications, from mAb-based biotherapeutics to robust measurements of ADCs, AAVs, and mRNA. It provides direct, label-free measurements over the concentration range <0.1 mg/ml to >200 mg/ml. Real-time background subtraction eliminates the need to dialyse samples. Even at low concentrations, the AQS3 pro system allows detection of <2% change in secondary structure.

How does Fluidity One (MDS technology) contribute to the characterisation of lipid nanoparticles?

Characterising membrane proteins and their interactions with lipids remains a major challenge. Traditional methods can involve the use of detergents, which often cause the loss of the native lipids surrounding membrane proteins, ultimately impacting structural and functional properties. Microfluidic Diffusional Sizing (MDS) is a new method that can be used to determine the size and concentration of protein samples for quality control purposes through laminar-flow diffusion. MDS can also be applied to evaluate protein–ligand and protein–lipid interactions. The method can detect virtually any primary amine-containing molecule, emphasising its versatility.

The Fluidity One-W uses MDS technology to study protein complexes and their formation in crude biological backgrounds such as cell lysates or blood plasma. Using small sample volumes (<10 μL) and in a relatively short time (<15 min), MDS can determine the diameter of lipid particles with high precision. These results can then be confirmed by supportive DLS data using the Malvern Zetasizer system.

We can help manufacture your success 

ATA Scientific provides a range of physicochemical characterisation tools that are used from the initial characterisation of biological materials through to final manufacturing and quality control, and which deliver information essential to ensuring the stability and efficacy of the vaccine product.

Contact us for more information on the latest analytical technologies from Malvern Panalytical (DLS, NTA, ITC/DSC), RedShiftBio (MMS technology) and Fluidity One-W (MDS technology).

The Complexity of the Macrophage Immune Response & How To Solve It

Livecyte combines high contrast label-free imaging with correlative fluorescence and automatic cell tracking algorithms to provide a complete solution for quantifying phagocyte behaviour down to the single-cell level. This enables users to: 

  • Automatically quantify macrophage phenotypic behaviour, label-free, to measure immune response
  • Quantify down to the single-cell level for a more accurate measure of phagocytosis than standard population-level analyses
  • Reveal subtle behaviour changes, from how phagocyte activity changes over time through to saturation

Recently, Phasefocus have turned their research to macrophages, the commonly misunderstood heroes of our immune response. Livecyte has uncovered some interesting phenotypes that were previously only in the realm of theory!

What is the macrophage immune response?

Macrophages are effector cells of the innate immune system that phagocytose bacteria and secrete both pro-inflammatory and antimicrobial mediators(1). In addition, macrophages play an important role in eliminating diseased and damaged cells that have undergone programmed cell death. Generally, macrophages ingest and degrade dead cells, debris, tumour cells, and foreign materials, and they exist in multiple states of readiness.

In tissues, they are typically in a “resting” state where they slowly proliferate and clear up debris. 

Macrophage activation occurs when these resting macrophages receive chemical signals alerting them to the proximity of invaders. For example, lipopolysaccharide (LPS), a component of the outer cell membrane of Gram-negative bacteria such as E. coli, can be shed by these bacteria and bind to receptors on the surface of macrophages. In response, the macrophages become activated, produce inflammatory cytokines and reactive oxygen and nitrogen species, and begin phagocytosing the foreign bodies.

What’s the problem we are trying to solve? 

Macrophage immune response is a complex, multifaceted process involving changes to many aspects of macrophage phenotypic behaviour. Current leading applications provide only a population-level analysis, using fluorescence expression.

Population-level analyses that, for example, look at total fluorescence or the pixel area of expression oversimplify the response and completely miss much of the richness of the phenotypic response. They are also vulnerable to confounding effects from cell proliferation and variation in seeding density.
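The confounding effect is easy to demonstrate with simulated numbers (illustrative only, not Livecyte data): two wells with the same per-cell uptake distribution but different seeding densities give very different population totals, while the per-cell median barely moves.

```python
import numpy as np

# Simulated illustration (not Livecyte data): two wells share the same
# per-cell uptake distribution but differ in seeding density. Population
# totals diverge by ~2x while the per-cell median is essentially unchanged.

rng = np.random.default_rng(1)
low_density = rng.lognormal(mean=3.0, sigma=0.4, size=500)    # 500 cells
high_density = rng.lognormal(mean=3.0, sigma=0.4, size=1000)  # 1000 cells

print(f"population total: {low_density.sum():.0f} vs {high_density.sum():.0f}")
print(f"per-cell median:  {np.median(low_density):.1f} vs {np.median(high_density):.1f}")
```

A total-fluorescence readout would report the denser well as twice as active, even though individual cells behave identically in both.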

How does Livecyte solve this? 

Livecyte solves this by measuring immune response at the single-cell level. This provides a new level of accuracy in phagocytosis quantification and preserves the heterogeneity of the response. It also brings all the standard benefits of Livecyte to the application, such as reduced phototoxicity and measurement of the full breadth of the phagocyte phenotypic response, enabling the correlation of these behaviours and new scientific insights.

In a recent application note, AN018 Macrophage Phagocytosis of Bioparticles, Phasefocus examined the effects of treating RAW cells with cytochalasin D, an actin inhibitor. Livecyte was used to detect proliferative and morphological changes to macrophages corresponding with phagocytic activity. Using quantitative, correlative fluorescence detection, Livecyte measured changes in the rate of phagocytosis of pHrodo Green E. coli Bioparticles.

In this study, the Livecyte Fluorescence Dashboard showed a dose-dependent reduction in the median fluorescence intensity of the cells, and therefore a reduction in cell phagocytosis, with cytochalasin D treatment. Livecyte’s high-content data generation enabled time-lapse data for individual cells to be observed, giving a far more in-depth analysis of an experiment than end-point or total-population metrics. Livecyte can observe individual cell behaviour, and can therefore measure fluorescence on a single-cell basis, removing assay sensitivity to initial cell count and allowing individual macrophage phagocytosis to be measured independently of cellular proliferation(3).

To summarise the findings of application note AN017 Quantifying Macrophage Activation and Proliferation: Phasefocus characterised subtle changes in macrophage phenotype in response to inflammatory stimuli. By analysing cell count and cell dry mass values they were able to identify proliferation and growth of cells independently. A prioritisation towards pro-inflammatory signalling and growth was identified, with an inhibition of the proliferation pathway in cells treated with LPS. This is known to occur in activated macrophages to meet the demands of cell growth and the production of bactericidal factors. The effect was exacerbated by the addition of the pro-inflammatory cytokine IFNγ, suggesting a synergistic effect of these two pro-inflammatory mediators(2).

In application note AN019 – Macrophage Phagocytosis of Apoptotic Cells, Livecyte enabled users to reliably investigate time-sensitive changes of immune cells in response to target cells, enhancing our understanding of efferocytosis regulation. From a single experiment, it was possible to generate a host of quantitative insights to substantiate and validate existing data, as well as bring new facets to the investigation of the complex mechanisms that make up a biological response. Livecyte is a crucial tool in bringing a new dimension to traditional in vitro immune assays, advancing the knowledge and understanding of these pathways(4).

Streamline your work

To truly appreciate the power of Livecyte, we encourage much closer scrutiny of these and other applications, as you would any scientific paper.

Take a look at what Greg Perry, Image Resource Facility Microscopy Manager at St George’s University of London, has to say. “One professor has been using the Livecyte on a daily basis and his work flow has become so streamlined that he has managed to get a few months’ worth of work done in just one.”

ATA Scientific manages Livecyte throughout Australia and New Zealand. We can arrange virtual and live demonstrations of this amazing microscope. The Livecyte really is the solution to the problem you never knew you had, uncovering novel discoveries using a platform that has transformed other fields, won multiple awards and broken Guinness records. Your research is far too important to miss out on Livecyte; to paraphrase Dr Peter O’Toole of the University of York, “every microscopy facility should have a Livecyte”. Contact Peter Davis at ATA Scientific for further information: pdavis@atascientific.com.au.

References

1. Hirayama D, Iida T, Nakase H. The Phagocytic Function of Macrophage-Enforcing Innate Immunity and Tissue Homeostasis. Int J Mol Sci. 2017;19(1):92. Published 2017 Dec 29. doi:10.3390/ijms19010092

2. Phasefocus – AN017 – Quantifying Macrophage Activation and Proliferation. https://www.phasefocus.com/resources/app-notes

3. Phasefocus – AN018 – Macrophage Phagocytosis of Bioparticles. https://www.phasefocus.com/resources/app-notes

4. Phasefocus – AN019 – Macrophage Phagocytosis of Apoptotic Cells. https://www.phasefocus.com/resources/app-notes

Changing the Way We Measure ‘Undetectable’ Differences in Protein Structure

How do you resolve an issue in a product when your analytical methods fail to indicate any difference between batches? Consider the ramifications should your pharmaceutical product cause harm despite the best efforts of your procedures, not to mention the need to analyse in non-native conditions. Most basic assessments cannot delineate conformational effects from colloidal effects.

Making complex processes simple

Biophysical characterisation of proteins can be a daunting task, given the number of methods required to elucidate results for aggregation, quantitation, stability, similarity and structure. RedShiftBio’s AQS3 pro, powered by Microfluidic Modulation Spectroscopy (MMS), can perform these five measurements in a single analysis. Sensitivity is improved by up to 30 times over conventional FT/IR instrumentation, with a concentration range of 0.1 mg/ml to over 200 mg/ml and protein analytics that are equally appealing to the operator. Savings in sample testing time can exceed 80% thanks to the fully automated multi-sample capability.

Often the buffers used in a formulation are not compatible with the analytical method, as seen with spectropolarimetry. The AQS3 pro experiences no interference from excipients in the buffer. This is a game changer: measuring at concentration, and in the final drug conditions, removes the guesswork from formulation and de-risks many steps.

What is the magic behind the AQS3 pro system?

There are three key components in the AQS3 pro that allow the system to achieve all this and separate it from all other systems:

  1. a mid-IR tuneable quantum cascade laser
  2. a thermoelectrically cooled detector
  3. a Y-shaped microfluidic transmission cell.

The tuneable laser provides an optical signal almost 100 times brighter than the conventional light source used in FT/IR, allowing the use of a simple thermoelectrically cooled detector without the need for liquid-nitrogen cooling. Given the intensity of the laser, the system is amenable to samples at concentrations as low as 0.1 mg/ml. The measurement differs from the conventional approach as well: the sample and the reference stream are injected alternately through the Y-shaped microfluidic cell, passing through the observation zone. Alternating at a rate of 1–5 Hz, the absorbances of the reference and sample are measured almost simultaneously, allowing the reference absorbance to be subtracted from the sample absorbance in real time and resulting in the collection of reference-corrected absorbance spectra. This real-time buffer subtraction and auto-referencing greatly enhances the sensitivity of the method and produces an almost drift-free signal.
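The benefit of this alternation can be sketched numerically (simulated signals with assumed values, not real instrument readings): a slow baseline drift that would swamp a one-off reference measurement cancels almost completely when sample and reference readings are interleaved and subtracted in pairs.

```python
import numpy as np

# Simulated illustration of interleaved sample/reference measurement.
# Drift magnitude, noise level and signal size are assumed values,
# not real AQS3 pro readings.

rng = np.random.default_rng(2)
n_pairs = 200
true_signal = 0.010   # sample-minus-buffer absorbance we want to recover

# A slow baseline drift affects every reading, plus small random noise
drift = np.linspace(0.0, 0.05, 2 * n_pairs)
readings = drift + rng.normal(0.0, 1e-4, size=2 * n_pairs)
readings[0::2] += true_signal   # even readings: sample stream
                                # odd readings: reference (buffer) stream

# Subtracting each reading from its immediate neighbour cancels the drift
differential = readings[0::2] - readings[1::2]
print(f"recovered signal: {differential.mean():.4f}")  # close to the true 0.010
```

Because each sample reading is referenced against a buffer reading taken a fraction of a second later, the drift contribution to each pair is tiny, even though the total drift over the run is several times larger than the signal itself.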

These innovations create the 30-fold sensitivity boost. The speed of the AQS3 pro also impresses: where FT/IR may take 30 minutes per sample, the AQS3 pro with MMS technology can measure samples in less than 1.5 minutes. The AQS3 delta analytics package is refreshingly simple and intuitive, applying advanced analytical tools that enable its use in a vast range of applications across the industry.

What does the AQS3 pro actually measure?

The AQS3 pro is an infrared system that measures the Amide I band of a protein, which lies in the wavenumber range of roughly 1580–1720 cm⁻¹. It takes around 32 data points across this range, alternating between buffer and sample, and plots a differential absorbance that is interpolated into a spectrum. This in turn is converted to a second derivative, allowing fine changes from spectrum to spectrum to be elucidated. Flipping the spectra shows the baselines, and an area-of-overlap plot enables you to look at similarity and trends between spectra. The spectrum can then be deconvoluted using standard Gaussian deconvolution, giving rise to the secondary structure motifs – the percentages of components such as α-helix, β-sheet, and anti-parallel β-sheet higher order structures (HOS).
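The second-derivative step can be sketched on a synthetic Amide I band (illustrative band positions and widths; not AQS3 data): two overlapping Gaussian components merge into one broad peak, yet the minima of the second derivative still resolve the underlying component positions.

```python
import numpy as np

# Sketch of second-derivative band resolution on a synthetic Amide I band.
# Band positions, widths and amplitudes are illustrative assumptions, and
# this is not the AQS3 delta analysis software.

wavenumber = np.linspace(1580, 1720, 141)  # cm^-1, 1 cm^-1 spacing

def gauss(x, centre, width):
    return np.exp(-((x - centre) ** 2) / (2 * width ** 2))

# Overlapping bands near typical beta-sheet (~1635) and alpha-helix (~1655)
spectrum = 0.8 * gauss(wavenumber, 1635, 10) + 1.0 * gauss(wavenumber, 1655, 10)

# Second derivative; its negative dips mark the hidden band centres
d2 = np.gradient(np.gradient(spectrum, wavenumber), wavenumber)
interior = np.arange(1, len(d2) - 1)
local_min = interior[(d2[interior] < d2[interior - 1]) & (d2[interior] < d2[interior + 1])]
band_centres = wavenumber[local_min[d2[local_min] < 0.3 * d2.min()]]
print(band_centres)  # two centres, near 1635 and 1655 cm^-1
```

In practice measured spectra are noisier, so the derivative is taken with smoothing before component fitting, but the principle is the same: structure that is invisible in the raw band becomes separable in the second derivative.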

Case study: complexity of a vaccine.

The diverse composition of a vaccine makes biophysical characterisation challenging. Not only does a vaccine contain antigens and antibodies, it can also have an array of excipients such as preservatives, stabilisers, and buffers, and may contain antibiotics. The concentrations of the components vary by orders of magnitude, and the particle size distribution of a final formulation can range from nanometres to microns.

The additional ingredients can introduce protein and non-protein, organic and inorganic materials. If you add up the number of tests and the instrumentation required to perform them, the workload can be incredibly diverse. MMS is a single technology that provides unique insight into many of the parameters required to fully understand the biophysical properties of the sample – from examining the interaction effects of antigen and adjuvant, to varying stabiliser concentrations, to altering the buffer pH. All these steps can be performed to understand their effect on protein secondary structure.

The total characterisation of a protein-based vaccine should include:

Biophysical characterisation – Understand protein antigen properties such as pH, ionic strength, HOS, and aggregation propensity.

Stabiliser screening – Stabilisers such as amino acids, surfactants, proteins, sugars and antioxidants convey stability and improve shelf life.

Adjuvant screening – Adjuvant surface chemistry (e.g. alum) as well as adjuvant–antigen interactions should be characterised.

Process design and control – Process design and vaccine interactions with the surfaces of process equipment should be understood, i.e. protein loss on filtration substrates as well as denaturation of protein samples.

Stability studies – Real-time and accelerated stability studies ensure the antigen is in a chemically and physically stable state.

With this in mind, let’s look at the tools and parameters needed for such biophysical characterisation of protein antigens; from molecular weight in a chromatography system to surface charge on an electrophoretic light scattering system. A few are listed below.

LC – HPLC and SEC – Liquid chromatography in the form of reverse-phase, ion-exchange or size-exclusion separations can be used to assess chemical and physical stability.

DSC – Differential Scanning Calorimetry – Thermal stability properties of the antigen give insights into formulation conditions.

MALDI – MALDI-TOF – High-resolution molecular weight providing information about primary structure and post-translational modifications.

ELS – Electrophoretic Light Scattering – Surface charge of pure adjuvant versus adjuvant in formulation will identify protein–adjuvant interactions.

DLS – Dynamic Light Scattering – Particle size distribution of antigen, adjuvant and complex mixtures; colloidal stability parameters kD and B22.

It is important to note that all of these parameters need to be measured at some point during the development of a protein vaccine.

Where does MMS fit into the development of vaccines? 

Given that MMS measures protein secondary structure, we can leverage this capability to measure in final formulation conditions, assessing HOS as stabilisers are varied to determine whether they impart a stabilising effect and prevent aggregation. The AQS3 pro can be used for adjuvant screening, investigating the interaction to determine whether it damages the antigen. Process design can be studied to establish, for example, whether a filtration membrane negatively affects the structure of the antigen, and long-term stability studies help in understanding the stability of the formulation. It is now clear that MMS can be applied across the entire development pipeline, with a wide range of measurements to help understand the full properties of protein-based vaccines.

De-Risk Drug Development

MMS’s sensitive, accurate measurements, coupled with a robust data analysis package, provide simple, accessible, and reliable results to de-risk and accelerate drug development workflows. The AQS3 pro can identify at-risk candidates far earlier than traditional methods, clearly saving time and resources.

ATA Scientific are proud to have the RedShiftBio AQS3 pro within our suite of instrumentation. Should you require further information on the AQS3 pro or indeed many of the techniques cited above, please do not hesitate to contact us.

The Role of Lipid Nanoparticles in Vaccine Development

The recent approval of COVID vaccines from Moderna and Pfizer based on mRNA-containing lipid nanoparticles (LNPs) has propelled this pioneering technology, shifting it from being viewed as merely speculative research to becoming transformative in the area of genetic medicines. The pharmaceutical world has seen vaccine development experience a sharp jolt, evolving from the 1950s concept of one egg, one vaccine dose, and bulk cellular expansion in bioreactors, to a highly efficient and timely manufacturing protocol. The advent of mRNA-containing LNPs has enabled a highly effective new vaccine platform, but at the same time has raised many questions.

Why has this new technology changed our dependence on cell cultured vaccines? Will the NanoAssemblr democratise vaccine production globally?

The 4 pillars of RNA vaccine development

Classically there are four pillars of vaccine development; individually they are inconsequential, but together they can mount a formidable assault on pathogenic viral invaders! These pillars are antigens, vectors, delivery nanoparticles, and manufacturing. Each pillar has its own set of design and development considerations and associated technologies.

To understand each pillar, it helps to explore it in a little more detail. An antigen is what the body’s defences identify as foreign and will attack. In the case of COVID, much of the focus has been on the characteristic spike protein. Some of the vaccines are designed to create a copy of this passive component of the virus to educate the immune system to respond by producing antibodies. A viral vector is generally a harmless virus (often an adenovirus) that is hijacked to carry the genetic code of interest (i.e. the spike protein) into the cells, like a letter placed into an envelope to be posted. In this discussion, consider the vector a replicon virus such as an alphavirus, where the structural protein code is substituted for the Gene Of Interest (GOI), the spike protein, while retaining the four non-structural proteins (NSPs) that work together to actively replicate the GOI once inside the cell, ultimately producing a huge amount of antigen: a self-amplifying RNA1. A bit of RNA code is quite vulnerable; your body is trained to destroy free-floating RNA, so it needs to catch an Uber to enter the body and, ultimately, the cells. This Uber needs to be able to navigate around our defences. To do this, a suitable camouflage is needed.

A Lipid Nanoparticle (LNP) is perfect to encapsulate the code thereby becoming the delivery pillar and it has huge scalable potential for manufacturing, which is the next pillar. Whilst it is a challenge to bring all these components together, the ultimate challenge is to manufacture to scale. Translating research to usable medicine is often a bottleneck and many candidates fall over at this stage. It is exceptionally important to de-risk the process well before this stage. This is where the Precision Nanosystems platform products – the NanoAssemblr range – have had a huge impact in translating research to the clinic.

The anatomy of lipid nanoparticles (LNP)

Often the terms liposome and LNP are used interchangeably; however, whilst they are similar in many ways, there are distinct differences in their function and structure. A liposome is made up of a lipid bilayer, primarily composed of amphipathic phospholipids, enclosing an interior aqueous space; it can be decorated with a protein, adding targeted delivery to its capabilities. LNPs can take on a variety of forms, enhancing their ability to encapsulate an assortment of cargoes such as peptides, genetic payloads (siRNA, mRNA, and saRNA), and other small molecules.

The most exciting of these are formulated with ionisable cationic lipids. Recent research highlighted by Meng and Grimm showed that LNPs composed of the best-performing iPhos and different helper lipids (zwitterionic lipids, ionisable cationic lipids, and permanently cationic lipids) achieved selective organ targeting (SORT) and organ-specific CRISPR–Cas9 gene editing in the spleen, liver, and lungs of mice, respectively3.

Mechanisms of LNP action and the role of different lipid components

The LNP structure paves the way for a nanoprecipitation method for their creation. Precision Nanosystems employ a microfluidic architecture that allows two streams to mix under laminar flow conditions, making the process reproducible and scalable. Importantly, this mixing is rapid (<1 millisecond), adhering to the tenet that the mixing rate should be faster than the assembly rate.

One of these streams is aqueous: a buffer such as PBS that maintains the pH whilst carrying the payload. The other stream has the lipid components dissolved in a solvent such as ethanol. Controlling the flow rate ratio and the total flow rate varies the nanoparticle size. An LNP is not a solid, sealed system like a rubber ball; instead, it is a collection of elements bound by charge. In an mRNA example, the cationic lipids meet the negative charges of the phosphate backbone of the mRNA. The neutral helper lipids, such as zwitterionic lipids, then stabilise the lipid bilayer of the LNP. This enhances delivery efficiency, and thanks to the aqueous environment, polyethylene glycol (PEG)-lipid forms the outer shell, pointing its hydrophobic tails inwards. These improve colloidal stability in biological environments by reducing non-specific adsorption of plasma proteins and forming a hydration layer over the nanoparticles4. The result is unilamellar, predictably sized, uniform LNPs created in a matter of seconds.
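The two handles mentioned above, the flow rate ratio (FRR) between the aqueous and organic streams and the total flow rate (TFR), can be sketched numerically. The function name and example flow rates below are hypothetical, and the mapping from FRR/TFR to particle size is formulation-specific, so treat this purely as bookkeeping rather than a predictive model:

```python
def mixing_parameters(aqueous_ul_min: float, organic_ul_min: float):
    """Bookkeeping for a two-stream microfluidic mix (illustrative only).

    Returns the flow rate ratio (FRR), the total flow rate (TFR), and
    the final solvent (e.g. ethanol) fraction after the streams merge.
    """
    frr = aqueous_ul_min / organic_ul_min       # e.g. 3:1 aqueous:organic
    tfr = aqueous_ul_min + organic_ul_min       # total flow rate
    solvent_fraction = organic_ul_min / tfr     # instant dilution on mixing
    return frr, tfr, solvent_fraction

# A 3:1 FRR at a 12 mL/min TFR dilutes the ethanol stream to 25% on mixing
frr, tfr, etoh = mixing_parameters(9_000.0, 3_000.0)
```

In practice the instrument software exposes FRR and TFR directly; the point is simply that these two parameters, not the lipid chemistry alone, set the final particle size.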

Precision Nanosystems’ NanoAssemblr series facilitates the encapsulation of genetic material into ionisable cationic lipids, ideal for being seen as ‘self’ by the body, moving by stealth into the cells via endocytosis. The cellular uptake of LNPs relies mainly on the endocytic pathway. In more detail, it has been shown that specific serum proteins adsorbed on the surface of LNPs upon intravenous injection can drive cell internalisation5. For effective nucleic acid delivery, a large portion of the functional molecules must escape the endosomal compartment before the degradation cascade begins. Ionisable lipids, which modulate their charge depending on the environmental pH, are recognised as a key component of LNPs for endosomal escape6.

NanoAssemblr for vaccine development

Lipid nanoparticles (LNPs) are the most clinically advanced non-viral gene delivery system. They safely and effectively deliver nucleic acids, overcoming a major barrier to the development and use of genetic medicines and vaccines. The Precision Nanosystems platform facilitates the formulation of LNP vaccines from research scale through to full GMP manufacture. The nanoparticles produced outperform those made by other manufacturing methods such as T-tube mixing: in one study, siRNA-LNPs manufactured on three NanoAssemblr® instruments exhibited encapsulation efficiencies higher than 95%, and Factor VII siRNA knockdown efficacy was maintained for nanoparticles produced on the NanoAssemblr® Benchtop, Blaze, and GMP System.

Particles generated by the NanoAssemblr® platform are more uniform than those made by conventional T-tube mixing methods. T-tube-generated lipid nanoparticles exhibit a multilamellar morphology versus the homogeneous-core structure of NanoAssemblr®-generated lipid nanoparticles. Serum Factor VII siRNA knockdown efficacy was higher for NanoAssemblr® siRNA lipid nanoparticles than for conventional T-tube lipid nanoparticles 72 hours following systemic administration7.

Your nano solution

ATA Scientific are just one call away: contact us to see rapid, effortless lipid nanoparticle production and optimisation in action. We can tailor the system choice to suit your stage of development.

ATA Scientific have a portfolio of products that complement the NanoAssemblr range such as the Malvern Panalytical Zetasizer, ITC and Nanosight plus microscopy and cell counting solutions. Contact us today for an illuminating discussion. 

REFERENCES:

1)      Self-Amplified RNA Vaccine Against COVID-19.   https://youtu.be/tVh1s06H_nw

2)      Liposomes vs. Lipid Nanoparticles: Which Is Best for Drug Delivery? https://www.biochempeg.com/article/122.html Accessed 24 Sept 2021

3)      Meng, N., Grimm, D. Membrane-destabilizing ionizable phospholipids: Novel components for organ-selective mRNA delivery and CRISPR–Cas gene editing. Sig Transduct Target Ther 6, 206 (2021). https://doi.org/10.1038/s41392-021-00642-z

4)      Cullis, P. R., and Hope, M. J. (2017). Lipid nanoparticle systems for enabling gene therapies. Mol. Ther. 25, 1467–1475. doi: 10.1016/j.ymthe.2017.03.013

5)      Cheng Q, Wei T, Farbiak L, Johnson LT, Dilliard SA, Siegwart DJ. Selective organ targeting (SORT) nanoparticles for tissue-specific mRNA delivery and CRISPR-Cas gene editing. Nat Nanotechnol. 2020;15:313-320. https://doi.org/10.1038/s41565-020-0669-6.

6)      Schlich, M., Palomba, R., Costabile, G., et al. Cytosolic delivery of nucleic acids: The case of ionizable lipid nanoparticles. Bioeng Transl Med. 2021; 6:e10213. https://doi.org/10.1002/btm2.10213

7)      https://www.precisionnanosystems.com/workflows/formulations/lipid-nanoparticles

Do you need Size Exclusion Chromatography (SEC) & Gel Permeation Chromatography (GPC) in Additive Manufacturing?

Gel Permeation Chromatography (GPC) is a widely used technique that separates macromolecules such as proteins and polymers based on size. As researchers and additive manufacturers continue to push the boundaries of performance and demand more complex material properties, the evolution of the technique has enabled a better understanding of the key indicators that lead to a high performance printed product.

What is Additive Manufacturing?

Additive manufacturing (AM) is a technology that uses three-dimensional computer models to print parts by building the component layer by layer. By contrast, the traditional method for manufacturing metal parts is subtractive, whereby milling machines form the part from a solid block of metal.

In the AM process, high-precision electron beams or lasers move at high speeds to selectively melt layer upon layer of metal, tens of microns thick. Highly complex parts are rapidly formed with novel functionality using less material than other methods. Multiple fields use AM, including construction, prototyping, biomechanical, and others, to produce prostheses individually adapted to humans and animals.

Types of Additive Manufacturing Processes

Powder Bed Fusion (PBF), like Selective Laser Sintering (SLS), uses a laser to selectively fuse thin layers of powder particles (usually metal, polymer, or ceramic). Thermoplastic polymers such as nylon are well suited for use in PBF as they are processed reliably due to their semi-crystalline nature, which provides a distinct melting point. The wide temperature working window between melting (during heating) and subsequent crystallisation (via cooling) makes nylon the choice polymer.

Stereolithography is one of the first additive manufacturing or 3D printing technologies developed. Initially, parts manufacturers used the process to create polymeric prototypes, but now it is also used in final-part production. In stereolithography, a large tank or vat of photopolymer resin (composed of oligomers, monomers, and photoinitiators) undergoes cross-linking upon exposure to ultraviolet (UV) or visible (Vis) light. A support platform moves the cured object upward or downward layer by layer to form the final product. The tensile stiffness and elasticity of the solid product are essential for additive manufacturers to analyse to ensure consistent quality. These are optimised by controlling the oligomers’ molecular weight distribution and structure and the proportion of photoinitiator used; the same properties also affect the photopolymer formulation’s rheology and viscosity.

Fused Deposition Modelling (FDM) uses a material-extrusion technique where a thermoplastic filament is drawn through a nozzle, heated to its melting (or glass transition) point, and then deposited layer by layer to cool and harden, repeating the process until the 3D structure is complete. Assessing the melt characteristics and determining the structure of the polymers (i.e. molecular weight distribution, molecular density, and degree of branching) is critical in developing novel feedstock material with unique mechanical properties that are also printable.

Material/binder jetting uses a liquid binding agent to join metal, ceramic, or polymer powder particles, rather than melting or fusing them with the laser or electron beam used in PBF. This process forms a ‘green’ part that is removed from the printer and solidified via a secondary de-binding or sintering step. Accurate determination of the molecular weight and structure of polymeric powders and binders is required to optimise final component properties.

What are the main challenges of Additive Manufacturing techniques?

The leading challenge additive manufacturers face relates to the quality of the final product, which is highly dependent on understanding the quality of the feed material. Selecting high-quality metal or polymer powders that are highly spherical and free from satellites or deformed/agglomerated particles can reduce variation and prevent cracking, distortion, weakness, and poor surface finishes in final products. However, high-quality materials are relatively expensive, which contributes to high build costs. Although the ability to recycle unused material can save on costs, reusing the polymer powder can age it and cause unfavourable structural changes. By accurately characterising molecular properties, such as the molecular weight, molecular size, and size distribution of the bulk polymer, and the polymeric structure (branching, crystallinity), manufacturers can optimise specific AM processes and prevent processability issues that impact the quality of the final component.

Why is particle size and structure important for 3D printing?

Understanding key properties of the powders, such as particle shape, structure, particle size, and particle size distribution, is essential. These properties can impact the powder’s packing density, flowability, and compressibility; each must be optimised to create a product free from defects such as pores, cracks, inclusions, residual stresses, and unwanted surface roughness. Irregularly shaped particles tend to increase interparticle friction and decrease flowability, while the preferred smoother, more regular-shaped particles flow more easily. As particle size decreases, the forces of attraction between particles increase. Flowability and packing density are optimised as finer particles fill the gaps left by larger ones, increasing density. Because particle shape and size distribution impact the powder’s material properties, measuring them is vital for ensuring the feed material is suitable for an application.

One of the primary characterisation techniques for analysing polymers used in additive manufacturing is size exclusion/gel permeation chromatography (SEC/GPC). This technique enables a better understanding of macromolecular characteristics, such as particle size and structure of the feed material, the effects on powder reusability, and ultimately, the final product’s quality.

What is Size Exclusion Chromatography (SEC) or Gel Permeation Chromatography (GPC)?

SEC or GPC is a liquid chromatography technique that separates polymers according to their size (hydrodynamic volume) to measure molecular size and structure. GPC or SEC involves separating the sample as it passes through a porous chromatography column. Larger molecules unable to penetrate the pores are excluded and thus travel through the column faster than smaller molecules, allowing separation based on size.

GPC or SEC can be used to measure molecular weight (MW), molecular weight distribution, intrinsic viscosity, and the hydrodynamic size of macromolecules. Intrinsic viscosity measurements combined with the molecular weight identify structural differences between samples.

What is absolute Molecular Weight (MW)?

The MW of a polymer is the sum of the atomic weights of the individual atoms that comprise a molecule. It indicates the average length of the bulk resin’s polymer chains. There are different kinds of molecular weight: number average molecular weight (Mn), weight average molecular weight (Mw), and z-average molecular weight (Mz). Various techniques can measure each MW moment. For instance, osmotic pressure measurements count the number of molecules present and provide a number average molecular weight regardless of the polymers’ shape or size. In comparison, SEC or GPC provides complete and accurate MW distribution characterisation in a single measurement while also providing structural information. GPC or SEC determines the polymer’s or biopolymer’s absolute molecular weight and degree of branching by measuring light scattering at various angles as a function of concentration.
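The MW moments above have simple textbook definitions: Mn = ΣNiMi/ΣNi, Mw = ΣNiMi²/ΣNiMi, and Mz = ΣNiMi³/ΣNiMi². A minimal Python sketch (the function name and example distribution are illustrative, not taken from any instrument software):

```python
def mw_moments(counts, masses):
    """Number-, weight- and z-average molecular weights from a
    discrete distribution of N_i chains of molar mass M_i."""
    s0 = sum(counts)
    s1 = sum(n * m for n, m in zip(counts, masses))
    s2 = sum(n * m ** 2 for n, m in zip(counts, masses))
    s3 = sum(n * m ** 3 for n, m in zip(counts, masses))
    mn = s1 / s0    # Mn = sum(NiMi)   / sum(Ni)
    mw = s2 / s1    # Mw = sum(NiMi^2) / sum(NiMi)
    mz = s3 / s2    # Mz = sum(NiMi^3) / sum(NiMi^2)
    return mn, mw, mz

# A toy three-component distribution (chain counts, molar masses in Da)
mn, mw, mz = mw_moments([10, 20, 10], [10_000, 20_000, 40_000])
pdi = mw / mn   # dispersity (Mw/Mn), always >= 1
```

The ordering Mn ≤ Mw ≤ Mz always holds, and the ratio Mw/Mn (dispersity) is the usual single-number summary of how broad the distribution is.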

The molecular weight (MW) and molecular size play a key role in determining the mechanical, bulk, and solution properties, and hence how the polymer material will behave during processing and as a final product. For AM, selecting the correct polymer MW is a balance between printing ease and final-product performance. Low MW polymers exhibit low viscosity and offer better flow properties with fewer stresses. As MW and cross-linking increase, so do polymer strength, brittleness, melt temperature, and viscosity, while solubility decreases.

Why use a multi-detection SEC or GPC system?

A conventional GPC or SEC system usually consists of only an isocratic pump and a single detector, either refractive index (RI) or ultraviolet (UV). This setup provides only a concentration profile of the size-separated sample and a relative MW. In the calibration process, standards of known MW are run and their elution behaviour is correlated against the RI traces. However, this calibration plot is accurate only if the standards’ intrinsic viscosity is identical to that of the sample: only polymers of the same MW with equivalent intrinsic viscosity will elute at the same rate. This is a significant limitation when gathering precise data for the detailed comparison of relatively similar polymers if the calibration standards are sub-optimal for the polymers of interest.

In contrast, the Malvern Omnisec GPC/SEC system employs the universal calibration technique to address this limitation. It uses highly informative multiple detection regimes to directly and accurately measure MW. The setup comprises a concentration detector (RI or UV-Vis), a multi-angle light-scattering detector (RALS/LALS/MALS), plus a self-balancing viscometer that enables the measurement of structural features such as branching or conformation. Multiple detectors evaluating a single injection simultaneously provide additional information about a sample, including absolute MW and MW moments, intrinsic viscosity (IV), hydrodynamic radius (Rh), radius of gyration (Rg), calculated dn/dc value, sample concentration, and recovery, to name a few. The Rh of a sample is the radius of a sphere with the same mass and density as the sample, based on molecular weight and intrinsic viscosity. Rg represents the root mean square distance of the molecule’s components from its centre of mass. Both provide valuable molecular size information. Plotting the MW measured directly from the light scattering detector against the IV measured from the viscometer detector produces a Mark-Houwink plot that illustrates the relationship between molecular structure and molecular weight.
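The Mark-Houwink relationship underlying that plot is [η] = K·Mᵃ, so plotting log IV against log MW gives a straight line whose slope a reflects molecular conformation. A small illustrative sketch on synthetic data (the function name is hypothetical, and the K and a values are chosen purely for demonstration):

```python
import math

def mark_houwink_fit(mws, ivs):
    """Least-squares fit of log10(IV) = log10(K) + a*log10(M),
    returning the Mark-Houwink constants (K, a)."""
    xs = [math.log10(m) for m in mws]
    ys = [math.log10(v) for v in ivs]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    a = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))        # slope = a
    k = 10 ** (ybar - a * xbar)                     # intercept = log10(K)
    return k, a

# Synthetic, noise-free data generated with K = 1e-4 dL/g and a = 0.7
mws = [1e4, 5e4, 1e5, 5e5]
ivs = [1e-4 * m ** 0.7 for m in mws]
k, a = mark_houwink_fit(mws, ivs)   # fit recovers the generating constants
```

An a value near 0.5 suggests a compact random coil in a theta solvent, values around 0.6–0.8 an extended coil, and a drop in a at high MW is the classic signature of long-chain branching.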

The pioneering work from Viscotek, a market leader in GPC, led to the Omnisec system from Malvern. For the last two decades, the system has continued to evolve. Today, it is the most advanced GPC system for measuring absolute molecular weight, molecular size, intrinsic viscosity, branching, and other structural parameters.

Looking for the perfect analytics instrument for YOUR next big discovery?

Speak with the ATA Scientific team today to get expert advice on the right instruments for your research

Request free consultation

What is the power of multi-detection GPC/SEC?

In a complete GPC/SEC system, OMNISEC integrates the separations unit and all four detectors with inter-detector tubing temperature control.

RI detector: This is the most common detector for any GPC or SEC system. RI detectors are referred to as concentration detectors because the difference in refractive index between the sample solution and the solvent is proportional to the sample’s concentration. It provides a dn/dc value, the refractive index increment, which is essential because it is the link that translates the raw RI signal to sample concentration. Knowing the concentration allows the calculation of all molecular parameters, including absolute molecular weight and IV.
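That translation from raw RI signal to concentration is simply c = Δn / (dn/dc). A one-line sketch, with a hypothetical detector signal and the textbook dn/dc value for polystyrene in THF (both are assumptions for illustration):

```python
def concentration_from_ri(delta_n: float, dndc_ml_per_g: float) -> float:
    """Translate an RI detector signal into concentration: c = dn / (dn/dc).

    delta_n is the refractive index difference between the sample solution
    and the solvent; dn/dc is the refractive index increment in mL/g.
    """
    return delta_n / dndc_ml_per_g

# Hypothetical signal; 0.185 mL/g is the textbook dn/dc for polystyrene in THF
c = concentration_from_ri(3.7e-5, 0.185)   # concentration in g/mL
```

Because every downstream parameter (absolute MW, IV) is normalised by this concentration, an inaccurate dn/dc propagates directly into every reported result.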

UV-VIS PDA detector: UV-VIS detectors are also concentration detectors but require the sample to have a chromophore that absorbs light at a detectable wavelength between 190 and 900 nm.

Capillary differential viscometer: First invented by Max Haney in 1984, this unique viscometer measures the changing solution viscosity to calculate the sample’s intrinsic viscosity (structure). The viscometer detector uses four capillaries, a delay column, and two transducers (DP and IP). It has a user-replaceable, self-balancing bridge that helps to reduce downtime and maintenance requirements. The viscometer works by directly measuring the specific viscosity, subtracting the solvent’s contribution in a balanced capillary bridge. When used with a concentration detector, it will measure the IV distribution of any polymer.

Static Light Scattering (SLS) detectors: Light scattering occurs when a photon from an incident beam is absorbed by a macromolecule and re-emitted in all directions. The intensity of the scattered light measures MW and Rg, as described by Rayleigh theory. Small molecules, less than 10–15 nm in radius, scatter light evenly in all directions and are known as isotropic scatterers. Large molecules with an Rg of more than 15 nm and a high MW are anisotropic scatterers: they have multiple scattering points and tend to scatter light with different intensities in different directions. A Debye plot models this angular dependence of the sample’s scattering and is used to determine the MW and Rg at every data slice within the chromatogram using multi-angle light scattering.

There are several types of SLS instruments:

  • Low Angle (LALS) measures the intensity of light scattered very close to the Zimm plot’s axis, i.e. very close to 0°. The calculated MW will be very close to the actual MW, making it ideal for anisotropic scatterers such as large polymers.
  • Right Angle (RALS) measures the intensity of light scattered at 90° and, together with the sample concentration, provides the MW of molecules smaller than ~15 nm in radius, making it ideal for proteins. Low molecular weight polymers are isotropic scatterers: being smaller than 10–15 nm in radius, they scatter light evenly in all directions, so the resulting partial Zimm plot is flat with zero slope, and only the MW can be measured.
  • For large polymers with an Rg >15nm that exhibit angular dependency in the light they scatter, a Multi-Angle (MALS) detector makes it possible to determine molecular size Rg in addition to MW. A conformation plot (plot of Rg against MW) allows the measurement of any structural differences between the samples.
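The Debye analysis referred to above fits Kc/R(θ) against sin²(θ/2): the intercept gives 1/Mw and the slope gives Rg. A simplified single-slice sketch on noise-free synthetic data (the function name, angle set, and 660 nm wavelength are illustrative assumptions; real analysis software also accounts for the mobile phase refractive index and detector calibration):

```python
import math

def debye_fit(angles_deg, kc_over_r, wavelength_nm=660.0):
    """Fit Kc/R(theta) = (1/Mw) * (1 + (16*pi^2 / (3*lambda^2)) * Rg^2 * sin^2(theta/2)).

    A linear fit of Kc/R against sin^2(theta/2) yields Mw from the
    intercept and Rg from the slope (illustrative, one data slice)."""
    xs = [math.sin(math.radians(t) / 2.0) ** 2 for t in angles_deg]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(kc_over_r) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, kc_over_r))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    mw = 1.0 / intercept
    rg = math.sqrt(slope * mw * 3.0 * wavelength_nm ** 2 / (16.0 * math.pi ** 2))
    return mw, rg

# Synthetic, noise-free data for a 1 MDa polymer with Rg = 50 nm
angles = [35.0, 60.0, 90.0, 120.0, 145.0]
true_mw, true_rg = 1.0e6, 50.0
coef = 16.0 * math.pi ** 2 / (3.0 * 660.0 ** 2) * true_rg ** 2
data = [(1.0 / true_mw) * (1.0 + coef * math.sin(math.radians(t) / 2.0) ** 2)
        for t in angles]
mw_est, rg_est = debye_fit(angles, data)   # fit recovers Mw and Rg
```

For an isotropic scatterer the slope is essentially zero, which is why RALS alone can report MW but not Rg.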

The LS detectors’ high sensitivity enables the measurement of molecular weights as low as 200 Da or injection masses as low as 100 ng of material. This makes it possible to measure low molecular weight samples, novel polymers with low dn/dc, or tiny amounts of precious samples. An RI detector combined with light scattering and viscometer detectors provides the sample’s exact concentration at each data slice, using the sample’s dn/dc value to calculate the absolute molecular weight and intrinsic viscosity.

By housing all the detectors together in a single compartment, the distances between them can be kept to an absolute minimum, reducing band-broadening and tailing. Additionally, the use of a single temperature-controlled compartment for the detectors and all connecting tubing means the temperature can be elevated for polymer applications to reduce the viscosity of certain mobile phases such as DMSO (dimethyl sulphoxide). Together, these factors make the Malvern Omnisec Reveal the most advanced multi-detection platform for analysing natural and synthetic polymers.

Whether you’re looking for the Malvern Omnisec Reveal or another scientific instrument to assist your additive manufacturing, our team has the expertise to match your research scope to the right analytical instruments. Contact us for more information.

How to Efficiently Transport Live Cell Cultures Without Freezing Them

Ross Harrison first attempted cell culture in the early 1900s. Fast forward 100 years and 3D in-vitro cell culture is routine, and an organ on a chip is a reality. Cells are fundamental to all bodily functions; they encompass a multitude of components with interactions that depend on the cell’s purpose. The awe-inspiring fact is that they originate from stem cells that differentiate to become a particular cell type (nose, ear, blood, etc.). After billions of replications, the body generally works for life spans sometimes eclipsing a century. Years ago, cellular research had to be content with cells on a slide, stained to expose their secrets, and labs had to start from scratch to emulate the procedure.

How are cells transported today?

Today, researchers utilise specific cell lines created by facilities around the globe. Whether novel or challenging to isolate, these specimens are often sought after and must be transported from lab to lab. Shipping is required to accelerate research or to ensure treatment of the precise cell. This need has created a quandary: live cells are photosensitive, and they require 37 °C and a 5% CO2 atmosphere, conditions that are hard to maintain in transit. Cryopreservation has been the best way to meet such requirements.

What happens when you freeze cells?

Cryopreservation mitigates the damage caused by freezing cells. Most living organisms die when frozen due to cryoinjury: the formation of intracellular ice crystals when cells are rapidly frozen. Slowing the process unearths further challenges, such as extracellular ice formation that causes osmotic stress and mechanical damage. Epigenetic modifications may result from incomplete cryopreservation, amongst other batch-to-batch variations. Using an antifreeze such as dimethyl sulfoxide (DMSO), a cryoprotective agent, gives rise to other issues, particularly effects on natural cell behaviour.

An entire industry has emerged in culture media with a focus on cell growth, 3D structures such as support for spheroids and organoids, regenerative medicines, cell expansion, and bioprinting. Nanocellulose structures provide exceptional growing conditions by forming a hydrogel that is easily enzymatically cleared to release the cells for clinical use.

As research begins to reject processes that fail to emulate a natural environment, cryopreservation will take a back seat. Procedures and techniques that fail the natural-behaviour test reveal that the cells being studied are in an altered state, encouraging further scientific scrutiny.

Be prepared to ask yourself questions such as:

  • Are you exposed to foreign genetic material in your media?
  • Have you cryopreserved? 
  • Have you used fluorescent tags in the treatment of these cells?

Variations may be seen as cells are exposed to foreign genetic material or when fluorescently tagged. Whether occurring locally or internationally, the need for a reliable method to transport fragile and valuable cells has led to the development of a portable CO2 incubator.

Can live cells be transported without freezing?

To put it simply: if cells are frozen for shipping, their cellular processes become compromised. If a natural route is chosen, such as keeping the cells warm with plenty of media to sustain them for the journey, they will likely die due to a lack of CO2 and disruption to their constant temperature, highly probable on long-distance trips. Consider a 10-minute walk across a university campus, from the animal house to the lab, in an insulated container: on a cold day, mouse embryos may perish. Such is their dependence on precise thermal regulation. In this light, the term ‘live cell shipping’ could be considered a misnomer.

What are the alternatives to freezing?

You can lob your cells into an esky and cross your fingers, hoping for the best, but some cargo just doesn’t survive a few minutes outside 37 °C and 5% CO2. If the journey is long, there is little hope the cells will arrive at their destination alive. The best-known method to transport not only fragile cells but the whole gamut of living tissue is the Cellbox.

The Cellbox is a live-cell shipper designed by the Fraunhofer Institute from the ground up to solve the cell-transport conundrum. It is the first portable CO2 incubator intended to transport cells by road, rail, or air, and a significant game-changer. Prepared and packed under UN 3373, biological materials need no defrosting; they are ready to go as soon as they arrive. There are no toxic substances added and no freeze-thaw loss, which saves cells and time.

Choosing Cellbox with ATA Scientific

A well-considered live cell research plan accounts for how the cell lines will be transported to the dedicated lab and facilities thereafter.

The Cellbox incubator is a technological advancement. With it, you can track the conditions of the entire journey, then download the log to your smartphone to correlate those conditions with cellular activity. This insight is instrumental if you are shipping clinical samples such as blood from the blood bank, embryos, cord blood, cell cultures, or engineered tissue.

To evaluate the Cellbox’s benefits to your facility, call or email ATA Scientific today. This one call may transform your capacity to align your research methods with more natural behaviour. Speak to ATA Scientific for a demo or trial of the Cellbox in your facility.

Working with X-CLARITY – Key Considerations for Tissue Clearing

I can see clearly now the fat has gone…

Sounds like the lyrics to a popular song, but it is a call to understand just what tissue clearing is and to negotiate the minefield of promises and misinformation around this topic. 

Many articles explain the interactions of light, refractive indices, scattering, and constructive and destructive waves, describing the core physics behind the observed phenomena. One such paper also discusses the history of tissue clearing up to the very early days of the CLARITY technique developed by the Deisseroth lab at Stanford University. CLARITY is a method that resolves issues encountered when clearing tissues of fat using solvents.

Fundamentally, to see deep into intact tissue, microscopy capable of imaging beyond 1 mm (say, 5 mm) required a new technique: a method that could avoid the inherent shrinkage and short fluorescence lifetimes of a solvent-based approach.

Opaqueness results from the multiple scattering of light: when a wave interacts with a particle, light propagates in all directions, and the tissue (a collection of compacted cells) behaves as a multitude of light sources. Some methods utilise long wavelengths in an attempt to avoid this type of short-wavelength scattering.

How does this advantage work?  

Try to test scattering by shining a torch through your hand in a darkened room: your hand will appear bright red. Opacity is very apparent in bone and teeth, densely packed cells carrying calcium that scatters light; it is virtually impossible to shine a light through these tight areas. Dense samples must undergo decalcification before the clearing steps are performed.

Solvent-based techniques have an ever-increasing list of novel solvents, all attempting to make a silk purse from a sow’s ear. All are fundamentally limited by a restrictive selection of compatible antibodies and short fluorescence lifetimes.

The solvents are rarely microscope-friendly, and where they are, you will have invested heavily in objectives that claim to be solvent-resistant. These solvents cannot clear beyond around 1 mm. These shortcomings don’t seem to deter the vigour with which such products are sold. If you wish to image through a whole mouse brain, for example, the most effective method is X-CLARITY, which can clear the tissue in just six hours! Plus, it is aqueous, allowing a vast array of antibodies. Larger tissue sections are desirable for understanding interactions across a complete organ, or at times the whole organism, while avoiding the losses associated with sectioning.

Antibodies provide molecular specificity: they bind to a specific protein, effectively singling it out for fantastic fluorescent images in which the target pops out from the background illumination.

DeepLabel™ Antibody Staining Kit

Whether the clearing method is solvent or aqueous based, many techniques affect the ability of a molecular probe to infiltrate the tissue. Often this means a laboriously slow diffusion of the probe into a thin sliver of tissue. Staining thick tissues is notoriously fraught with inconsistencies: a lack of uniformity caused by non-specific binding at the outer edges.

The DeepLabel™ Antibody Staining Kit is a collection of non-toxic reagents optimised to enable macromolecular probes to penetrate thick tissue. By analogy, these reagents create a highway for the labels to travel along as they search for their specific binding sites. The added benefit is that DeepLabel makes the process faster and more efficient, using far fewer antibodies. The secret to such vibrant fluorescent images is a 2.6× greater signal-to-background ratio, thanks to homogeneous staining that permits subcellular resolution. DeepLabel™ is compatible with all antibodies and cleared tissues.

Refractive Index Matching

Once labelled, the tissue is placed in a Refractive Index Matching Media (RIMM) to enhance clarity for microscopy, making some tissues almost vanish. Under the microscope, this step eliminates scattering of the illumination and accomplishes the mission: the fluorescence ‘pops’ out to produce vibrant, crisp images. Logos Biosystems supply this ~1.46 RI media, which enhances confocal imaging of tissues, allowing reduced laser power, low photobleaching, and preservation of fluorescent signals for up to two weeks.
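A simplified small-particle picture shows why index matching works so well: the scattering cross-section depends on the ratio of the structure’s refractive index to that of the surrounding medium, and it vanishes as that ratio approaches one:

```latex
\sigma_{\text{scatter}} \propto \left( \frac{m^{2}-1}{m^{2}+2} \right)^{2},
\qquad m = \frac{n_{\text{structure}}}{n_{\text{medium}}}
```

With the mounting medium tuned to ~1.46, close to the refractive index of the remaining protein scaffold, m approaches 1, the scattering term collapses towards zero, and the tissue turns transparent. (Real tissue contains a distribution of structure sizes, so this Rayleigh-style expression is an idealisation, but the matching principle carries over.)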


Looking for the perfect analytics instrument for YOUR next big discovery?

Speak with the ATA Scientific team today to get expert advice on the right instruments for your research

Request free consultation

The X-CLARITY system 

The X-CLARITY™ systems and reagents standardize, simplify, and accelerate each step of the tissue clearing process. With the CLARITY method, preserved tissues are embedded in a hydrogel matrix and lipids are actively extracted through electrophoresis to create a stable and optically transparent tissue-hydrogel hybrid that is chemically accessible for multiple rounds of antibody labelling and imaging. Native cytoarchitecture remains intact, and even endogenous fluorescent proteins are preserved for robust fluorescence imaging downstream.

The X-CLARITY system has three main components:

  1. A polymeriser activates the temperature-triggered hydrogel, which supports the structure of the tissue after fat removal. The result is a material similar to a western blot gel; it is harmless in this form and maintains its shape during electrophoresis.
  2. The ETC Chamber is where the Electrophoretic Tissue Clearing takes place. Placing the sample between two large platinum electrode plates maximises the surface area and ensures even clearing while an aqueous detergent is pumped through the chamber.
  3. The Control Tower contains the pumps and temperature regulators. As detergent is pumped through the chamber, a temperature probe provides feedback, and active cooling holds the tissue at the setpoint. It is essential to keep the temperature from exceeding 37°C to protect the proteins.

Logos Biosystems have made clearing a turn-key operation of four steps: polymerise, clear, label, and RIMM. Applications for the X-CLARITY method extend beyond brains to clear virtually any organ, even organoids. If you need assistance in determining if your application is possible, contact us.
The Logos Biosystems X-CLARITY system’s unique design accelerates the removal of lipids from tissues in a highly efficient manner. X-CLARITY is an all-in-one system with ready-to-use reagents for simple, rapid, and efficient tissue clearing. To learn more about the X-CLARITY system, speak to ATA Scientific today.  

17 SCIENCE BLOGS EVERYONE SHOULD FOLLOW

Thanks to the Internet, science has never been so accessible. Entire scientific communities now connect through social media, and science blogs have prospered and grown into rich platforms of discourse. On the Internet, experts and amateurs alike can come together to talk about the topics that interest them, and in doing so have created great educational spaces for anyone who wants to learn and discuss science.

If you’re looking for some up-to-date scientific information, or just want to browse, have a look at this list we’ve compiled of the 17 best blogs currently posting about science.

1. IFL SCIENCE

Established by Elise Andrew in 2012, I F****** Love Science, or IFL Science, is “dedicated to bringing the amazing world of science straight to your newsfeed in an amusing and accessible way.” With a reputation for being one of the most important and entertaining science blogs currently out there, Andrew has created a platform that is equal parts informative and fun. Featuring insights into a mix of scientific disciplines, the best part about this blog is that science lovers of all ages and backgrounds can come together and share their love for science. The Facebook page posting daily links to IFL Science has over 25 million likes.

Our pick: The Mind-Blowing Story Of A Man Who Can’t See Numbers

2. CSIRO

The Commonwealth Scientific and Industrial Research Organisation (CSIRO) is Australia’s national science agency. Established in 1916, the CSIRO has invented everything from modern-day WiFi to Aerogard and even extended-wear contact lenses. With such an innovative impact on both a national and global scale, it’s no surprise its blog is one of the most interesting scientific reads on the Internet. Covering a vast number of topics, including farming, ocean studies, manufacturing and health, the CSIRO blog is a fantastic insight into some of the most fascinating scientific breakthroughs by Australian and international scientists.

Our pick: AI relieves data drought for farmers

3. DR KARL KRUSZELNICKI

Featuring Kruszelnicki’s trademark humorous yet informative approach to science, this blog delves into some of the most complicated and frequently asked (but not so frequently answered in an accessible way) scientific questions. From the beautiful act of vomiting to the overwhelmingly grand ‘how many cells are in a person?’, Kruszelnicki seeks to entertain and educate in a laidback manner that young and old Australians alike will love. For more of Dr Karl, you can also check out his Twitter, and tweet him any of your burning scientific questions.

Our pick: Bacteria of champions (A look at how a gut microbiome can give a marathon runner an athletic advantage)

4. NAUTILUS

Nautilus “combines the science, culture and philosophy into a single story told by the world’s leading thinkers and writers.” Originally a magazine and website, the Nautilus blog is an offshoot that provides daily musings, reflecting on our connection to science in our day-to-day lives. Science with a modern twist, this blog will appeal to those who are interested in questioning the humanity of science, and the scientists who drive innovation around the world. Don’t be mistaken in thinking this blog foregoes science in the name of philosophical musings: each of its pieces is thoroughly researched and very accessible for those who enjoy a little light science reading.

Our pick: How Ancient Light Reveals the Universe’s Contents

5. PLOS

The Public Library of Science is a non-profit organisation that provides a collection of scientific journals and literature open to the public. The PLOS blog network features content from PLOS staff and editors, as well as independent sources including science journalists and researchers. Blogs featured on the PLOS platform cover a wide range of topics, such as biology and life sciences, medicine and health, and research analysis and scientific policy. Catering to the scientific community and those with a thorough understanding of scientific concepts, this blog isn’t for everyone, but it is a great tool if you are an avid science enthusiast looking for free resources and information on many topics.

Our pick: On the Role of the Non-Expert in a Crisis

6. IMPROBABLE RESEARCH

Improbable Research is all about making people laugh, and then think. A collection of real research described as research that “may be good or bad, important or trivial, valuable or worthless,” it’s the online manifestation of the popular Annals of Improbable Research, a publication mainly known for creating the Ig Nobel Prizes, a parody of the Nobel Prizes. A seriously fun look into the slightly crazy and often bewildering world of science, this blog is sure to captivate your curiosity and get you thinking about the value of scientific research and innovation.

Our pick: How aesthetically pleasing is your country’s diffraction pattern?

7. LAELAPS

A blog presented by Scientific American, LAELAPS is written by critically acclaimed science writer Brian Switek. A blog about evolution, extinction, and survival, Laelaps explores natural history with insights from fields such as anthropology, zoology, archaeology and palaeontology. If biological science is a keen area of interest, this blog has some great pieces of science writing that bridge the gap between complex concepts and accessible, captivating stories.

Our pick: The Secret of the Crocodile “Death Roll”

8. ANNALS OF BOTANY BLOG

Offering news and views on plant science and ecology, this blog is an offshoot of Annals of Botany, an online, peer-reviewed scientific journal that publishes research monthly. This blog really gets into the nitty-gritty of plants, so its content is more suited to those with a deep understanding of botanical science. However, if you want to delve right into the world of botany, it does post some content that is accessible to all science lovers.

Our pick: How do mangrove forests recover from cyclones


9. NewScientist

NewScientist would have to be the most recognisable name in scientific publications to the mainstream. The magazine has been readily available on newsstands for years, and the publication’s success can be attributed to the way it features stories of real-world interest and relevance in a very accessible way, without ever ‘talking down’ to the audience.

Note: NewScientist is behind a paywall, so if you want to access all of its stories, you’ll need to make a small investment in quality science journalism.

Our pick: Artificial whisky taster has the palate of a connoisseur

10. NEUROLOGICA

Authored by academic clinical neurologist Steven Novella, MD, Neurologica covers neuroscience, scientific scepticism, philosophy, and the intersection of science and media. This blog delivers great insight into the brain and the scientific news, issues and discoveries surrounding it, and is equal parts high-brow and approachable. Novella’s knowledge and expertise enable him to offer great thoughts, insights and opinions on a variety of subjects, from GMOs to clickbait. A platform that broaches hot-topic issues from mainstream media in a scientifically critical way, this is a truly educational and eye-opening resource.

Our pick: Localizing Executive Function (Addressing the complex challenges of identifying the specific part of your brain where a specific activity takes place)

11. The Sciences

Run by Scientific American, The Sciences is a must-read, accessible take on science topics as far-ranging as mind, health, tech and sustainability. The dedicated blog section, which is separate from the main site’s repurposing of its magazine content, features fascinating little tidbits, such as ‘how to hop on an asteroid’, which might not be information relevant to our day-to-day lives, but certainly inspires the imagination as only science can.

Our pick: A Poetic, Mind-Bending Tour of the Fungal World

12. MORTUI VIVOS DOCENT (MRS_ANGEMI)

WARNING: This blog is not for the faint of heart as it contains graphic material some may find disturbing (and others might find captivating). Mortui Vivos Docent is an Instagram blog run by forensic pathologist Nicole Angemi. Featuring graphic images of autopsies, Angemi captions each image with a scientific insight into the gruesome world of human biology. Her expertise lies in identifying infections and diseases in the deceased, and her lengthy captions break down some very complex medical concepts into easy-to-understand snippets. She also frequently asks followers to guess the disease from an autopsy, and will later post a detailed answer of her analysis.

13. CODING HORROR

Created by software developer and entrepreneur Jeff Atwood, Coding Horror is one of the most well-known blogs in the computer programming community. Atwood states in his About Me that, “in the art of software development, studying code isn’t enough; you have to study the people behind the software, too.” With this in mind, Coding Horror undertakes an in-depth analysis of the minutiae of coding, and also of the people who create it. This two-pronged approach makes for a refreshing take on an otherwise technical topic. Perfect for both amateur and pro coders, this blog is an education in how to ‘geek out’ without losing your audience’s attention.

Our pick: Electric Geek Transportation Systems (A closer look into electric cars)

14. CLIMATE CONSENSUS – THE 97%

Authored by John Abraham, a professor of thermal sciences, and environmental scientist Dana Nuccitelli, this blog concentrates on climate and environmental science and discusses the public scepticism surrounding popular environmental topics. It is an interesting insight into the power of ‘expert consensus’. A must-read for those with a keen passion for climate science, Climate Consensus’ articles are both accessible and thought-provoking.

Our pick: Oceans are as hot as humans have known them and we’re to blame

15. Wired Science

Wired is a general-interest online publication, covering technology issues for the most part. However, as the technology industry is driven by science, most people with an interest in technology are interested in science beyond the tech, and Wired’s science section is impressively robust. A lot of Wired’s science coverage has to do with technology issues (Elon Musk and Donald Trump feature prominently), but there are also plenty of fun, quirky stories.

Our pick: Why a Tennis Ball Goes Flying When Bounced on a Basketball

16. It’s okay to be smart

You would think that the title of this blog goes without saying. But in this day and age, people do need the occasional reminder that intelligence is something to be admired. It’s Okay To Be Smart is the blog of Ph.D. biologist Joe Hanson, and is really focused on promoting the videos he produces for various outlets. Each video is highly entertaining to watch and comes with all the production values you’d expect from someone who is often on TV. It’s the perfect blog to keep an eye on and share with friends and family, because topics of interest to just about everyone show up often.

Our video pick: Why Doesn’t the Atmosphere Crush Us?

17. Space.com

Very few fields of science inspire the masses quite like space does. The great frontier, only ever experienced by most of us through the lens of science fiction, is the very definition of exotic and wondrous, and in scientific terms, it’s also a playground for startling discoveries and revelations about the nature of everything in the universe (including us). Space.com is a wonderful resource for anyone interested in no-frills, clean, and accessible reporting about space.

Our pick: Incredible time-lapse video shows 10 years of the sun’s history in 6 minutes

Subscribing to your favourite science blogs

A quick Google search will generally find whatever information you need, but sometimes the sheer mass and diversity of material on the Internet can be overwhelming. Blogs are a valuable resource that can give analytical insights into the people, inventions and discoveries driving scientific innovation. Macro or micro, the blogs in this list engage in discussions and topics that will continue to evolve and change. Up-to-date, topical science blogs are the future of scientific research, education and outreach, a future being built by the blogs mentioned above.

Are you interested in, or specifically searching for, a particular type of analytical instrument? Be sure to contact the team at ATA Scientific to have your questions answered.