LS Instruments

Get in touch

Customer Support Center

If you encounter any trouble with our products, please first read the FAQ section below carefully. If it does not resolve your issue, please create a customer support ticket using the link below. Our experts will get back to you as soon as possible.

Tracking issues in the ticketing system is essential: it helps us maintain the highest quality standards and continuously improve our products.

FAQ

If you have any technical questions about the product or how to operate it, please carefully read the FAQ below.

LS Spectrometer

How to check the alignment of the (3D) LS Spectrometer?

To determine the alignment status of your instrument, perform test measurements on standard samples in 10 mm cylindrical cells. These measurements consist of:

• a toluene sample,
• a single scattering dispersion of small particles.

The instrument is considered well aligned when:

1. The scattering intensity of the toluene is constant within 5% over the accessible angles (typically 30°-140°).
2. The intercept measured with a single scattering (dilute) sample in Modulated 3D is on average higher than 0.65 (0.17 in 3D Cross, 0.9 in Pseudo-Cross) and has no results lower than 0.6 (0.16 in 3D Cross, 0.8 in Pseudo-Cross).  
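For users who script their own analysis, the two acceptance criteria above can be checked directly in code. This is an illustrative sketch (the function names and input format are our own, not part of the LSI software); the intercept thresholds default to the Modulated 3D values, with the 3D Cross and Pseudo-Cross values passed in as needed:

```python
def toluene_flat(intensities):
    """True if the toluene scattering intensities are constant within 5%
    of their mean over the measured angles."""
    mean = sum(intensities) / len(intensities)
    return all(abs(i - mean) / mean <= 0.05 for i in intensities)

def intercepts_ok(intercepts, avg_min=0.65, single_min=0.60):
    """True if intercepts pass the Modulated 3D criteria: average above
    avg_min and no single result below single_min (use 0.17/0.16 for
    3D Cross, 0.9/0.8 for Pseudo-Cross)."""
    avg = sum(intercepts) / len(intercepts)
    return avg >= avg_min and min(intercepts) >= single_min
```
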

Preparation and measurement protocol:
Preparation and measurement protocol:
The toluene needs to be filtered several times to remove dust and other contaminants. The best approach is to filter 2 to 3 mL of toluene directly into a clean scattering cell, then draw it out with a syringe and filter it again directly into the cell. Repeat these last steps a few times and finally fill the scattering cell with 1 mL of toluene. To verify that the toluene sample is clean, measure its correlation function at low angle (20°, for example). If you observe a decay, there are particles in your toluene sample and you need to filter it again.

For the single scattering dispersion of small particles, we recommend using Ludox TM-50 silica particles (Sigma-Aldrich, R = 20 nm). Dilute the stock to obtain a single scattering sample (typically one drop of the stock in 4 mL of MilliQ water) and fill a clean scattering cell with 1 mL of the suspension. These small particles have the advantage of producing a relatively flat intensity versus angle and of yielding a fast correlation function decay. However, any type of suspension can be used for this test, provided that there is no multiple scattering and that the sample is ergodic (showing a full correlation function decay to zero). While filtering is not necessary, sonicating the sample can help disperse aggregates present in the stock.

We recommend using the following protocol for the measurement of the toluene sample:
1. set the laser intensity to its maximum
2. execute a script of this type: from 30° to 140° in steps of 10°, 5 runs of 10 seconds per angle in Pseudo-Cross Correlation and 3D Cross-Correlation (as available on your instrument)

and for the single scattering dispersion of small particles:
1. set the target scattered intensity to 200 kHz
2. execute a script of this type: from 30° to 140° in steps of 10°, 5 runs of 40 s per angle in Pseudo-Cross Correlation, 3D Cross-Correlation and Modulated 3D Cross-Correlation (as available on your instrument)

If you choose to work with a single scattering sample of larger particles, you may simply need to increase the measurement time. In any case, the particles should have a diameter lower than 150 nm to yield a flat intercept versus angle.

Note that one incident beam must be manually blocked when measuring in pseudo-cross (or auto-) correlation; see your instrument documentation for details on the geometry showing which beam should be blocked. For background on auto-, pseudo-cross, and 3D cross-correlation, including detector afterpulsing, see the entry 'What is pseudo cross-correlation and why do I need it?' below. A high-performance pseudo-cross setup operating in the scattering plane and compatible with the 3D cross-correlation option is available as the Pseudocross Ultimate option from LS Instruments.

Why do we use decalin rather than toluene for index-matching?

Toluene has traditionally been used as an index-matching liquid in light scattering instrumentation due to its refractive index match with optical glass. The refractive index of toluene at 638 nm is approximately 1.50, while the refractive index of the 'vat' used in the LS Spectrometer is ~1.46 at 638 nm. The refractive index of typical optical glass used in the sample cells is ~1.50 at 638 nm. Another useful characteristic of toluene is its low density (0.86 relative to water), which helps potential contaminants settle quickly.

However, decalin is used in the LS Spectrometer due to its lower volatility, lower toxicity and less acrid smell compared to toluene. The refractive index of decalin is approximately 1.47 at 638 nm, and thus provides an essentially equivalent index match to toluene while being far more practical to use. Decalin has a very slightly higher density (0.9 relative to water) and is mildly hygroscopic. In our opinion, the continued use of toluene in light scattering instrumentation in many laboratories is largely a matter of tradition.

The index matching vat is not fully sealed from the ambient environment, so it is normal to smell the solvent to some extent. The vat cannot be easily sealed due to the required compatibility with the sample goniometer system, which provides a variable offset of the sample cuvette about the scattering volume axis. The 3D LS Spectrometer may therefore exhibit higher evaporation rates than other light scattering instruments, which makes a less volatile index-matching fluid like decalin advantageous.

The desire to run the 3D LS Spectrometer with toluene as the index matching liquid for closer comparison to other instrumentation using toluene is likely not so well founded. The optical properties of the two liquids are so close that it is unlikely one would ever see a difference when measuring the same sample on the same instrument. There will be far greater differences between two different instruments resulting from their designs than from the use of a different solvent. Static light scattering measurements must in any case be properly referenced in order to remove any instrumentation-dependent quantities, and thus any differences arising from the index matching fluid should be normalized out.

How to add index-matching fluid?

Due to evaporation, the index-matching fluid (mixed trans-cis decalin) will periodically need to be topped up.  Typical refill intervals depend greatly on usage and working temperatures, but may range from several weeks to several months.  In any case, the level of the fluid must always be above the window visible from the side of the instrument.  You can safely add 20 mL of filtered (0.22 micron hydrophilic PTFE filter) mixed trans-cis decalin using a glass syringe when the decalin level is just at the top of the visible window.  Take care not to overfill, as excess fluid will spill over the sides of the vat, potentially requiring careful cleaning of the outer surface (with optical wipes and a few drops of isopropanol).

Depending upon usage, the decalin should be completely replaced every 2-4 months to eliminate dust and other contamination.  Please see the next entry, How to clean the index matching vat and replace the fluid?

How to clean the index matching vat and replace the fluid?

The best procedure is to first remove the old fluid using a glass syringe inserted into the opening for the sample holder, and then to add ~25 mL of new fluid to redisperse any remaining dust, remove this fluid, and repeat several times.  Finally, add 80 mL of filtered (0.22 micron hydrophilic PTFE filter) mixed trans-cis decalin using a glass syringe and wait 15 minutes before making measurements to allow for any bubbles to clear.

Index-matching vat removal and cleaning
Users should not remove the index-matching vat from the 3D LS Spectrometer.  Removing the vat requires partial dismantling of the instrument and may require a realignment. Major contamination (for example due to spill of a sample in the index-matching liquid) may however require vat removal and cleaning.  In such cases, please contact LS Instruments for further information.

Should measurements be performed in a completely dark room?

In the large majority of cases, no.

The full answer to this question depends on the sample, measurement type, and ambient environment.  The best starting point is to measure the scattering intensity in your laboratory (with typical lighting) with the laser beam blocked.  Dark or very dim rooms will lead to measurement of only the inherent detector dark counts, which should be less than 250 Hz.  Very bright rooms could yield as much as 15 kHz of 'dark' signal.

In general, you should keep the counts measured as above to less than 1% of your measured intensity.  This rule of thumb implies that greater care in minimizing ambient light entry into the detection system is important for weakly scattering samples. For extremely weakly scattering and/or highly absorbing samples, LS Instruments offers high power laser options.
High relative ambient light contributions will reduce the correlation function intercept and can compromise SLS measurements.  Note also that ambient light entry into the detection system often displays an angular dependence.
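The 1% rule of thumb above is simple to encode. A minimal sketch (the function name and units are our own):

```python
def ambient_light_ok(background_hz, sample_hz):
    """Rule of thumb from above: counts measured with the laser blocked
    should stay below 1% of the measured scattering intensity."""
    return background_hz < 0.01 * sample_hz
```

For example, 250 Hz of background against a 200 kHz sample signal passes, while 15 kHz of background against a 500 kHz signal does not.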

What is 'pseudo cross-correlation' and why do I need it?

Auto-correlation measurements are in general not recommended, except for experienced users. The problem is the corruption of correlation data at short lag times due to the after-pulsing effect of the detectors. After-pulsing refers to the finite probability of generating a second induced detection signal after a photon-generated event, and occurs in all sensitive photon detectors currently used for light scattering measurements.

Pseudo-cross correlation is the process of splitting the measured scattered light into two paths (via a ‘Y’ shaped fiber detection system) that are then detected by two distinct photon detectors.  After-pulsing events are unique to each detector and thus cross-correlation of their signal leads to its suppression; only the true correlation function remains. 
Pseudo-cross correlation is distinct from 3D cross-correlation in that the former *does not* suppress multiple scattering.  3D Cross-Correlation suppresses multiple scattering AND the effects of detector after-pulsing.

Pseudo-Cross Correlation should ideally be performed ‘in the scattering plane’ due to its more robust alignment, improved efficiency and improved polarization control in comparison to using a single beam of the 3D geometry.  This is of even greater importance when working at extreme temperatures and/or with very weakly scattering samples. This high performance Pseudo-Cross setup is provided as a default feature with the LS Spectrometer.

Which correlation mode should I use?

The best correlation mode to choose for your measurement depends on 3 parameters:

1.  Sample turbidity:  For samples displaying any multiple scattering, you must measure in either 3D cross-correlation or Modulated 3D cross-correlation.  If the cross-correlation intercept measured for this sample (at all scattering angles of interest) is less than that of a dilute reference sample (with the same solvent refractive index as the sample), then multiple scattering is present.  You must be careful to measure across all angles of interest, as the presence of multiple scattering can have a strong angular dependence.  If the intercept is the same as that of the dilute reference sample across all angles of interest, then you can safely measure in pseudo cross-correlation or autocorrelation (you cannot use autocorrelation if your sample displays fast dynamics - see below).

2.  Sample dynamics:  If you have for example a small particle dispersion (say 30nm radius) then you will notice that the correlation function decays very quickly.  This is because the particles are moving very quickly in the solvent due to Brownian motion. However, if you have a large particle dispersion or a viscous solvent you will notice that the correlation function decays very slowly.  Similarly, the slow decay is caused by the slow motion of the scatterers in the sample. 

3.  Scattering angle:  Due to the fact that the length scales being probed by a light scattering experiment depend on the scattering angle, you will notice that the correlation function measured of a sample at high angle (eg 140 °) will decorrelate much faster than that measured for the exact same sample at low scattering angle (eg 30 °). 

For very fast decays of the correlation function (caused usually by the combination of fast dynamics and high scattering angles) you will notice that the Modulated 3D mode may be insufficient because it fails to capture very short lag times of the correlation function.  In such a case you should use 3D Cross rather than Mod3D since the standard 3D mode can measure at timescales 2 orders of magnitude smaller than for Mod3D.  If you can verify that the sample does not exhibit multiple scattering (again by comparing the cross-correlation intercept of your sample to a dilute reference sample), then you can also measure in Pseudo-Cross Correlation mode.  Auto-Correlation measurements are not valid for samples displaying fast dynamics due to detector after-pulsing, causing a sharp rise of the correlation function at short lag times.
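The decision logic above can be sketched as a small helper function. Everything here (the function name, inputs, and the comparison tolerance) is our own illustration, not part of the LSI software:

```python
def choose_mode(sample_intercept, dilute_intercept, fast_dynamics, tol=0.02):
    """Pick a correlation mode per the guidance above. `fast_dynamics`
    flags decays too fast for Modulated 3D; `tol` is an assumed tolerance
    for comparing intercepts against the dilute reference."""
    multiple_scattering = sample_intercept < dilute_intercept - tol
    if multiple_scattering:
        # Multiple scattering must be suppressed; fall back to 3D Cross
        # when the decay is too fast for Modulated 3D.
        return "3D Cross" if fast_dynamics else "Modulated 3D"
    # No multiple scattering: pseudo-cross is safe (auto-correlation is
    # only for slow dynamics and experienced users).
    return "Pseudo-Cross"
```

In a real workflow, the intercept comparison would be made at every scattering angle of interest, as emphasized above.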

How can one optimize the detector count rate?

The easiest and most reliable way to optimize measurement count rate is to use the automated intensity regulation feature available on your (3D) LS Spectrometer. 

Ideally, measurements should in general be made below about 500 kHz and above approximately 100x the detector dark count (dark counts are typically below 250 Hz).  The APDs have a strong non-linearity that affects both the count rate and the intercept.  At 5 MHz, the actual intensity is nearly 1.5x what the APD measures.  At 1 MHz, the correction is on the order of 3%.  In the ideal range of 200-500 kHz the non-linearity is negligible.

Count rate optimization should be performed in the same way in all correlation modes.  In cross-correlation, twice the light (two beams instead of one) enters the scattering volume, so the intensity will always be ~2x.  In all modes, we advise setting the intensity in the range of 200-500 kHz.  If you exceed ~500 kHz, the intercept will start to slowly decrease and you will begin to lose linearity of the signal.

The Modulated 3D scheme cuts the total amount of light entering the scattering volume by about a factor of 2: each of the two beams is on 50% of the time. In this scheme, we are throwing away some of the detected light.  These photons that get discarded do not show up in the count rate trace in the instrument software and do not affect the correlation function.  As a result, the APD itself may be 'seeing' 500 kHz of light, but the amount of light actually reported in the software (after throwing away unwanted photons) is maybe only 200 kHz.  Thus when running a measurement in Modulated 3D at 500kHz, the actual intensity hitting the APDs is much higher, and already some ways into the non-linear regime (nonlinearity at 1500kHz for example is something like 5%).  To figure out how much light is actually arriving to the APDs, the easiest technique is to quickly switch to another correlation mode (such as 3D Cross or Pseudo-Cross) in which no photons are being thrown away, yet the incident light on the scattering volume is approximately the same.  This is in some ways a limitation of the Modulated 3D technique.  However, one must remember that 75% of the photons in the 3D Cross mode are just noise, thus the effective photon statistics of a measurement in Modulated 3D are approximately equal to that of 3D Cross.  More details on the Modulated 3D technique can be found here: https://lsinstruments.ch/en/theory/dynamic-light-scattering-dls/modulated-3d-cross-correlation.

What type of cuvettes should I use?

The LS Spectrometer is supplied with two sets of cylindrical cuvettes with 10 mm and 5 mm width respectively. You may choose either cuvette according to the sample volume available.

SLS and DLS measurements on highly turbid samples are possible using a 3D LS Spectrometer equipped with the Sample Goniometer option. Using this configuration, the light scattering experiment is performed close to the corner of a square cell, thus reducing the laser path length through the sample down to 0.2 mm.

All cuvettes and other consumables can be ordered from our online shop: https://shop.lsinstruments.ch/.

Making measurements at high count rates (> 500 kHz)

LS Instruments recommends avoiding high count rate measurements (> 500 kHz average) when not necessary, due to complexities arising from photon detector non-linearities, and strongly discourages users from making measurements above 2 MHz.  However, operation at 500 kHz - 2 MHz average count rates can be very useful for reducing measurement times.
 
While the detectors offered with the 3D LS Spectrometer are specified for operation up to 10 MHz, it is strongly suggested not to exceed 2 MHz for two reasons.  The first is that, in particular for samples with slow dynamics, the instantaneous intensity at the detector could reach close to the damage threshold of the device.  Second, the detector response non-linearity becomes relatively high and thus accurate correction of this becomes difficult.  Detector non-linearity arises from the short ‘dead-time’ after a photon detection event during which the detector is effectively turned off.
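The dead-time mechanism described above is often described by the standard non-paralyzable detector model, in which a measured rate m relates to the true rate n via m = n/(1 + n*tau). The sketch below inverts this relation; the dead-time value is purely illustrative and is our own assumption — the actual non-linearity of your detectors should be measured as described under Count Rate Correction below:

```python
def deadtime_corrected(measured_hz, tau_s=45e-9):
    """Invert the non-paralyzable dead-time model: n = m / (1 - m*tau).
    tau_s is an illustrative dead time, NOT a specification of the
    detectors shipped with the instrument."""
    return measured_hz / (1.0 - measured_hz * tau_s)
```

With this model the correction is below 1% at 200 kHz but grows rapidly above 1 MHz, consistent with the general behavior described in this section.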

For DLS measurements, high count rates (500 – 2,000 kHz) will lead to a small reduction of the correlation intercept, yet require no correction for proper interpretation of the correlation function.  Statics measurements conducted at high count rates are subject to much greater complexities arising from the need to correct for the detector non-linearity.  The count rate as well as the intercept must be corrected.

1. Count Rate Correction:
If you look at the output file of an angle-dependent script, the intensity of both channels is given, as well as a 'mean CR * \sin(\theta)'.  The intensity values are those actually measured by the detectors.  The mean value is defined as \sqrt{I_1 \cdot I_2} \cdot \sin(\theta).  The \sin(\theta) factor accounts for the angular dependence of the scattering volume.  In order to correct for detector non-linearity, you must first correct each channel and then recompute the geometric mean scaled by \sin(\theta).  Please also view the section on properly correcting SLS measurements below.

To measure the non-linearity, make measurements of a strongly scattering sample for varying incident intensities.  Construct a plot of count rate versus incident intensity.  Extrapolate the plot to zero, normalize the low-intensity slope to 1, and then compute the deviation as a function of count-rate.  Fit this deviation vs count rate and use this function to correct the count rate data as described above.  This must be done separately for each detector - the non-linearity is a property of each specific detector.
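The fitting and correction procedure above can be sketched as follows. This is an illustrative implementation under our own choices (polynomial fit of the deviation, input as arrays of incident intensity and measured count rate per detector); it is not the LSI software's algorithm:

```python
import numpy as np

def nonlinearity_correction(incident, measured, deg=3):
    """Fit the deviation of the measured count rate from linearity (as a
    function of count rate) and return a function mapping a measured rate
    to a corrected rate, per the procedure above. Apply the returned
    function to each detector channel separately, then recompute the
    geometric mean scaled by sin(theta)."""
    incident = np.asarray(incident, float)
    measured = np.asarray(measured, float)
    # Low-intensity slope defines the linear response (normalized to 1).
    slope = np.polyfit(incident[:3], measured[:3], 1)[0]
    deviation = measured / (slope * incident)      # ~1 at low rates, <1 at high
    # Fit deviation vs (scaled) count rate; scaling keeps the fit well
    # conditioned at MHz-range count rates.
    scale = measured.max()
    coeffs = np.polyfit(measured / scale, deviation, deg)
    return lambda m: np.asarray(m, float) / np.polyval(coeffs, np.asarray(m, float) / scale)
```

As the text notes, this must be done per detector: each channel gets its own correction function built from its own calibration data.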

2.  Intercept Correction:
The non-linearity of the detectors will cause a slight decrease in the cross-correlation intercept at high count rates.  While the cross-correlation intercept is angular-dependent, the non-linear correction that is applied is equivalent for all angles (i.e., \beta(\theta) \cdot NLCF, where \beta is the cross-correlation intercept and NLCF is the non-linear correction factor).  Compute the non-linear correction factor by measuring a dilute (non-multiply-scattering) sample at any scattering angle at 6 to 9 different incident intensity levels such that the detector count rates approach 2-3 MHz.  Plot and fit the cross-correlation intercept as a function of the geometric mean of the uncorrected count rates, i.e., \sqrt{I_1 \cdot I_2}.
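A minimal sketch of this intercept calibration (our own illustration: a linear fit of intercept vs mean count rate is assumed; use whatever functional form actually describes your data):

```python
import numpy as np

def intercept_nlcf(mean_rates, intercepts):
    """Fit the measured cross-correlation intercept vs the geometric-mean
    count rate sqrt(I1*I2) and return NLCF(rate) = beta(rate -> 0) / beta(rate),
    so that beta_corrected = beta_measured * NLCF(rate)."""
    a, b = np.polyfit(mean_rates, intercepts, 1)   # beta ~ a*rate + b
    # b is the extrapolation of the intercept to zero count rate.
    return lambda rate: b / (a * np.asarray(rate, float) + b)
```

The returned factor is then applied uniformly across angles, as described above.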

What are best practices for Cumulant analysis?

The bounds for the cumulant analysis should be set as follows: the lower bound (at short lag times) should sit well within the plateau of the correlation function but exclude any low-channel artifact (after-pulse noise); the upper bound (at long lag times) should sit at a lag time where the correlation value is still larger than the noise in the baseline, but not so far into the decay as to render the cumulant expansion invalid. In practice, the last channel should be kept low enough to stay out of the baseline and to maintain a good match between the fit and the measured correlation function. This can be assessed by checking the box "Show Cumulant Fit" in the analysis tab.  A general rule of thumb is to set the 'decay factor' (the 'channel scaling method' in the settings tab must be set to 'decay factor') to about 0.85-0.90, 0.60-0.70, and 0.40-0.50 for the third, second, and first order cumulant fits, respectively.
For very turbid samples where the intercept falls, or for short duration or low-count rate measurements where there is substantial noise on the correlation function (which obviously you should try to avoid), it would be safer to lower the decay factor ranges given above.
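The bound-picking guidance above can be sketched as a small helper. The cutoff values here are illustrative assumptions to be tuned per instrument and sample; the function name and input format are our own:

```python
def cumulant_fit_bounds(lags, g2m1, noise_floor=5e-3, artifact_cutoff_s=1e-6):
    """Pick first/last correlator channel indices for a cumulant fit:
    start in the plateau past the short-lag (after-pulse) artifact region,
    and stop before the correlation function falls into the baseline noise.
    `lags` are channel lag times in seconds, `g2m1` the g2-1 values."""
    lo = next(i for i, t in enumerate(lags) if t >= artifact_cutoff_s)
    hi = max(i for i, y in enumerate(g2m1) if y > noise_floor)
    return lo, hi
```

The resulting window should still be inspected visually against the fit, as described above.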

How does the sample temperature relate to the circulator setpoint?

The temperature control system of the 3D LS Spectrometer consists of several elements.  The temperature is controlled by an external refrigerated/heated circulator.  This device circulates refrigerated/heated liquid through a custom heat-exchanger housing in the 3D LS Spectrometer which both holds, and is immersed in, the index-matching vat.  A platinum resistance temperature detector (RTD) probes the temperature in the index-matching vat, and the sample temperature is displayed in the LSI software, LsLab, using a custom calibration curve.  This calibration curve is necessary since the sample temperature deviates slightly from that measured in the vat, due to a position dependence of the vat temperature resulting from the inability to constantly mix the fluid. This calibration is performed upon installation of the instrument and should be re-performed yearly.  It consists of measuring the relationship between the temperature output by the software and the actual temperature measured in the scattering volume (with a small temperature sensor placed in the index-matching fluid at the approximate location of the scattering volume).

Compatible external circulators can be controlled remotely via LsLab.  Such remote control enables temperature-dependent measurement scripts to be run and furthermore allows the user to have confidence that the sample temperature (not just the circulator temperature) has reached a steady state.  An additional custom calibration accounts for thermal losses between the circulator and the index matching vat.  This calibration is performed upon installation of remote temperature control and should be re-performed yearly. 

It should be noted that accurate light scattering measurements are only possible in the absence of convection.  Convection can occur in a sample that is not fully equilibrated or is overfilled.  Please see the section below on How to make accurate measurements at high temperatures for more details. 

Can one correct the temperature/viscosity of a measurement?

In the situation where the solvent was not properly selected, the easiest solution is to modify the saved data files (.dat) with the correct viscosity and temperature, both of which are required to properly reanalyse the data.  You can then open this modified file in the LSI software and reanalyse the data. 

Procedure:
Determine correct solvent viscosity. Then, modify the header of your measurement file(s) with the correct temperature and viscosity as determined above. Save this file (keeping the .dat extension), open it in the LSI software, and then go to the 'Analysis' tab to re-analyse the data.  The new result can be saved. 

An example header as found in each measurement file is shown below; the two values to modify are the Viscosity (mPas) and Temperature (K) lines.  Important: the format of the file must not be changed, otherwise it will not load properly (i.e., do not add extra tabs, returns, spaces, etc.):
Cross Correlation
Scattering angle: 30.0
Duration (s): 60
Wavelength (nm): 632.8
Refractive index: 1.330
Viscosity (mPas): 0.888
Temperature (K): 298.3
Laser intensity (mW): 3.114097
Average Count rate  A (kHz): 903.4
Average Count rate  B (kHz): 805.4
Intercept: 0.2508
Cumulant 1st 41.06
Cumulant 2nd 26.72 15.42
Cumulant 3rd 25.20 16.61
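A minimal sketch of such an edit in Python (the field names follow the header example above; the function and its interface are our own illustration). It replaces only the two target lines and leaves everything else untouched, preserving the file format:

```python
def fix_header_lines(lines, viscosity_mpas, temperature_k):
    """Return a copy of the measurement-file lines with the Viscosity and
    Temperature header fields replaced; all other lines are untouched
    (no extra tabs, returns, or spaces are introduced)."""
    out = []
    for line in lines:
        if line.startswith("Viscosity (mPas):"):
            out.append(f"Viscosity (mPas): {viscosity_mpas:.3f}")
        elif line.startswith("Temperature (K):"):
            out.append(f"Temperature (K): {temperature_k:.1f}")
        else:
            out.append(line)
    return out
```

In practice, read the .dat file's lines with a plain text reader, pass them through this function, and write them back keeping the .dat extension, then reopen the file in the LSI software to re-analyse.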

How to make accurate measurements at high temperatures?

Due to the off-axis incidence angle of the two beams of the 3D geometry, the instrument is inherently more sensitive to changes in operating temperature (via the temperature dependence of the refractive index of the index-matching fluid) and to sample refractive index. It is therefore normal to see gradual changes of the intercept with temperature or when moving between solvents with greatly different refractive indices.

For single DLS measurements this is not a great concern unless working at temperature extremes.  If you often make measurements at more than 30°C above or below ambient, we suggest considering the Focal Compensation option which automatically corrects the alignment for temperature-dependent effects.  Without this, high temperature measurements can suffer from a greatly reduced cross-correlation intercept.

For SLS measurements this issue is slightly more problematic.  A good set of reference measurements is critical to achieving an accurate normalization of measured SLS data.  For best results, we therefore recommend making reference measurements at the sample measurement temperature (within ±10 °C of the temperature at which the measurement is made) and with a reference having the same solvent refractive index as that of the sample.  Use of the Auto Alignment Compensation module would be of great use here as well.

One further very important issue when working at temperatures far from ambient is the possibility of convection within your sample.  Sample cuvettes must always be filled to a level *lower* than that of the decalin index matching fluid into which they are inserted.  Otherwise, part of the sample will be in strong thermal contact with the ambient air, leading to temperature gradients within the sample.  Temperature gradients can lead to convection, which will compete with the Brownian motion of the particles that laser light scattering is attempting to measure. At high temperatures, convection is nonetheless possible even in a properly filled sample, because there are unavoidable temperature gradients in the index-matching bath (the magnitude of which scales with the difference between ambient temperature and the instrument setpoint) arising from the inability to mix this fluid. To minimize the possibility of convection, reduce your sample volume to the absolute minimum necessary.

How is the average intensity computed in the summary file?

The average intensity is computed as the geometric mean of the intensities of the two detection channels multiplied by sin(\theta_{scattering}) where \theta_{scattering} is the scattering angle.

In a perfectly aligned system, the angular dependence of the count rate for an isotropically scattering sample can be approximated by a sin(\theta) dependence to first order. For this reason we (and most other SLS instrument providers) present scattered intensity data as normalized by the quantity sin(\theta_{scattering}) when it is collected and summarized as a function of scattering angle. The reason for this approximate sin(\theta) dependence is due to the angular-dependent size of the scattering volume formed by the overlap of the incident beam and the focal volume of the detection optics.
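As a minimal illustration of the formula above (variable names are our own), the reported mean can be computed as:

```python
import math

def mean_count_rate(i1, i2, theta_deg):
    """Geometric mean of the two detection-channel intensities scaled by
    sin(theta), as reported in the summary file."""
    return math.sqrt(i1 * i2) * math.sin(math.radians(theta_deg))
```

For two equal channels at 90°, the result is simply the channel intensity itself, since sin(90°) = 1.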

Should my SLS reference measurement show no angular dependence?

No, not in general. As explained in the previous entry, even in a perfectly aligned system the count rate for an isotropically scattering sample shows an approximately sin(\theta) angular dependence (arising from the angular-dependent size of the scattering volume), which is why intensity data are presented normalized by sin(\theta_{scattering}).

In a typical SLS experiment, the measured angular-dependent intensity data will be normalized to a reference measurement, typically filtered toluene. For relative intensity measurements the purpose of this reference step is to normalize out the instrument response. The ideal sin(\theta) dependence could be assumed, but it is more precise to simply measure the actual instrument response. In this way the measured data is insensitive to any imperfections in the alignment.

There is always going to be *some* non-ideal angular dependence to the count rate regardless of what instrument you are using and what state the alignment is in. Clearly if the instrument is in a better alignment state, then this correction will be closer to the ideal sin(\theta) behavior. 

Due to the off-axis incidence angle of the two beams of the 3D geometry, the instrument is inherently more sensitive to changes in operating temperature (via the temperature dependence of the refractive index of the index-matching fluid) and to sample refractive index. It is often also more difficult to achieve a 'perfect' alignment for the 3D geometry.  A good reference measurement is therefore critical to achieving an accurate normalization of measured SLS data. For best results, we recommend making reference measurements at the sample measurement temperature (within ±10 °C of the temperature at which the measurement is made) and with a reference having the same solvent refractive index as that of the sample.  For more information on making measurements at elevated temperatures, please see the section above on How to make accurate measurements at high temperatures.

How to properly correct SLS measurements?

When making angle-dependent intensity measurements, one should in general normalize the data to a reference measurement.  This reference is typically done with filtered toluene due to its isotropic scattering profile and its known 'Rayleigh Ratio' (see here for more info:  https://lsinstruments.ch/en/theory/static-light-scattering-sls/excess-rayleigh-ratio), which is important for absolute measurements including molecular weight determination.  The measured data are then divided by the normalized reference data on an angle-by-angle basis.  Typically such a reference is re-measured every couple of months to check the instrument state, and that single measurement is used to normalize all data.  The purpose of this normalization is to account for any alignment errors.  Proper procedure includes preparing a highly filtered sample of toluene (using a 0.22 micron hydrophobic PTFE filter) in the same geometry cuvette in which you will perform your measurement.
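The angle-by-angle division can be sketched as follows. Scaling the reference by its mean is our own illustrative choice (it preserves the overall intensity level of the sample data); for absolute measurements the toluene Rayleigh ratio enters the normalization instead:

```python
import numpy as np

def normalize_to_reference(sample_i, reference_i):
    """Divide sample intensities by the reference measurement angle by
    angle. The angle grids of the two measurements must match, and the
    reference is scaled by its own mean before dividing."""
    ref = np.asarray(reference_i, float)
    return np.asarray(sample_i, float) * ref.mean() / ref
```

For an isotropically scattering sample, whose raw data follow the same angular response as the reference, the normalized result comes out flat, which is the purpose of the procedure.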

The above recommendations are typical for standard dilute light scattering measurements. However, a number of special considerations apply when attempting to make statics measurements on turbid samples. An exhaustive list of corrections to measured turbid statics data includes:

Alignment reference (typically toluene): As an alignment reference, an aqueous ultra-small particle dispersion can be used in order to work with a reference whose refractive index is similar to that of the sample to be measured. In many cases, particularly when working with sensitive square cell measurements, this can be important for proper normalization.

Solvent reference (sample solvent): For turbid samples, a solvent reference is typically unnecessary, since any solvent scattering is overwhelmed by the scattering of the sample itself.
 
Dilute cross-correlation intercept reference (dilute small particle dispersion): The intercept reference can be any dilute particle dispersion, preferably with the same solvent refractive index as the sample.  It does not need to be a dilution of the actual sample – the purpose is only to measure the angular-dependent cross-correlation intercept in the absence of multiple scattering.

Turbidity reference (turbid small particle dispersion): A measurement of sample turbidity is essential for static measurements made in a square cell, and is ideally also made for round cell measurements. This measurement is typically made with the same dispersion as used for the cross-correlation intercept reference, but at a much higher concentration, so as to achieve an intercept of approximately half that of the dilute case. Measuring the actual angular dependence of the turbidity, rather than assuming an ideal alignment, is preferable. Please see the application note on turbid statics measurements in a square cell for more details: https://lsinstruments.ch/en/applications/application-notes/characterizing-concentrated-colloidal-suspensions.

Non-linear correction: see the section above on Making measurements at high count rates.

It is important to note that measurements are in general dependent on temperature, sample cell geometry, and sample refractive index.  For best results, we recommend taking all references for conditions as close as possible to the measurement conditions.

DWS

Can one measure colored or absorbing samples with DWS?

The DWS RheoLab software from LS Instruments enables the measurement of weakly absorbing samples. It requires the user to enter the absorption length l_a in the 'Parameters' tab of the DWS RheoLab software. Moreover, for backscattering measurements, it requires additional input of the transport mean free path l^*. The absorption length l_a has to be measured previously in a separate experiment. We can distinguish two cases:

1) Absorbing solvent with non-absorbing tracer particles:
If we consider an absorbing solution (e.g. ink in water) without scatterers, the transmission through a sample of thickness L is attenuated according to the Beer-Lambert law: exp(-L/l_a), where e = 1/l_a is also sometimes called the extinction coefficient. Typically, l_a or e can be measured at the DWS RheoLab laser wavelength on a spectrophotometer. Note that no tracer particles must be added for this spectrophotometric measurement.
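Inverting the Beer-Lambert relation gives l_a directly from a measured transmission. Below is a minimal sketch; the 60% transmission and 5 mm cell thickness are hypothetical example values, not recommendations.

```python
import math

def absorption_length(transmission, cell_thickness_mm):
    """Absorption length l_a from the Beer-Lambert law T = exp(-L / l_a),
    i.e. l_a = L / ln(1/T). Returned in the same units as the thickness."""
    return cell_thickness_mm / math.log(1.0 / transmission)

# Hypothetical: 60% transmission through a 5 mm cell of the pure,
# tracer-free absorbing solvent at the laser wavelength.
l_a = absorption_length(0.60, 5.0)  # l_a in mm
```

The result can be sanity-checked by verifying that exp(-L/l_a) reproduces the measured transmission.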

2) Absorbing tracer particles:
Samples which contain elements that both scatter and absorb (e.g. dye-doped particles) are significantly more difficult to measure. A spectrophotometer comprising an integrating sphere must be used and the data fitted to obtain both l_a and l^*. Please contact LS Instruments for more information in such cases.

How to choose tracer particles?

DWS can only measure the rheological properties of a sample if light is scattered sufficiently within it, such that the propagation of the light can be considered diffusive. Each photon that travels through the sample must be scattered by the particles in the sample many times, to ensure that the path it takes is totally random. The sample therefore needs a sufficient concentration of particles, which means it will have a milky or turbid appearance; milk is actually a perfect sample for DWS.
If your sample is more or less transparent (e.g. water), it obviously does not scatter light sufficiently. In this case, one can simply add particles to the sample to make it sufficiently turbid; we call these tracer particles.
Thus, if your sample is transparent, you will need to do some sample preparation to use DWS. Despite this additional effort, results obtained with tracer particles are typically the most precise, since you can choose exactly the particles best suited to your sample. Here are some guidelines for selecting the right tracer particles:

Size: As a rule of thumb, particles scatter best if they are approximately the size of the wavelength of the scattered light. The DWS RheoLab uses a laser with a wavelength of 685 nm; particles with a diameter of 200 – 800 nm work fine. From experience, smaller particles usually give better results, probably because the smaller they are, the less likely they are to affect the properties of the system. Nevertheless, you should make sure that your particles are large enough to probe the structure of your sample! Imagine you are trying to measure a water-based gel network with a 700 nm mesh size. Particles much smaller than the mesh size, e.g. 200 nm, will not be restricted by the gel network and can simply float through the mesh. You will need particles of at least 700 nm diameter; otherwise, all you measure is the rheological properties of water.
Furthermore, the particle size distribution of the tracers should be monodisperse for best results. Experience has shown that a Gaussian distribution of about ±30 nm around a single mean size is not a problem, but a bimodal distribution will not work.

Refractive index: The difference in refractive index between the tracer particles and the surrounding solvent significantly influences how strongly the particles scatter light. The larger the difference, the more strongly a particle will scatter.

Shape: The common algorithms used for DWS assume that the particles probing your sample are spheres performing Brownian motion. So if your tracer particles are rods or discs, you will still get a result, but it is probably not comparable to a mechanical rheometer. We have found that randomly shaped particles that average out to a sphere still work fine. Titanium dioxide (TiO2) is a good example: TiO2 particles typically look like asteroids or gravel, and each particle has a unique shape. On average, however, they are shaped like spheres, and the DWS results obtained are very similar to those obtained from perfectly spherical latex particles.

Concentration: How many tracer particles should one add to obtain sufficient scattering? The scattering properties of a sample are characterized by the “transport mean free path”, called l*. For a sample to be sufficiently turbid, L/l* (where L is the thickness of the cuvette) should be larger than 5 and no more than 40. You can calculate l* of your sample using the online scattering calculator programmed by Pavel Zakharov and hosted at LS Instruments: https://lsinstruments.ch/en/learning#mie-calculator.
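The L/l* rule of thumb above is easy to check once l* is known (e.g. from the Mie calculator). A minimal sketch, with a hypothetical l* value and cuvette thickness:

```python
def turbidity_ok(cuvette_thickness_mm, l_star_mm):
    """Check the rule of thumb 5 <= L/l* <= 40 for DWS measurements.
    Returns (is_in_range, ratio)."""
    ratio = cuvette_thickness_mm / l_star_mm
    return 5.0 <= ratio <= 40.0, ratio

# Hypothetical: l* = 0.3 mm (from the Mie calculator) in a 5 mm cuvette.
ok, ratio = turbidity_ok(5.0, 0.3)  # ratio ≈ 16.7, within range
```

If the ratio falls below 5, one would add more tracer particles (or use a thicker cuvette); above 40, dilute the dispersion or use a thinner cuvette.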

Chemical composition: The most common particles used are latex (polystyrene), titanium dioxide and silica. Of course, you need to take care that your tracer particles do not change the rheological properties of your sample when added. Furthermore, the sample should not affect the tracer particles. Latex is the most sensitive, as it may easily dissolve or swell in more aggressive solvents, but it works well in most water-based samples. Ultimately, the question of which tracer particle to use must be decided individually for each sample. Feel free to contact us for more information!

Density: As DWS assumes that particles are performing Brownian motion in the sample, they should not be much denser than the sample medium surrounding them; otherwise they will sediment. TiO2 has a high density compared to water and only works well for short measurements in water-based systems, provided the sample has been shaken before the experiment. If the sample has a higher viscosity, sedimentation will be slower.
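To judge whether sedimentation matters on the timescale of a measurement, one can estimate the terminal velocity from the standard Stokes formula v = 2 r² (ρ_p − ρ_f) g / (9 η). The particle and fluid values below are hypothetical round numbers for TiO2 in water, chosen for illustration only.

```python
def stokes_velocity(radius_m, rho_particle, rho_fluid, viscosity_pa_s):
    """Stokes terminal sedimentation velocity (m/s) of a sphere:
    v = 2 r^2 (rho_p - rho_f) g / (9 eta). Densities in kg/m^3."""
    g = 9.81  # gravitational acceleration, m/s^2
    return 2.0 * radius_m**2 * (rho_particle - rho_fluid) * g / (9.0 * viscosity_pa_s)

# Hypothetical: 300 nm diameter TiO2 (density ~4230 kg/m^3) in water at 20 °C.
v = stokes_velocity(150e-9, 4230.0, 998.0, 1.0e-3)  # m/s
```

Comparing this velocity with the cuvette height and the measurement duration indicates whether shaking before the experiment is sufficient, as suggested above for TiO2 in water.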

Dependency of the sample turbidity on particle size/material

DWS is applied to samples with high turbidity. It is thus important to understand which factors influence the turbidity of the sample. In general, for a given laser wavelength the turbidity of a sample consisting of dispersed particles depends on

- particle concentration
- particle size
- refractive index of the particle relative to the solvent

The turbidity can be quantified by the transport mean free path l^* (the distance a photon travels until its propagation direction is randomized). In dilute conditions (concentrations < 5 % v/v) the turbidity is directly proportional to the particle concentration. However, the dependency on the particle size and refractive index is more complex. Note that an approximate value for l^* can be calculated using the Mie-Calculator, which might help for the planning of a new experiment: https://lsinstruments.ch/en/learning#mie-calculator.

The calculation shows that particles with a high refractive index (TiO2) yield samples which are much more turbid (smaller value of l^*) than those with moderate (PS) or low (SiO2) refractive index. Moreover, there is a strong size dependency, with maximal turbidity for particles with diameter in the range of 200 to 500 nm. The stability and monodispersity are generally better for PS than for TiO2 particles. Therefore, as tracer particles in water-based systems, we recommend PS particles with a diameter of 200 to 600 nm. If your system is not water-based, you must check whether PS particles are stable in your specific system. Finally, PS particles in water are also recommended as a reference sample for the DWS RheoLab due to their narrow size distribution.

What is the optimal fitting range - accessible frequency range in DWS?

The frequency range where G'(ω) and G''(ω) can be measured corresponds directly to the fitting range of the intermediate scattering function (ICF) g_1(τ) via ω = 2π/τ. The optimal fitting range of the ICF is restricted to the time range where the ICF decays; the time range where the ICF is flat should not be included in the fit. A typical example is shown in the figure below.
 

Therefore, the accessible frequency range of G'(ω) and G''(ω) is determined by the measured sample dynamics, which depend in turn on the sample, but also on the measurement parameters. The most important points to consider are discussed below:

1) For samples with low viscosity, the accessible frequency range is shifted to higher values compared to samples of high viscosity. The Brownian dynamics are faster and thus the decay in the ICF occurs at smaller values of τ, which corresponds to a shift of the accessible frequency range for G'(ω) and G''(ω) towards higher frequencies.

2) The concentration and the scattering properties of the tracer particles influence the accessible frequency range. Higher scattering contrast and higher concentration of tracer particles result in a smaller value for the transport mean free path l^*. As a consequence, smaller displacements of the individual tracer particles lead to a decay in the ICF, which results in a shift of the decay towards smaller values of τ and thus in a shift of the accessible frequency range for G'(ω) and G''(ω) towards higher frequencies.

3) The cuvette thickness also influences the accessible frequency range of G'(ω) and G''(ω). Thicker cuvettes result in a decay of the ICF at smaller values of τ and shift the accessible frequency range for G'(ω) and G''(ω) towards higher frequencies.

4) Note that the instrument also imposes "hard" limits on the accessible frequency range due to the correlator specifications. The hard limits are about 0.1 Hz (lower limit), given by the maximal lag time τ of the correlator of about 20 s, and about 10 MHz (upper limit), given by the correlator time resolution.

To summarize, the actual frequency range of G'(ω) and G''(ω) will lie somewhere between 0.1 Hz and 10 MHz and depends mainly on the sample properties. However, to some extent, the accessible frequency range can be tuned, e.g. by choosing the tracer particle concentration and the cuvette thickness.
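The mapping from the ICF fitting range to the accessible frequency range follows directly from ω = 2π/τ quoted above. A minimal sketch, using hypothetical fit limits of 1 µs and 0.1 s (note that ω here is an angular frequency in rad/s):

```python
import math

def frequency_range(tau_min_s, tau_max_s):
    """Map an ICF fitting range in lag time tau (s) to the angular-frequency
    range of G'(omega) and G''(omega) via omega = 2*pi/tau.
    The longest lag time sets the lowest frequency and vice versa."""
    return 2.0 * math.pi / tau_max_s, 2.0 * math.pi / tau_min_s

# Hypothetical: ICF decay fitted between tau = 1 microsecond and 0.1 s.
omega_lo, omega_hi = frequency_range(1e-6, 0.1)  # rad/s
```

This illustrates the inverse relationship discussed in points 1) to 3): anything that shifts the ICF decay to smaller τ shifts the accessible frequency window upwards.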