7.6 Data acquisition and processing

Written by the Fiveable Content Team • Last updated August 2025

Data acquisition fundamentals

Data acquisition (DAQ) is the process of measuring physical quantities with sensors and converting those measurements into digital data you can actually work with. In wind tunnel testing, this is where raw aerodynamic phenomena become numbers on a screen, so getting it right determines whether your results mean anything at all.

Sensors and transducers

Sensors detect a physical quantity (pressure, temperature, force) and produce a corresponding electrical signal. A transducer is the broader term for any device that converts energy from one form to another, such as mechanical deformation into a voltage change.

Common sensors in aerodynamics include:

  • Pressure transducers for measuring surface and freestream pressures
  • Thermocouples for point temperature measurements
  • Strain gauges for detecting force-induced deformation in balance systems

Choosing the right sensor matters. You need to match the sensor's range, sensitivity, and response time to the quantity you're measuring. And every sensor must be calibrated against a known reference before testing.

Signal conditioning

Raw sensor output is often weak, noisy, or in the wrong format. Signal conditioning prepares that output for digitization through several steps:

  • Amplification boosts the signal strength, improving the signal-to-noise ratio so small measurements don't get lost in background noise.
  • Filtering removes unwanted frequency content. Low-pass filters cut high-frequency noise, high-pass filters remove slow drift, and band-pass filters isolate a specific frequency range.
  • Excitation and bridge circuits supply the voltage or current needed by resistive sensors like strain gauges and RTDs. A Wheatstone bridge, for example, converts small resistance changes into measurable voltage differences.
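
For a rough sense of the numbers involved, a single active gauge in a quarter-bridge circuit (a standard textbook case, given here only as illustration) produces an output of approximately

$$V_{\text{out}} \approx \frac{V_{\text{ex}}}{4}\cdot\frac{\Delta R}{R} = \frac{V_{\text{ex}}}{4}\, GF \,\varepsilon$$

where $V_{\text{ex}}$ is the excitation voltage, $GF$ the gauge factor, and $\varepsilon$ the strain. With 5 V excitation, $GF \approx 2$, and 1000 microstrain, that is only about 2.5 mV, which is why amplification matters.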

Analog-to-digital conversion

An analog-to-digital converter (ADC) takes a continuous voltage signal and turns it into discrete digital values a computer can store and process.

Two key ADC specs to know:

  • Resolution (bits): A 12-bit ADC divides the input range into $2^{12} = 4096$ discrete levels. A 16-bit ADC gives $2^{16} = 65{,}536$ levels, meaning finer measurement precision but typically higher cost.
  • Sampling rate: How many times per second the ADC reads the signal.

Multiplexing lets a single ADC handle multiple sensor channels by switching between them in rapid sequence. This saves hardware cost but reduces the effective sampling rate per channel.
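
A minimal sketch of both ideas in Python (the voltage range, bit depth, and channel count are assumed example values, not specific hardware):

```python
# Sketch: ADC resolution and multiplexed per-channel rate (assumed example values).

def adc_counts_to_volts(counts: int, n_bits: int, v_min: float, v_max: float) -> float:
    """Convert a raw ADC code to a voltage for an ideal linear ADC."""
    n_levels = 2 ** n_bits
    lsb = (v_max - v_min) / n_levels          # size of one quantization step
    return v_min + counts * lsb

# 12-bit ADC spanning 0-10 V: one count is ~2.44 mV
print(adc_counts_to_volts(2048, n_bits=12, v_min=0.0, v_max=10.0))  # ≈ 5.0 V

# Multiplexing: a 100 kS/s ADC shared across 8 channels
total_rate_hz = 100_000
n_channels = 8
print(total_rate_hz / n_channels)  # effective 12.5 kS/s per channel
```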

Sampling rate considerations

The sampling rate controls how frequently the analog signal is digitized. Choose it poorly and you'll either miss important signal content or collect unnecessarily large data files.

The Nyquist-Shannon sampling theorem sets the floor: your sampling rate must be at least twice the highest frequency present in the signal. If turbulent fluctuations contain energy up to 5 kHz, you need a minimum sampling rate of 10 kHz. Sampling below this threshold causes aliasing, where high-frequency content folds back and masquerades as lower frequencies, corrupting your data.

In practice, oversampling (sampling well above the Nyquist rate) is common because it improves signal-to-noise ratio and gives your digital filters more room to work. Base your sampling rate on the expected frequency content: steady-state force measurements might only need hundreds of samples per second, while turbulence studies could require tens of thousands.
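
A short NumPy sketch of aliasing in action (the 5 kHz test tone and sampling rates are assumed example values): a tone sampled below the Nyquist rate shows up at the wrong frequency.

```python
import numpy as np

# Sketch: aliasing of a 5 kHz tone (assumed example frequency).
f_signal = 5_000.0                      # Hz, highest frequency in the "signal"

def dominant_frequency(fs: float) -> float:
    """Sample a 5 kHz sine at rate fs and return the strongest FFT bin."""
    t = np.arange(0, 0.1, 1.0 / fs)     # 0.1 s record
    x = np.sin(2 * np.pi * f_signal * t)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

print(dominant_frequency(fs=50_000))    # well above Nyquist: ≈ 5000 Hz (correct)
print(dominant_frequency(fs=8_000))     # below 2 * 5 kHz: ≈ 3000 Hz (aliased)
```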

Wind tunnel instrumentation

Wind tunnels create controlled flow conditions so you can isolate specific aerodynamic effects. The instrumentation inside measures pressure, temperature, velocity, and forces on the test model.

Pressure measurement devices

Pressure is one of the most fundamental measurements in aerodynamics. It tells you about aerodynamic loads, flow speed, and surface pressure distributions.

  • Pitot-static tubes measure total (stagnation) pressure and static pressure simultaneously. The difference gives you dynamic pressure $q = P_{\text{total}} - P_{\text{static}}$, from which you can calculate airspeed (see the sketch after this list).
  • Pressure taps and scanners are small holes drilled into a model's surface, connected by tubing to a multi-port pressure scanner. These map the pressure distribution across the model, often at dozens or hundreds of locations.
  • Pressure-sensitive paint (PSP) provides continuous, high-resolution surface pressure maps. It works because the luminescent molecules in the paint are quenched by oxygen: higher surface pressure means more oxygen, which reduces luminescence. A camera captures the intensity field and converts it to a pressure map.
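
For incompressible flow, dynamic pressure relates to velocity through $q = \tfrac{1}{2}\rho V^2$, so $V = \sqrt{2q/\rho}$. A minimal Python sketch with assumed example values:

```python
import math

# Sketch: airspeed from pitot-static measurements, incompressible-flow assumption.
# The pressures and density below are assumed example values.
p_total = 101_925.0      # Pa, stagnation pressure
p_static = 101_325.0     # Pa, static pressure
rho = 1.225              # kg/m^3, sea-level air density

q = p_total - p_static           # dynamic pressure, Pa
v = math.sqrt(2.0 * q / rho)     # airspeed, m/s
print(q, round(v, 1))            # 600.0 Pa, ≈ 31.3 m/s
```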

Temperature measurement devices

Temperature data is important for heat transfer studies and for characterizing boundary layer behavior, especially in high-speed flows where aerodynamic heating matters.

  • Thermocouples (e.g., Type K, Type T) are inexpensive and widely used for point measurements. They generate a voltage proportional to the temperature difference between the measurement junction and a reference junction.
  • Resistance temperature detectors (RTDs) offer better accuracy and long-term stability than thermocouples but respond more slowly to rapid temperature changes.
  • Infrared (IR) cameras provide non-intrusive, full-surface temperature maps. They're particularly useful for detecting boundary layer transition, since turbulent regions transfer more heat and show up at different temperatures.

Velocity measurement techniques

Characterizing the flow field means measuring velocity at many points, often resolving turbulent fluctuations.

  • Hot-wire anemometry uses a very thin heated wire (typically a few micrometers in diameter). Airflow cools the wire, and the electronics maintain either constant temperature or constant current. The power required to maintain the wire's state is related to flow velocity. Hot-wires have excellent frequency response (up to hundreds of kHz) but are intrusive.
  • Laser Doppler velocimetry (LDV) and particle image velocimetry (PIV) are both non-intrusive, optical techniques. They track tiny seed particles carried by the flow using laser light. LDV measures velocity at a single point with high temporal resolution, while PIV captures a full 2D (or even 3D) velocity field in a single snapshot by correlating particle positions between two closely timed laser pulses.
  • Multi-hole probes (e.g., five-hole or seven-hole probes) measure local velocity magnitude and flow direction by comparing pressures at several ports arranged around the probe tip.

Force and moment balances

Force balances measure the total aerodynamic loads (lift, drag, side force, and the three moments) acting on a wind tunnel model.

  • Strain gauge balances detect tiny deformations in a precision-machined flexure element. Each flexure is instrumented with strain gauges wired in a Wheatstone bridge configuration.
  • Internal balances sit inside the model itself, connected to a sting or support strut. They're compact but must withstand the test environment.
  • External balances are mounted outside the test section. The model's support system transmits loads to the balance. These tend to be larger and can handle higher load ranges.

Accurate force measurement depends on careful calibration (applying known loads and recording the balance output) and precise alignment of the balance axes with the wind tunnel coordinate system.
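
As a heavily simplified single-axis illustration (real balances use a full multi-component calibration matrix), calibration reduces to fitting a line between applied loads and bridge output; all numbers below are assumed example data:

```python
import numpy as np

# Sketch: single-axis balance calibration (assumed example data).
applied_load_N = np.array([0.0, 10.0, 20.0, 30.0, 40.0])       # known weights
bridge_output_mV = np.array([0.02, 2.51, 5.03, 7.48, 10.01])   # recorded output

# Least-squares fit: output = sensitivity * load + offset
sensitivity, offset = np.polyfit(applied_load_N, bridge_output_mV, deg=1)

def output_to_load(mv: float) -> float:
    """Invert the calibration to turn a bridge reading into a load."""
    return (mv - offset) / sensitivity

print(round(output_to_load(6.3), 2))   # ≈ 25 N for a 6.3 mV reading
```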

Flight test instrumentation

Flight testing measures aircraft performance and behavior in real atmospheric conditions. The instrumentation must be robust, lightweight, and capable of recording data at high rates during dynamic maneuvers.

Air data systems

Air data systems provide the basic flight parameters: airspeed, altitude, angle of attack, and sideslip angle.

  • Pitot-static systems work the same way as in the wind tunnel, measuring total and static pressure to derive airspeed and pressure altitude.
  • Alpha and beta vanes are small wind vanes mounted on the aircraft that physically deflect with the local flow to measure angle of attack ($\alpha$) and sideslip angle ($\beta$).
  • Air data booms extend sensors forward of the aircraft nose or wing to position them in relatively undisturbed freestream flow, reducing the influence of the aircraft's own pressure field.

Inertial navigation systems

Inertial navigation systems (INS) track aircraft position, velocity, and attitude by integrating outputs from accelerometers and gyroscopes. They don't depend on external signals, which makes them reliable in any environment.

The core hardware is the inertial measurement unit (IMU), which typically contains three-axis accelerometers and three-axis gyroscopes. Two main configurations exist:

  • Strapdown INS mount the IMU rigidly to the aircraft structure. All motion sensing is done computationally. These are lighter and more common in modern systems.
  • Gimbaled INS use a mechanically stabilized platform to keep the sensors aligned with a reference frame. They're more complex but were historically preferred for high-accuracy applications.

INS measurements drift over time due to sensor bias and integration errors, which is why they're typically fused with GPS data.

Global positioning systems

GPS provides absolute position and velocity by measuring the time delay of signals from multiple satellites. A receiver needs signals from at least four satellites to solve for three-dimensional position and time.

  • Differential GPS (DGPS) improves accuracy by using a ground-based reference station at a known location to correct for atmospheric and orbital errors. This can bring position accuracy from meters down to centimeters.
  • GPS/INS integration combines the absolute accuracy of GPS with the high update rate and short-term stability of INS, typically using a Kalman filter to optimally blend both data sources.
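
A heavily simplified one-dimensional sketch of the idea (single axis, constant noise levels, and all rates and trajectories assumed): a linear Kalman filter in which the accelerometer drives high-rate prediction and slower GPS position fixes correct the drift.

```python
import numpy as np

# Sketch: 1D GPS/INS fusion with a linear Kalman filter.
# State x = [position, velocity]; accelerometer is the control input.
# All rates, noise levels, and trajectories below are assumed example values.

dt = 0.01                                   # 100 Hz IMU step
F = np.array([[1.0, dt], [0.0, 1.0]])       # state transition
B = np.array([[0.5 * dt**2], [dt]])         # how acceleration enters the state
H = np.array([[1.0, 0.0]])                  # GPS measures position only
Q = np.diag([1e-4, 1e-3])                   # process noise (IMU errors)
R = np.array([[4.0]])                       # GPS position variance (2 m std dev)

x = np.zeros((2, 1))                        # initial state estimate
P = np.eye(2)                               # initial covariance

def predict(accel: float) -> None:
    """Propagate the state with the accelerometer (INS-style dead reckoning)."""
    global x, P
    x = F @ x + B * accel
    P = F @ P @ F.T + Q

def update(gps_pos: float) -> None:
    """Correct the drifting INS estimate with a GPS position fix."""
    global x, P
    y = np.array([[gps_pos]]) - H @ x       # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

# Run: constant 1 m/s^2 acceleration, GPS fix once per second (every 100 IMU steps)
rng = np.random.default_rng(0)
for step in range(1, 501):
    true_pos = 0.5 * 1.0 * (step * dt) ** 2
    predict(accel=1.0 + rng.normal(0, 0.05))           # noisy accelerometer
    if step % 100 == 0:
        update(gps_pos=true_pos + rng.normal(0, 2.0))  # noisy GPS fix
print(float(x[0, 0]), float(x[1, 0]))  # estimated position (~12.5 m) and velocity (~5 m/s)
```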

Telemetry systems

Telemetry systems transmit flight data in real time from the aircraft to a ground station, allowing engineers to monitor the test as it happens.

  • Data links use radio frequency (RF) or satellite communication channels to send data wirelessly.
  • Pulse code modulation (PCM) is a standard telemetry format that encodes analog measurements into digital data streams for transmission.
  • Real-time telemetry is critical for flight safety: engineers on the ground can spot anomalies immediately and call for test modifications or abort if needed.

Data processing techniques

Raw data from sensors is rarely usable as-is. Data processing converts those raw signals into meaningful engineering quantities and extracts the trends and patterns you care about.

Data filtering and smoothing

Filtering cleans up the signal by removing frequency content you don't want.

  • Low-pass filters are the most common in aerodynamic testing. They attenuate high-frequency noise while preserving the slower signal trend you're measuring.
  • Moving average filters replace each data point with the average of its neighbors. Simple and effective, but they can smear sharp features in the data.
  • Savitzky-Golay filters fit a polynomial to a sliding window of data points. They smooth the signal while better preserving peaks and higher-order features compared to a simple moving average.
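
A short comparison sketch in Python using SciPy's savgol_filter (the noisy test signal and window settings are assumed example values):

```python
import numpy as np
from scipy.signal import savgol_filter

# Sketch: moving average vs. Savitzky-Golay on a noisy signal with a sharp peak.
# The synthetic signal below is an assumed example.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 500)
signal = np.exp(-((t - 0.5) ** 2) / 0.0005)         # narrow peak
noisy = signal + rng.normal(0, 0.05, t.size)

# Moving average: simple, but smears the peak
window = 21
moving_avg = np.convolve(noisy, np.ones(window) / window, mode="same")

# Savitzky-Golay: fits a cubic in each window, preserving the peak better
sg = savgol_filter(noisy, window_length=21, polyorder=3)

print(signal.max(), moving_avg.max(), sg.max())     # SG peak stays closer to 1.0
```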

Noise reduction methods

Beyond filtering, several techniques specifically target improving signal-to-noise ratio:

  • Ensemble averaging: Repeat the same test condition multiple times and average the results. Random noise cancels out while the true signal reinforces. If you average $N$ runs, random noise drops by a factor of $\sqrt{N}$ (see the sketch after this list).
  • Spectral analysis: Transform the data into the frequency domain (via FFT) to identify and remove discrete noise sources. A common example is removing 60 Hz power line interference.
  • Wavelet denoising: Uses wavelet transforms to separate signal from noise in both time and frequency simultaneously. This is especially useful when the noise characteristics change over time.
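
A quick numerical check of the $\sqrt{N}$ scaling for ensemble averaging (synthetic data with an assumed noise level):

```python
import numpy as np

# Sketch: ensemble averaging reduces random noise by roughly sqrt(N).
# The "true" signal and noise level are assumed example values.
rng = np.random.default_rng(2)
n_samples = 1_000
true_signal = np.sin(np.linspace(0, 2 * np.pi, n_samples))

def residual_noise(n_runs: int) -> float:
    """Average n_runs noisy repeats and return the leftover noise (std of error)."""
    runs = true_signal + rng.normal(0, 0.5, size=(n_runs, n_samples))
    ensemble_mean = runs.mean(axis=0)
    return float(np.std(ensemble_mean - true_signal))

print(residual_noise(1))     # ≈ 0.5
print(residual_noise(25))    # ≈ 0.1  (0.5 / sqrt(25))
print(residual_noise(100))   # ≈ 0.05 (0.5 / sqrt(100))
```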

Time-frequency analysis

Standard Fourier analysis tells you what frequencies are present but not when they occur. Time-frequency methods solve this.

  • Short-time Fourier transform (STFT) chops the signal into overlapping time windows and applies an FFT to each one. You get a spectrogram showing frequency content as a function of time, but the time and frequency resolution are linked: improving one worsens the other.
  • Wavelet transforms use variable-width basis functions (wavelets) that are narrow at high frequencies and wide at low frequencies. This gives better time resolution for fast events and better frequency resolution for slow ones.

Both methods are valuable for studying transient aerodynamic events like flow separation onset, buffet, or gust encounters.
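
As an illustration of the STFT, here is a spectrogram computed with SciPy (the chirp test signal and window settings are assumed example values, standing in for a transient event):

```python
import numpy as np
from scipy.signal import spectrogram

# Sketch: short-time Fourier transform of a signal whose frequency changes in time.
# The upward-sweeping chirp below is an assumed example.
fs = 10_000                                  # Hz sampling rate
t = np.arange(0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * (100 + 400 * t) * t)  # instantaneous frequency climbs over 2 s

# nperseg sets the window length: longer windows -> finer frequency, coarser time
freqs, times, Sxx = spectrogram(x, fs=fs, nperseg=512, noverlap=384)

# For each time slice, report the strongest frequency (it should climb with time)
peak_freqs = freqs[np.argmax(Sxx, axis=0)]
print(times[:3], peak_freqs[:3])
print(times[-3:], peak_freqs[-3:])
```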

Statistical analysis of data

Statistical tools quantify the variability in your measurements and help you draw valid conclusions.

  • Mean and standard deviation describe the central value and spread of your data.
  • Probability density functions (PDFs) show the distribution of measured values. Turbulent velocity fluctuations, for example, often follow a roughly Gaussian PDF.
  • Correlation and regression analysis identify relationships between variables (e.g., does drag coefficient correlate with surface roughness?).
  • Hypothesis testing and ANOVA let you determine whether observed differences between test conditions are statistically significant or just due to random scatter.

Error analysis and uncertainty

Every measurement has some degree of error. Uncertainty analysis quantifies how much you can trust your results and is a required part of any rigorous experimental study.

Sources of measurement errors

Measurement errors fall into several categories:

  • Systematic errors (bias): Consistent offsets from the true value. A pressure transducer that always reads 0.5 kPa high has a bias error. These can often be corrected through calibration.
  • Random errors (precision): Scatter in repeated measurements caused by noise, turbulence, or other unpredictable factors. These can be reduced by averaging.
  • Calibration errors: Inaccuracies in the reference standards used during calibration propagate directly into your measurements.
  • Environmental errors: Changes in ambient temperature, pressure, or humidity during a test can shift sensor readings if not accounted for.

Bias vs. precision errors

Understanding the distinction between bias and precision is fundamental:

Bias affects accuracy (how close your average measurement is to the true value). Precision affects repeatability (how tightly your repeated measurements cluster together).

A measurement can be precise but inaccurate (tight grouping, wrong center), accurate but imprecise (correct on average, lots of scatter), or ideally both accurate and precise.

  • Bias errors are corrected through calibration against known standards.
  • Precision errors are reduced by averaging multiple measurements.

Propagation of uncertainty

When you calculate a derived quantity from several measured inputs, each input's uncertainty contributes to the final result. Uncertainty propagation tells you how much.

Taylor series method (RSS method): For a result $R$ that depends on measured variables $x_1, x_2, \ldots, x_n$, the combined uncertainty is:

$$u_R = \sqrt{\sum_{i=1}^{n} \left(\frac{\partial R}{\partial x_i} \cdot u_{x_i}\right)^2}$$

Each partial derivative $\frac{\partial R}{\partial x_i}$ is a sensitivity coefficient that tells you how strongly the result depends on that particular input. Inputs with large sensitivity coefficients dominate the overall uncertainty.

Monte Carlo method: Instead of analytical derivatives, you randomly sample each input variable from its probability distribution thousands of times, compute the result each time, and build up a statistical distribution of the output. This approach handles nonlinear relationships and complex distributions more naturally than the Taylor series method.
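
A minimal sketch of both methods applied to dynamic pressure $q = \tfrac{1}{2}\rho V^2$ (the input values and their uncertainties are assumed example numbers):

```python
import numpy as np

# Sketch: uncertainty propagation for q = 0.5 * rho * V^2.
# Input values and their uncertainties are assumed example numbers.
rho, u_rho = 1.225, 0.010     # kg/m^3
V, u_V = 50.0, 0.5            # m/s

# Taylor series / RSS method: combine sensitivity-weighted input uncertainties
dq_drho = 0.5 * V**2          # sensitivity of q to density
dq_dV = rho * V               # sensitivity of q to velocity
u_q_rss = np.sqrt((dq_drho * u_rho) ** 2 + (dq_dV * u_V) ** 2)

# Monte Carlo method: sample the inputs, recompute q many times, look at the spread
rng = np.random.default_rng(3)
rho_s = rng.normal(rho, u_rho, 100_000)
V_s = rng.normal(V, u_V, 100_000)
q_s = 0.5 * rho_s * V_s**2
print(round(u_q_rss, 1), round(float(q_s.std()), 1))   # both ≈ 33 Pa
```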

Confidence intervals and hypothesis testing

  • A confidence interval gives a range likely to contain the true value. A 95% confidence interval means that if you repeated the experiment many times, about 95% of the intervals you'd compute would contain the true parameter.
  • Hypothesis testing evaluates a specific claim using your data. The null hypothesis ($H_0$) represents the default assumption (e.g., "there is no difference between these two configurations"). The alternative hypothesis ($H_a$) is what you're trying to demonstrate.
  • The p-value is the probability of seeing your data (or something more extreme) if $H_0$ were true. A small p-value (typically below $\alpha = 0.05$) leads you to reject $H_0$ in favor of $H_a$.
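
A brief sketch with SciPy (the repeated drag-count samples for the two configurations are assumed example data):

```python
from scipy import stats

# Sketch: two-sample t-test on repeated drag measurements for two configurations.
# The drag-count values below are assumed example data.
baseline = [251.2, 250.8, 251.5, 250.9, 251.1, 251.3]
modified = [249.8, 250.1, 249.5, 250.3, 249.9, 250.0]

t_stat, p_value = stats.ttest_ind(baseline, modified)
print(round(t_stat, 2), round(p_value, 5))
# A p-value below 0.05 would lead us to reject H0 ("no difference between configurations")
```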

Data visualization and interpretation

Visualization turns columns of numbers into something you can actually interpret. Good plots reveal trends, outliers, and relationships that are invisible in raw data tables.

Graphical representation of data

Different plot types serve different purposes:

  • Line plots show how a variable changes continuously (e.g., $C_L$ vs. angle of attack).
  • Scatter plots display discrete data points and are useful for spotting correlations or outliers.
  • Bar charts compare values across distinct categories or test configurations.
  • Contour plots and surface plots map a variable across two spatial dimensions, such as pressure coefficient distribution over an airfoil surface or velocity magnitude across a wake survey plane.

Trend identification and analysis

Once you've plotted the data, you need to characterize the trends quantitatively.

  • Linear regression fits a straight line ($y = mx + b$) and works well when the relationship between variables is approximately linear.
  • Nonlinear curve fitting uses polynomial, exponential, logarithmic, or other functions when the relationship is clearly curved.
  • Residual analysis checks how well your fit matches the data. Plot the residuals (measured minus fitted values) and look for patterns. Randomly scattered residuals suggest a good fit; systematic patterns mean your model is missing something.
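
A minimal sketch with NumPy combining a linear fit and a residual check (the lift-curve data points are assumed example values):

```python
import numpy as np

# Sketch: linear fit of CL vs. angle of attack plus a residual check.
# The data points below are assumed example values in the pre-stall linear range.
alpha_deg = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
CL = np.array([0.11, 0.33, 0.54, 0.77, 0.98, 1.19])

# Linear regression: CL = m * alpha + b
m, b = np.polyfit(alpha_deg, CL, deg=1)
fitted = m * alpha_deg + b
residuals = CL - fitted

print(round(m, 3), round(b, 3))        # lift-curve slope per degree and zero-alpha CL
print(np.round(residuals, 3))          # randomly scattered residuals -> good fit
```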

Comparison of experimental and theoretical results

Comparing your wind tunnel data to CFD predictions or analytical theory is one of the most valuable parts of experimental aerodynamics.

  • Overlay plots place experimental data points and theoretical curves on the same axes for direct visual comparison.
  • Difference plots show the discrepancy between experiment and theory at each data point, making systematic deviations easier to spot.
  • Quantitative metrics like root-mean-square error (RMSE) and the coefficient of determination ($R^2$) summarize the overall agreement in a single number.
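
A quick sketch of both metrics (the experimental and predicted values are assumed example data):

```python
import numpy as np

# Sketch: RMSE and coefficient of determination (R^2) between experiment and theory.
# The arrays below are assumed example values.
experiment = np.array([0.12, 0.35, 0.55, 0.78, 0.97, 1.18])
prediction = np.array([0.10, 0.33, 0.56, 0.75, 1.00, 1.21])

rmse = np.sqrt(np.mean((experiment - prediction) ** 2))
ss_res = np.sum((experiment - prediction) ** 2)
ss_tot = np.sum((experiment - experiment.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(round(rmse, 4), round(r_squared, 4))   # small RMSE and R^2 near 1 = good agreement
```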

Reporting and presenting findings

Your results are only useful if other people can understand and trust them.

  • Figures and tables should have clear labels, units, and descriptive captions.
  • Always include error bars or confidence intervals on your plots so readers can judge the reliability of each data point.
  • The discussion section should interpret results in context: How do they compare to previous studies? What are the limitations? What physical mechanisms explain the observed trends?
  • Conclusions should summarize the key findings, state their significance, and suggest directions for future work.