🦾Biomedical Engineering I Unit 8 Review

8.4 Neural Networks and Brain Modeling

Written by the Fiveable Content Team • Last updated August 2025

Biological Neural Networks

Neural networks and brain modeling give biomedical engineers the tools to simulate how the brain processes information, from individual neuron firing to large-scale cognitive functions. These computational approaches are central to understanding neurological disorders and designing therapies like deep brain stimulation and brain-machine interfaces.

Structure and Function of Neurons

Neurons are the fundamental units of the nervous system. Each neuron has three main parts:

  • Cell body (soma): Contains the nucleus and integrates incoming signals
  • Dendrites: Branch-like extensions that receive signals from other neurons and relay them to the cell body
  • Axon: A long fiber that carries electrical signals away from the cell body to other neurons or effector cells (muscles, glands)

When a neuron receives enough excitatory input, the electrical potential across its membrane reaches a threshold. At that point, an action potential fires and propagates along the axon. This is an all-or-nothing event: the neuron either fires fully or doesn't fire at all.

Synaptic Transmission and Plasticity

Synapses are the junctions between neurons where information passes from one cell to the next. The process works like this:

  1. An action potential arrives at the presynaptic terminal
  2. Neurotransmitters are released into the synaptic cleft
  3. These neurotransmitters bind to receptors on the postsynaptic neuron
  4. The postsynaptic neuron's electrical potential changes accordingly

Neurotransmitters can be excitatory (increasing the likelihood of firing) or inhibitory (decreasing it). This balance of excitation and inhibition shapes how neural circuits process information.

Synaptic plasticity refers to the ability of synapses to strengthen or weaken over time. It's the biological basis for learning and memory:

  • Long-term potentiation (LTP): Repeated stimulation strengthens a synapse, making signal transmission more efficient
  • Long-term depression (LTD): Repeated low-frequency stimulation (or prolonged disuse) weakens a synapse, making signal transmission less efficient

Neural Network Organization and Connectivity

The brain is organized hierarchically, with different regions specialized for specific functions. The cerebral cortex, for example, contains distinct areas like the visual cortex, auditory cortex, and motor cortex.

Three types of connections link neural populations:

  • Feedforward connections transmit information from lower to higher levels of the hierarchy, enabling extraction of increasingly complex features
  • Feedback connections transmit information from higher to lower levels, allowing top-down modulation and attentional control
  • Lateral connections link populations within the same level, enabling coordination

These connectivity patterns determine how information flows through the brain and how complex behaviors like perception, decision-making, and motor control emerge.

Computational Models of Neural Activity

Spiking Neuron Models

Two major classes of models simulate how individual neurons fire:

Integrate-and-fire models are the simpler approach. They track how a neuron integrates incoming synaptic inputs over time, then generate a spike when the membrane potential crosses a threshold. These models capture the essential firing dynamics while staying computationally efficient.

Hodgkin-Huxley models provide a much more detailed, biophysically realistic description. They explicitly model the ionic currents (sodium and potassium) and their voltage-dependent gating mechanisms that produce action potentials. The trade-off is that they're significantly more computationally expensive.

The choice between these models depends on your goal: use integrate-and-fire for large network simulations, and Hodgkin-Huxley when biophysical accuracy at the single-neuron level matters.
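As a concrete sketch, a leaky integrate-and-fire neuron can be simulated in a few lines. The membrane parameters below (resting potential, threshold, reset value, time constant, input resistance) are illustrative choices, not values from this guide:

```python
# Minimal leaky integrate-and-fire sketch (illustrative parameter values).
def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0, r=10.0):
    """Integrate an input-current trace; return membrane trace and spike times."""
    v = v_rest
    trace, spikes = [], []
    for step, i_in in enumerate(input_current):
        # Leaky integration: dV/dt = (-(V - V_rest) + R*I) / tau
        v += dt * (-(v - v_rest) + r * i_in) / tau
        if v >= v_thresh:          # threshold crossing -> all-or-nothing spike
            spikes.append(step * dt)
            v = v_reset            # membrane resets after firing
        trace.append(v)
    return trace, spikes

trace, spikes = simulate_lif([2.0] * 1000)   # constant input for 100 ms
```

With a constant suprathreshold input the model fires periodically, which is the essential behavior the full Hodgkin-Huxley equations reproduce at much greater computational cost.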

Synaptic Plasticity Models

These models capture how synaptic strength changes based on neural activity patterns.

The Hebbian learning rule is the classic formulation: synapses strengthen when the presynaptic and postsynaptic neurons are active at the same time. The shorthand is "neurons that fire together, wire together."

Spike-timing-dependent plasticity (STDP) refines this idea by considering the precise timing of spikes:

  • If the presynaptic neuron fires before the postsynaptic neuron (a causal relationship), the synapse strengthens
  • If the order is reversed (acausal), the synapse weakens

The magnitude of the change depends on the time interval between the two spikes. STDP models are essential for understanding how learning and memory formation work at the circuit level.
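The timing rule is often written as a pair of exponentials in the spike-time difference. A minimal sketch, where the amplitudes `a_plus`, `a_minus` and the time constant `tau_ms` are illustrative choices:

```python
import math

# Exponential STDP window (parameter values are illustrative, not canonical).
def stdp_delta_w(dt_ms, a_plus=0.1, a_minus=0.12, tau_ms=20.0):
    """Weight change for spike-time difference dt = t_post - t_pre (ms)."""
    if dt_ms > 0:    # pre fires before post: causal, potentiation
        return a_plus * math.exp(-dt_ms / tau_ms)
    elif dt_ms < 0:  # post fires before pre: acausal, depression
        return -a_minus * math.exp(dt_ms / tau_ms)
    return 0.0
```

Note that the change decays toward zero as the interval grows, so only near-coincident spikes modify the synapse appreciably.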

Population and Brain Region Models

Modeling every individual neuron in a brain region isn't always practical. Population-level models describe the collective behavior of groups of neurons instead.

  • Wilson-Cowan equations and neural mass models track average firing rates of neural populations and their excitatory-inhibitory interactions. These are used to study cortical columns, thalamo-cortical loops, and large-scale brain networks.
  • Hopfield networks are fully connected recurrent networks that store and retrieve patterns as stable attractor states. Think of each stored memory as a "valley" in an energy landscape that the network settles into.
  • Attractor networks generalize this idea to multiple stable states representing different memories or cognitive states. The network can transition between states based on inputs and perturbations.
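The Hopfield idea can be made concrete: store a pattern with the Hebbian outer-product rule, then let a corrupted copy settle back into the stored attractor. The pattern and network size below are made up for illustration:

```python
# Tiny Hopfield network: Hebbian storage and iterative retrieval.
# States are lists of +1/-1 values; the 6-unit pattern is illustrative.
def train_hopfield(patterns):
    n = len(patterns[0])
    # Hebbian outer-product rule with zero diagonal (no self-connections)
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, state, sweeps=5):
    n = len(state)
    s = list(state)
    for _ in range(sweeps):
        for i in range(n):
            h = sum(w[i][j] * s[j] for j in range(n))
            s[i] = 1 if h >= 0 else -1   # settle toward a stored attractor
    return s

pattern = [1, 1, -1, -1, 1, -1]
w = train_hopfield([pattern])
noisy = [1, -1, -1, -1, 1, -1]           # one bit flipped
```

Starting from the noisy state, the network's updates roll it back into the "valley" corresponding to the stored memory.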

Neural Network Models for Cognition

Feedforward and Convolutional Neural Networks

Feedforward neural networks model pattern recognition and classification in the brain. A perceptron (single-layer network) can classify linearly separable patterns. Multilayer perceptrons add hidden layers, enabling the network to learn complex, nonlinear mappings between inputs and outputs.
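A perceptron learning the linearly separable AND function illustrates the single-layer case. The learning rate and epoch count are arbitrary illustrative choices:

```python
# Single perceptron trained on the linearly separable AND function.
def train_perceptron(samples, lr=0.1, epochs=20):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            y = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - y
            # Perceptron rule: nudge weights in the direction that fixes the error
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
predict = lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
```

A pattern like XOR is not linearly separable, which is exactly why hidden layers are needed for more complex mappings.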

Convolutional neural networks (CNNs) are directly inspired by the visual cortex's hierarchical organization. They use two key layer types:

  • Convolutional layers detect local features (edges, textures) using learned filters
  • Pooling layers reduce spatial dimensions and provide translation invariance (recognizing a feature regardless of its exact position)

CNNs have achieved strong performance in image classification, object detection, and facial recognition, and they serve as useful models for how the visual system extracts and represents features.
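The two layer types can be sketched in plain Python: a valid-mode 2D convolution followed by 2x2 max pooling. The edge-detecting filter and tiny image are illustrative:

```python
# Valid-mode 2D convolution and 2x2 max pooling on nested lists,
# mirroring convolutional and pooling layers (filter values illustrative).
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

def max_pool2(fmap):
    # 2x2 non-overlapping pooling: keep the strongest local response
    return [[max(fmap[i][j], fmap[i][j + 1], fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

# Vertical-edge detector applied to an image with a step edge down the middle
image = [[0, 0, 1, 1]] * 4
edge_kernel = [[-1, 1], [-1, 1]]
fmap = conv2d(image, edge_kernel)
```

The feature map responds only where the edge sits, and pooling then reports that the edge is present without caring about its exact position, which is the translation-invariance idea.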

Recurrent Neural Networks and Temporal Processing

Recurrent neural networks (RNNs) have connections that loop back on themselves, allowing information to persist over time. This makes them suitable for modeling sequence processing, language understanding, and working memory.

Standard RNNs struggle with long sequences because gradients can vanish during training. Long short-term memory (LSTM) networks solve this with gating mechanisms that control what information to store, update, or discard. LSTMs have been successful in language modeling, machine translation, and speech recognition.

From a neuroscience perspective, RNNs can model working memory dynamics, where information is actively maintained and manipulated over short time scales.
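A minimal recurrent update shows how state persists across time steps. The scalar weights here are chosen by hand rather than learned:

```python
import math

# Minimal recurrent step: the hidden state carries information across time.
def rnn_step(x, h, w_in, w_rec, b):
    """h_t = tanh(w_in * x_t + w_rec * h_{t-1} + b), scalar for clarity."""
    return math.tanh(w_in * x + w_rec * h + b)

def run_rnn(inputs, w_in=1.0, w_rec=0.5, b=0.0):
    h = 0.0
    history = []
    for x in inputs:
        h = rnn_step(x, h, w_in, w_rec, b)   # state persists between steps
        history.append(h)
    return history

hs = run_rnn([1.0, 0.0, 0.0])   # one pulse, then silence
```

After the input pulse ends, the hidden state stays nonzero but decays with each step, a toy version of both working memory and the vanishing-gradient problem that LSTM gating addresses.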

Reinforcement Learning and Decision Making

Reinforcement learning (RL) models simulate reward-based learning, particularly the processes occurring in the basal ganglia and dopaminergic systems.

Temporal difference learning methods (like Q-learning and SARSA) work by:

  1. Predicting the expected future reward for a given state-action pair
  2. Observing the actual reward received
  3. Computing a prediction error (the difference between predicted and actual reward)
  4. Using that error to update value estimates and guide future action selection

This prediction error maps closely onto the phasic firing of dopamine neurons observed experimentally.
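The four steps above can be sketched as tabular Q-learning on a toy one-dimensional corridor task. The environment, reward, and hyperparameters are invented for illustration:

```python
import random

# Tabular Q-learning on a toy 1D corridor: states 0..4, reward at state 4.
# The environment, reward, and hyperparameters are illustrative assumptions.
def q_learning(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    actions = (-1, +1)                        # move left / move right
    q = {(s, a): 0.0 for s in range(5) for a in actions}
    for _ in range(episodes):
        s = 0
        while s != 4:                         # episode ends at the goal state
            # epsilon-greedy: mostly exploit current values, sometimes explore
            if rng.random() < epsilon:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda act: q[(s, act)])
            s_next = min(max(s + a, 0), 4)
            r = 1.0 if s_next == 4 else 0.0   # step 2: observe actual reward
            # step 1 + 3: predicted future value and the prediction error
            best_next = 0.0 if s_next == 4 else max(q[(s_next, b)] for b in actions)
            td_error = r + gamma * best_next - q[(s, a)]
            q[(s, a)] += alpha * td_error     # step 4: update the value estimate
            s = s_next
    return q

q = q_learning()
```

After training, rightward moves (toward the reward) carry higher values than leftward moves at every state, and the `td_error` term is the quantity that maps onto phasic dopamine firing.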

Actor-critic methods separate the system into two components:

  • The critic learns value functions (how good is this state?)
  • The actor learns action selection policies (what should you do?)

This mirrors the roles of the ventral and dorsal striatum in the basal ganglia. RL models explain phenomena like reward prediction, temporal discounting, and exploration-exploitation trade-offs.

Attentional Mechanisms

Attention can be modeled as a gating mechanism that selectively enhances or suppresses neural population activity based on task relevance.

  • Top-down attention: Signals from higher cortical areas (like the prefrontal cortex) modulate sensory areas to prioritize task-relevant information
  • Spatial attention: Models simulate how attentional resources are allocated to different locations in the visual field, consistent with the "spotlight of attention" phenomenon
  • Feature-based attention: Models capture selective enhancement of specific features (color, orientation) relevant to the current task

These mechanisms help explain how the brain filters distractions and supports goal-directed behavior, and they've inspired attention modules in modern artificial neural network architectures.
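One simple formalization of gating, loosely echoing the attention modules mentioned above, is a softmax over top-down relevance scores that rescales sensory responses. The scores and gain are illustrative assumptions, not a model from this guide:

```python
import math

# Softmax gating sketch: amplify task-relevant channels, suppress the rest.
# The "relevance" scores stand in for top-down signals and are illustrative.
def softmax(scores):
    exps = [math.exp(s - max(scores)) for s in scores]  # shift for stability
    total = sum(exps)
    return [e / total for e in exps]

def attend(responses, relevance, gain=2.0):
    """Scale each sensory response by a gate derived from relevance."""
    gates = softmax([gain * r for r in relevance])
    return [resp * g for resp, g in zip(responses, gates)]

# Equal sensory responses, but attention favors the second channel
attended = attend([1.0, 1.0, 1.0], [0.0, 1.0, 0.0])
```

Identical inputs come out unequal: the attended channel is enhanced relative to the distractors, which is the filtering behavior described above.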

Brain Modeling for Neurological Disorders

Identifying Neural Mechanisms and Circuitry

Computational models can simulate the pathological processes underlying major neurological disorders:

  • Alzheimer's disease: Models incorporate the accumulation of amyloid-beta plaques and tau tangles, simulating their impact on synaptic function, neuronal survival, and cognitive decline
  • Parkinson's disease: Models simulate the degeneration of dopaminergic neurons in the substantia nigra and the resulting impairments in motor control and reinforcement learning
  • Epilepsy: Models capture the abnormal synchronization and spatial spreading of neural activity that drives seizure generation and propagation

By reproducing disease-related changes computationally, researchers can test hypotheses about which circuit-level disruptions cause specific symptoms.

Developing Novel Therapies and Interventions

Neural network models help design and optimize therapeutic approaches:

  • Pharmacological modeling: Simulating the effects of drugs targeting specific neurotransmitter systems (dopamine, serotonin) or receptor types (NMDA, GABA) on neural dynamics and behavior
  • Deep brain stimulation (DBS): Models optimize electrode placement and stimulation parameters for disorders like Parkinson's disease and OCD
  • Non-invasive stimulation: Transcranial magnetic stimulation (TMS) and transcranial direct current stimulation (tDCS) models guide protocol design for treating depression, anxiety, and other psychiatric conditions
  • Brain-machine interfaces and neuroprosthetics: Neural network models help restore sensory, motor, or cognitive functions in individuals with neurological impairments

Computational models also support the design of closed-loop systems that adapt stimulation parameters in real time based on monitored neural activity and clinical outcomes.

Personalized Brain Models and Targeted Therapies

Personalized models incorporate individual-specific data to tailor treatment strategies. The process typically involves:

  1. Acquiring neuroimaging data: diffusion tensor imaging (DTI) for structural connectivity and functional MRI (fMRI) for functional networks
  2. Constructing individualized connectivity matrices from this data
  3. Simulating disease-specific processes (neurodegeneration, seizure spread) on the patient's own network
  4. Identifying critical nodes or pathways for targeted intervention

These models can predict how pathology will spread in a specific patient and how that patient might respond to different treatments. By integrating multi-modal neuroimaging, genetic information, and clinical assessments, personalized brain models support precision medicine approaches: selecting optimal treatment strategies, drug dosages, and stimulation parameters for each individual to improve outcomes and minimize side effects.
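Step 3 of the pipeline, simulating spread on an individual connectivity matrix, can be sketched as linear diffusion driven by a graph Laplacian. The four-node "connectome" below is a made-up example, not patient data:

```python
# Toy network-diffusion model of pathology spread on a connectivity matrix.
# The 4-node chain "connectome" is a made-up illustrative example.
def spread(conn, seed_node, steps=50, rate=0.05):
    """Euler-integrate dx/dt = -rate * L x, with L the graph Laplacian."""
    n = len(conn)
    degree = [sum(row) for row in conn]
    x = [0.0] * n
    x[seed_node] = 1.0                    # pathology seeded in one region
    for _ in range(steps):
        # Laplacian action: (L x)_i = deg_i * x_i - sum_j conn_ij * x_j
        lap_x = [degree[i] * x[i] - sum(conn[i][j] * x[j] for j in range(n))
                 for i in range(n)]
        x = [x[i] - rate * lap_x[i] for i in range(n)]
    return x

# Chain connectome: 0 - 1 - 2 - 3 (symmetric binary connections)
conn = [[0, 1, 0, 0],
        [1, 0, 1, 0],
        [0, 1, 0, 1],
        [0, 0, 1, 0]]
load = spread(conn, seed_node=0)
```

Total pathology load is conserved while it redistributes along the connections, so regions closest to the seed accumulate the most, which is the kind of patient-specific prediction these models aim for.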