**Analog-to-Digital Converter (ADC or A/D Converter)**

The signals of the natural environment, such as voice, temperature, and light, are continuous-time and continuous-amplitude (analog) signals. Analog signals can take on an infinite number of states. The computers we use, however, are digital systems: digital signals have two states — on (1) or off (0). Analog signals therefore need to be converted into a digital representation that can be stored in a computer for further processing. An A/D Converter is a device that converts analog signals (usually voltages) obtained from environmental phenomena into digital format. In other words, an ADC digitizes an analog signal by converting data with infinite states into a series of pulses. The amplitudes of these pulses can only take a finite number of states.

**Figure 1**: Basic Function of ADC

ADCs have the following key characteristics:

- **Full scale input range**, or range, measured in Volts, is the range of voltages that can be applied at the input of the ADC.
- Output range, or **quantization**, measured in bits, is the number of bits in each output sample (typical values are 8, 12, 16, 24, and 32). It is common to say an "8-bit ADC", meaning an ADC with output samples that are 8 bits wide.
- **Sampling rate**, measured in Hz, is the rate at which the ADC generates its output samples.

**ADC Specifications**

**Full Scale Input Range**

The full scale input of an ADC is the largest signal amplitude that can be delivered to the converter before the signal is clipped in its digital output representation.

**Sampling Rate (*f*_{s})**

The sampling rate is the frequency, expressed in Hertz (Hz), at which the ADC samples the input analogue signal.

The Nyquist Theorem (or Shannon Theorem) states that an analog signal with a highest frequency of *f*_{a} MUST be sampled at a sampling rate *f*_{s} > 2 *f*_{a} to avoid loss of information.

- If *f*_{s} < 2 *f*_{a}, then a phenomenon called aliasing will occur in the analog signal bandwidth.
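The Nyquist criterion can be expressed as a quick check in Python (a minimal sketch; the function name is ours, not a standard API):

```python
def nyquist_ok(f_s, f_a):
    """True when the sampling rate satisfies f_s > 2 * f_a (no aliasing)."""
    return f_s > 2 * f_a

# Audio example: a 44.1 kHz sampling rate comfortably covers a 20 kHz signal.
print(nyquist_ok(44_100, 20_000))  # True
# Undersampling a 5 kHz signal at 8 kHz violates Nyquist, so aliasing occurs.
print(nyquist_ok(8_000, 5_000))    # False
```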

**Resolution**

There are two types of resolution in the A/D Conversion: **bit-resolution** and **voltage-resolution**.

- The bit-resolution of an ADC refers to the number of bits in the digital output code of the ADC.
- The voltage-resolution is the minimum change in input voltage which can be resolved by the ADC. It is the same as analog quantization size.

**Voltage-Resolution, (or Quantization, Least Significant bit Voltage)**

The voltage resolution represents the minimum change in input voltage that is required to guarantee a change in the output code level. It is also called the **least significant bit** voltage. The quantization of the ADC (*Q*) is equal to the voltage resolution (or the LSB voltage).

The voltage resolution of an ADC is equal to its overall voltage measurement range divided by the total number of digital output codes: *Resolution_{Voltage} = Q = V_{LSB} = (V_{Ref,max} - V_{Ref,min}) / 2^{n} = V_{FSR} / 2^{n}* -------- (1)

Suppose you are using an 8-bit ADC, and the voltage input range of the ADC is from 0 to 5 volts. An 8-bit digital value can represent 256 (2^{8} = 256) different numbers. In this example, the bit-resolution of the ADC is 8 bits, and it has a voltage-resolution of about 19.5 mV (5 V / 256).
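Equation 1 can be checked directly in Python (a minimal sketch; the helper name is ours, and the 8-bit / 0–5 V figures come from the example above):

```python
def voltage_resolution(v_ref_max, v_ref_min, n_bits):
    """Equation 1: V_LSB = (V_Ref,max - V_Ref,min) / 2^n."""
    return (v_ref_max - v_ref_min) / (2 ** n_bits)

# 8-bit ADC with a 0..5 V input range
lsb = voltage_resolution(5.0, 0.0, 8)
print(f"{lsb * 1000:.2f} mV")  # 19.53 mV
```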

**The Ideal Transfer Function for A/D Conversion**

The transfer function of an ADC is a plot of the voltage input to the ADC versus the codes output by the ADC. The horizontal axis represents a continuous analog input signal. The vertical axis shows the digital output codes, which can be thought of as levels for rounding off the analog input signal to its nearest digital equivalent. The full scale input range of the ADC is equally divided over the total number of digital output codes to transform the dashed red line into the staircase blue line shown in Figure 2.

**Figure 2**: Ideal Transfer Function of a 4-bit ADC

The number of bits is the number of binary digits in the digital output code used to represent the full scale analog signal. In this example, we have a 4-bit ADC, so N = 4, and the number of codes is 2^{4} = 16, which corresponds to counting from binary 0000 to 1111, or 0 to 15 in decimal. Here, we assume the output of the ADC is linear.

- **N**: the number of bits of the ADC
- **V**_{FSR}: the **F**ull **S**cale input **R**ange of the ADC
- **V**_{LSB}: the minimum resolvable voltage, or ADC voltage resolution

The full scale input range in this example is 2 V (0 to 2 V). The voltage resolution is often identified as the width of the least significant bit; it can be computed by taking the full scale range and dividing it by the total number of codes, or 2 to the N power. Therefore, the least significant bit width, or voltage resolution, can be calculated by dividing 2 V by 16 to get a resolution of 0.125 V.

The full scale range in this example is 2V, but the **maximum detectable input voltage** is the full scale range minus one LSB or 1.875V in this example.
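The staircase mapping and the maximum detectable input from the 4-bit / 2 V example can be sketched in Python (helper names are ours; this simple version rounds down, clipping at the top code):

```python
def adc_code(v_in, v_fsr, n_bits):
    """Ideal linear transfer function: map the input voltage to one of
    the 2^n output codes, clipping at the ends of the range."""
    levels = 2 ** n_bits
    v_lsb = v_fsr / levels
    code = int(v_in / v_lsb)          # round down to the step below
    return min(max(code, 0), levels - 1)  # clip to 0 .. 2^n - 1

n, v_fsr = 4, 2.0
v_lsb = v_fsr / 2 ** n
print(v_lsb)                    # 0.125  (V, resolution)
print(v_fsr - v_lsb)            # 1.875  (V, maximum detectable input)
print(adc_code(1.9, v_fsr, n))  # 15     (inputs above 1.875 V clip to the top code)
```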

**Math Conversion Equations**

The A/D Converter samples the analog signal and then provides a quantized digital code to represent the input signal. The digital output codes get post-processed and results can be reported to an operator who will use this information to make decisions and take actions. Therefore, it is important to correctly relate the digital codes back to the analog signals it represents.

In general, the ADC input voltage is related to the output code by a simple relationship, as shown in Equation 2:

*V_{m} = D_{out} × V_{LSB}* ---------- (2)

where *V_{m}* (V) is the ADC's input voltage, or the measured voltage; *D_{out}* is the ADC's digital output code in decimal format; and *V_{LSB}* is the voltage resolution, or the value of the least significant bit in the ADC code.

Equation 2 is a general equation that can work for any ADC. It doesn't matter if the ADC's output code is in straight binary or two's complement format, as long as the binary number is correctly converted to its equivalent decimal value.
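Relating codes back to voltages can be sketched in Python (function names are ours; the two's-complement helper shows the decimal conversion the text mentions):

```python
def code_to_voltage(d_out, v_lsb):
    """V_m = D_out * V_LSB: scale the decimal output code by the LSB voltage."""
    return d_out * v_lsb

def twos_complement_to_decimal(code, n_bits):
    """Interpret an n-bit raw code as a signed two's-complement value."""
    if code >= 2 ** (n_bits - 1):
        code -= 2 ** n_bits
    return code

v_lsb = 5.0 / 256                           # 8-bit ADC, 0..5 V range
print(code_to_voltage(128, v_lsb))          # 2.5
print(twos_complement_to_decimal(0xFF, 8))  # -1
```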

From here, it is necessary to look at specific ADC architectures in order to determine the best ADC for the job. This includes:

- SAR (Successive Approximation Register) ADCs
- Flash ADCs
- Delta-Sigma ADCs

## SAR ADC

The SAR ADC is the most commonly used architecture. It offers a smaller chip area and lower power consumption for analog-to-digital conversion. It is frequently the architecture of choice for medium-to-high-resolution applications with sample rates under 5 megasamples per second (MSPS). Bit-resolution for SAR ADCs most commonly ranges from 8 to 16 bits. These features make SAR ADCs especially suitable for the following applications:

- Wearable, handheld and sensor devices
- Magnetic card reader
- High-speed data collection
- Power meter
- Pulse oximeter

**SAR ADC Architecture**

The SAR ADC basically implements a binary search algorithm. Although there are many variations for implementing a SAR ADC, the basic architecture is simple. The SAR ADC circuit typically consists of four chief sub-circuits, as shown in Figure 3.

**Figure 3**: Simplified N-bit SAR ADC Architecture

- A sample and hold circuit to acquire the input analog voltage (*V*_{in}).
- An analog voltage comparator that compares *V*_{in} to the output of the internal DAC (*V*_{DAC}) and outputs the result of the comparison to the successive approximation register (SAR logic).
- A successive approximation register subcircuit designed to supply an approximate digital code for *V*_{in} to the internal DAC.
- An internal reference DAC that, for comparison with *V*_{in}, supplies the comparator with an analog voltage equal to the digital code output of the SAR logic.

**Basic Operation of the SAR ADC**

The SAR ADC does the following steps for each sample:

- The analog input voltage (*V*_{in}) is sampled and held.
- To implement the binary search algorithm, the N-bit register is first set with its MSB at 1 (that is, **1000 ... 00**_{2}). This sets the DAC output (*V*_{DAC}) to *V*_{ref} / 2, where *V*_{ref} is the reference voltage provided to the ADC.
- A comparison is then performed to determine if *V*_{in} is less than, or greater than, *V*_{DAC}.
  - If *V*_{in} is greater than *V*_{DAC}, the comparator output is a logic 1, and the MSB of the N-bit register remains at **1**.
  - Conversely, if *V*_{in} is less than *V*_{DAC}, the comparator output is a logic 0, and the MSB of the register is cleared to **0**.
- The SAR control logic then moves to the next bit down, forces that bit high, and does another comparison. The sequence continues all the way down to the LSB.
- Once all bits have been tested, the N-bit digital approximation is output at the end of the conversion (EOC).
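The binary-search steps above can be simulated directly in Python (a minimal sketch assuming an ideal internal DAC; the function name and test values are ours):

```python
def sar_convert(v_in, v_ref, n_bits):
    """Successive approximation: test one bit per cycle, MSB first."""
    code = 0
    for bit in range(n_bits - 1, -1, -1):
        code |= 1 << bit                     # force the current bit high
        v_dac = code * v_ref / 2 ** n_bits   # ideal internal DAC output
        if v_in < v_dac:                     # comparator: V_in vs. V_DAC
            code &= ~(1 << bit)              # clear the bit again
    return code

# 3-bit example with V_ref = 8 V, so each code step is 1 V
print(bin(sar_convert(6.5, 8.0, 3)))  # 0b110
```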

**Figure 4**: SAR Operation (3-bit ADC Example)

**Figure 4** shows an example of a 3-bit conversion. The y-axis represents the *V*_{DAC} voltage. Let us focus on *V*_{in1} = 6.25 V in the example. The first comparison sets the MSB of the register, giving **100**_{2}, and the comparison shows that *V*_{in1} < (*V*_{DAC} = *V*_{ref} / 2). Thus, bit 2 is set to 0. The register is then set to **010**_{2} and the second comparison is performed. As *V*_{in1} < (*V*_{DAC} = *V*_{ref} / 4), bit 1 is set to 0, and the register is then set to **001**_{2} for the final comparison. Finally, bit 0 is set to 0 because *V*_{in1} < (*V*_{DAC} = *V*_{ref} / 8). The digital result for *V*_{in1} is **000**_{2}.

Notice that this 3-bit ADC requires three comparison cycles. In general, an N-bit SAR ADC will require N comparison cycles and will not be ready for the next conversion until the current one is complete. This explains why these ADCs are power- and space-efficient.