Number Systems and Codes
- Number Bases: An integer in base $r$ has digits from $0$ to $r-1$ and is evaluated as $\sum_i d_i\, r^i$ (with fractional part $\sum_j d_j\, r^{-j}$). Common bases include binary (base 2), octal (8), decimal (10), and hexadecimal (16). Each hex digit corresponds to 4 binary bits, and each octal digit corresponds to 3 binary bits.
- Conversions: To convert a decimal number to base $r$, repeatedly divide the integer part by $r$ (the remainders give the digits in reverse order) and repeatedly multiply the fractional part by $r$ (the integer parts give the fractional digits in order). To convert from base $r$ to decimal, expand using positional weights (e.g. $1011_2 = 1\cdot2^3 + 0\cdot2^2 + 1\cdot2^1 + 1\cdot2^0 = 11_{10}$). For binary–hex or binary–octal conversions, group bits in 4s or 3s respectively.
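A minimal sketch of the repeated-division and positional-expansion methods just described (helper names are illustrative, not from any particular library):

```python
def decimal_to_base(n: int, r: int) -> str:
    """Convert a non-negative integer n to a base-r digit string (2 <= r <= 16)."""
    digits = "0123456789ABCDEF"
    if n == 0:
        return "0"
    out = []
    while n > 0:
        n, rem = divmod(n, r)      # remainder = next digit (least significant first)
        out.append(digits[rem])
    return "".join(reversed(out))  # remainders come out in reverse order

def base_to_decimal(s: str, r: int) -> int:
    """Expand a base-r digit string using positional weights."""
    return sum(int(d, 16) * r**i for i, d in enumerate(reversed(s)))

print(decimal_to_base(11, 2))        # '1011'
print(base_to_decimal("1011", 2))    # 11
print(decimal_to_base(254, 16))      # 'FE'
```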
- Binary Arithmetic: Binary addition and subtraction follow the same principles as decimal. For subtraction, it’s common to use the two’s complement method: to subtract $B$ from $A$, take the 2’s complement of $B$ and add to $A$ (discard any final carry). Overflow in two’s complement addition is indicated by a carry into the sign bit that differs from the carry out.
- Complements: For a base-$r$ number with $n$ digits: the $(r-1)$’s complement is obtained by subtracting each digit from $r-1$ (for binary, this is the one’s complement: invert all bits). The $r$’s complement is obtained by adding 1 to the $(r-1)$’s complement (for binary, this is the two’s complement). For example, in 8-bit binary, $x$’s one’s complement is $\tilde{x}$ (bitwise NOT), and $x$’s two’s complement is $\tilde{x} + 1$. The range of an $n$-bit two’s complement number is $[-2^{n-1},\; 2^{n-1}-1]$ (Chapter 1a: binary, hex, Two’s Complement numbers | Postulate). (E.g. for 4 bits: $-8$ to $+7$.) In two’s complement, a quick way to form $-N$ is to invert all bits of $N$ and add 1.
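A sketch of n-bit one's/two's-complement manipulation using plain Python integers (function names are illustrative):

```python
def ones_complement(x: int, n: int = 8) -> int:
    """Invert all n bits of x."""
    return x ^ ((1 << n) - 1)

def twos_complement(x: int, n: int = 8) -> int:
    """Two's complement = one's complement + 1 (kept to n bits)."""
    return (ones_complement(x, n) + 1) & ((1 << n) - 1)

def to_signed(x: int, n: int = 8) -> int:
    """Interpret an n-bit pattern as a signed two's-complement value."""
    return x - (1 << n) if x & (1 << (n - 1)) else x

print(bin(twos_complement(0b0000_0110)))   # -6 -> 0b11111010
print(to_signed(0b1111_1010))              # -6
print(to_signed(0b1000_0000))              # -128 (most negative 8-bit value)
```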
- BCD and Other Codes: Binary-Coded Decimal (BCD) uses 4 bits to encode decimal 0–9 (weights 8-4-2-1). BCD codes 0000–1001 are valid (0–9) while 1010–1111 are invalid. Excess-3 code is an unweighted self-complementing code obtained by adding 3 to the decimal digit (e.g. decimal 5 → BCD 0101 → Excess-3 1000). In self-complementing codes like Excess-3, the 9’s complement of a decimal digit is just the bitwise complement of its code. Gray code (reflected code) is a binary code in which consecutive numbers differ in exactly one bit. To convert binary to Gray: keep the MSB the same, and make each following Gray bit the XOR of the corresponding binary bit with the binary bit to its left. To convert Gray to binary: keep the MSB the same, and make each following binary bit the XOR of the previously computed binary bit with the current Gray bit (equivalently, the XOR of all Gray bits from the MSB down to that position). Example – Binary 1001 (9) → Gray 1101; Gray 1101 → Binary 1001. Gray codes are useful for minimizing errors in analog-to-digital conversions (since only one bit changes at a time); see the conversion sketch after the next item.
- Signed Representations: Besides two’s complement, other representations include sign-magnitude (MSB is sign, others magnitude; has dual +0/–0) and one’s complement (also has +0/–0). Two’s complement is most common for integer arithmetic because it has a unique zero and addition/subtraction circuits are simpler. In sign-magnitude, the range is $[-(2^{n-1}-1),\; 2^{n-1}-1]$ (e.g. 4-bit sign-magnitude: $-7$ to $+7$, with two zeros).
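A sketch of the binary↔Gray conversions described in the Gray-code item above, operating on Python ints (helper names are illustrative):

```python
def binary_to_gray(b: int) -> int:
    # Each Gray bit is the XOR of a binary bit with the binary bit to its left;
    # shifting right by one and XOR-ing does this for every bit at once.
    return b ^ (b >> 1)

def gray_to_binary(g: int) -> int:
    # Each binary bit is the XOR of all Gray bits from the MSB down to it,
    # so reconstruct by repeatedly XOR-ing in the shifted Gray value.
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

print(format(binary_to_gray(0b1001), "04b"))   # '1101'  (9 -> Gray)
print(format(gray_to_binary(0b1101), "04b"))   # '1001'  (Gray -> 9)
```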
- IEEE 754 Floating-Point: Real numbers are represented in normalized binary scientific notation: $\pm 1.\text{fraction} \times 2^{\text{exponent}}$. A 32-bit single-precision float (binary32) has 1 sign bit, 8 exponent bits, and 23 fraction bits (IEEE Standard 754 Floating Point Numbers | GeeksforGeeks). The stored exponent is biased by +127. A 64-bit double-precision float (binary64) has 1 sign bit, 11 exponent bits, and 52 fraction bits (bias +1023) (IEEE Standard 754 Floating Point Numbers | GeeksforGeeks). The value is $(-1)^{\text{sign}} \times 1.\text{fraction}_2 \times 2^{(\text{exponent}-\text{bias})}$. Special cases: an exponent field of all 1s (255 in single precision) with fraction 0 denotes $\pm\infty$; exponent all 1s with fraction ≠ 0 denotes NaN (not a number); an exponent field of all 0s denotes either 0 (if the fraction is 0) or a denormalized number (leading 0.fraction with effective exponent $1-\text{bias}$, used to represent numbers just above 0). For example, in single precision: stored exponent 127 represents actual exponent 0; the smallest normalized numbers use exponent field 1, giving a scale of $2^{-126}$ (with the leading 1.fraction assumed); an exponent field of 0 with non-zero fraction bits denotes a denormal, which also uses the $2^{-126}$ scale but with a leading 0.fraction (Single-precision floating-point format – Wikipedia). To convert a decimal to IEEE format: convert to binary, normalize to 1.x form, determine the actual exponent $E$, store the biased exponent $E+127$ (for single precision), and encode the fraction bits. (Conversion from IEEE form back to decimal extracts the sign bit, the exponent (subtract the bias), and the fraction.)
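A quick way to inspect the sign / exponent / fraction fields of a single-precision value is to look at the raw bit pattern; a sketch using Python's standard `struct` module (helper name is illustrative):

```python
import struct

def decompose_float32(x: float):
    bits = struct.unpack(">I", struct.pack(">f", x))[0]   # raw 32-bit pattern
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF        # stored (biased) exponent, 8 bits
    fraction = bits & 0x7FFFFF            # 23 fraction bits
    return sign, exponent, fraction

s, e, f = decompose_float32(-6.25)        # -6.25 = -1.5625 * 2^2
print(s, e, f)                            # 1, 129 (= 2 + 127), fraction bits of .5625
print(f / 2**23)                          # 0.5625 -> value = -(1 + 0.5625) * 2^(129-127)
```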
Boolean Algebra and Logic Gates
- Boolean Algebra Basics: Binary logic deals with two values: 0 (False) and 1 (True). The primary operations are AND (·), OR (+), and NOT (‾ or ’). A literal is a variable or its complement. Key Boolean algebra laws (identities) include:
- Identity Law: $A + 0 = A,$ $A \cdot 1 = A.$
- Null (Dominance) Law: $A + 1 = 1,$ $A \cdot 0 = 0.$
- Idempotent Law: $A + A = A,$ $A \cdot A = A.$
- Complement Law: $A + A’ = 1,$ $A \cdot A’ = 0$ (a variable ORed with its complement is 1; ANDed with complement is 0).
- Commutative Law: $A + B = B + A,$ $A \cdot B = B \cdot A.$
- Associative Law: $A + (B + C) = (A + B) + C,$ $A \cdot (B \cdot C) = (A \cdot B) \cdot C.$
- Distributive Law: $A \cdot (B + C) = A\,B + A\,C,$ $\quad A + (B \cdot C) = (A + B) \cdot (A + C).$
- Absorption Law: $A + (A \cdot B) = A,$ $\quad A \cdot (A + B) = A.$
- Double Negation: $(A’)’ = A.$
- De Morgan’s Laws: $(A \cdot B)’ = A’ + B’,$ $;(A + B)’ = A’ \cdot B’$. These are useful for simplifying logic and for finding the inverse of conditions.
- Consensus (Redundancy) Theorem: $A B + A’ C + B C = A B + A’ C.$ This shows that a redundant product term can be eliminated (for example, the term $B C$ is redundant if $A B$ and $A’ C$ are present). This helps identify and remove unnecessary terms in a sum-of-products expression.
- Duality Principle: Every Boolean identity has a dual obtained by swapping AND ↔ OR and swapping 0 ↔ 1. If an expression $F$ is true in Boolean algebra, its dual $F^D$ is also true. For example, the dual of $A + 0 = A$ is $A \cdot 1 = A$. This principle often allows deriving one form from another.
- Minterms and Maxterms: A minterm is an AND-term that includes all variables either in true or complemented form (e.g. $A \cdot B’ \cdot C$ in a 3-variable system); it corresponds to exactly one truth-table row where the function is 1. A sum-of-products (SOP) expression is a sum (OR) of minterms (or of product terms that need not be complete minterms but that cover the input combinations where the function is 1). Dually, a maxterm is an OR-term of all variables (e.g. $A + B’ + C$) corresponding to one false output row, and a product-of-sums (POS) is an AND of maxterms. Canonical forms refer to expressing a Boolean function as either a sum of minterms (canonical SOP) or a product of maxterms (canonical POS). Any Boolean function of $n$ variables can be written as an SOP of up to $2^n$ minterms or a POS of up to $2^n$ maxterms. In simplification, we often seek a minimal SOP or POS that covers all required minterms or maxterms with as few product/sum terms as possible.
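A sketch that enumerates the minterms of a small example function from its truth table and prints the canonical SOP (the function and variable names are illustrative):

```python
from itertools import product

def f(a, b, c):                     # example function: F = A'C + AB
    return (not a and c) or (a and b)

names = ("A", "B", "C")
minterms = []
for i, (a, b, c) in enumerate(product((0, 1), repeat=3)):   # rows 0..7 in binary order
    if f(a, b, c):
        minterms.append(i)

def minterm_literal(index):
    bits = format(index, "03b")
    return "".join(v if bit == "1" else v + "'" for v, bit in zip(names, bits))

print("F = Σm", minterms)                                  # Σm(1, 3, 6, 7)
print(" + ".join(minterm_literal(m) for m in minterms))    # A'B'C + A'BC + ABC' + ABC
```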
- Logic Gates: Basic logic gates implement Boolean operations:
- AND gate: output is 1 only if all inputs are 1 ($Y = A \cdot B \cdot …$). Truth: 1·1=1, otherwise 0.
- OR gate: output is 1 if any input is 1 ($Y = A + B + …$). Truth: 0+0=0, otherwise 1.
- NOT gate (Inverter): output is the logical complement of the input ($Y = \bar{A}$). Flips 1↦0, 0↦1.
- NAND gate: output is NOT AND: $Y = \overline{A \cdot B}$. It gives 0 only when all inputs are 1; otherwise output is 1.
- NOR gate: output is NOT OR: $Y = \overline{A + B}$. It gives 1 only when all inputs are 0.
- XOR gate (Exclusive-OR): output is 1 if an odd number of inputs are 1. For two inputs, $Y = A \oplus B = A’B + A B’$ (true when exactly one of $A,B$ is 1).
- XNOR gate (Exclusive-NOR): output is 1 if an even number of inputs are 1 (including zero 1’s); for two inputs, $Y = \overline{A \oplus B}$, which is true when inputs are equal.
- Functional Completeness (Universality): A set of logic gates is functionally complete if any Boolean function can be implemented using only those gates (Functional completeness – Wikipedia). The basic set {AND, OR, NOT} is complete. NAND and NOR are universal gates: each alone is functionally complete (e.g. any circuit can be built using just NAND gates or just NOR gates). In practice, {NAND} or {NOR} can produce AND, OR, and NOT via appropriate wiring, so digital systems often use NAND/NOR gates extensively. Other gate combinations that include a way to get both complementation and an AND/OR function are also complete (for example {OR, NOT} or {AND, NOT} are complete sets). Conversely, a set lacking the ability to invert or to form an AND/OR combination is incomplete – for instance, {AND, OR} alone is not complete because you cannot produce a NOT operation (Functional completeness – Wikipedia).
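A small sketch of why {NAND} is functionally complete: NOT, AND, OR and XOR built from a single NAND primitive.

```python
def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def not_(a):        return nand(a, a)                # NOT A = A NAND A
def and_(a, b):     return not_(nand(a, b))          # AND = NOT(NAND)
def or_(a, b):      return nand(not_(a), not_(b))    # De Morgan: A + B = (A'·B')'
def xor_(a, b):                                      # classic 4-NAND XOR
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", and_(a, b), or_(a, b), xor_(a, b))
```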
- Shortcut Boolean Techniques: Sometimes expressions can be simplified by recognizing XOR or XNOR patterns and using their properties (e.g. $A \oplus B \oplus B = A$ because $B\oplus B = 0$). Also, look for opportunities to apply absorption or redundancy theorems to eliminate terms quickly without full algebraic reduction. Boolean algebra is often used in proof or simplification questions – having these identities memorized and recognizing patterns (like $X + \bar{X}Y = X + Y$ which is a form of absorption) can save time in exams.
Logic Simplification and Karnaugh Maps
- Karnaugh Map (K-map): A K-map is a visual grid-based method for simplifying Boolean functions up to about 5-6 variables. It has $2^n$ cells for $n$ variables arranged so that adjacent cells differ in only one variable (Gray code ordering for cells). For example, a 3-variable K-map is a 2×4 grid and a 4-variable K-map is a 4×4 toroidal grid. Each cell corresponds to a minterm of the input variables. The map is filled in with 1’s (for minterms where the function is 1), 0’s (for where the function is 0), and “–” for don’t care conditions (inputs that never occur or whose output can be either 0 or 1 without affecting the function’s actual use). Adjacent cells (including wrap-around adjacency at edges, since the map is considered cyclically adjacent on left-right and top-bottom edges) can be grouped to form simplified product terms (Karnaugh Maps – Rules of Simplification).
- Grouping Rules: Groups (sometimes called loops) must contain 1, 2, 4, 8, … (powers of 2) cells of value 1 (or “1/–” where – are treated as 1 for grouping). Valid groups can wrap around the edges of the map and may overlap if it leads to larger groups. The goal is to cover all the 1-cells with the fewest large groups, favoring the largest possible powers-of-2 groupings (Karnaugh Maps – Rules of Simplification). Key rules for forming groups: Every 1 should be covered at least once, and groups should be as large as possible (each group eliminates as many variables as possible). A single isolated 1 (not adjacent to any other 1) will be alone in a group of 1. A pair of adjacent 1’s forms a group of 2. 4 adjacent 1’s in a rectangular or wrap-around configuration form a group of 4, and so on. (An “octet” group of 8 eliminates 3 variables, a quad eliminates 2, a pair eliminates 1 variable in the simplified term.) Also, 1’s already grouped can be reused in additional groups if it allows larger grouping. Groups may overlap, and it’s often necessary to overlap to ensure all 1’s are in a group of some size (Karnaugh Maps – Rules of Simplification).
- Prime Implicants: In the context of K-maps, a prime implicant is a product term corresponding to a group of 1’s that cannot be combined with any adjacent 1’s to form a larger group. In other words, it’s a simplest (largest) grouping of 1s that yields a distinct product term. Each prime implicant represents one term in the minimal SOP (sum of products) candidate solution. Example: In a function of $A, B, C, D$, a grouping that covers cells where $A=1$ and $B=0$ (while $C,D$ vary) would yield the prime implicant $A \cdot B’$.
- Essential Prime Implicants: An essential prime implicant (EPI) is a prime implicant that covers at least one 1-cell (minterm) that no other grouping covers. Such a minterm is not covered by any other prime implicant, so that implicant is essential for the final expression. In solving, first identify all prime implicants, then find which of those cover any “uncovered 1’s” uniquely – those are essential and must be included in the final simplified expression. Remaining 1’s (if any) can then be covered by selecting some of the non-essential prime implicants in a minimal way.
- Don’t Cares: Don’t care conditions (marked “–” on K-maps) can be treated as either 0 or 1 in whatever way helps maximize grouping. You can include them in groups of 1’s if it allows a larger group (thus simplifying the expression), or ignore them if they would impede simplification. Since they represent conditions where the function output doesn’t matter, you have the freedom to use them advantageously.
- Simplification Strategy: Place 1’s and don’t-cares on the K-map. Then: (1) Find all essential groups – usually obvious large groups that cover isolated 1’s. (2) Cover remaining 1’s with the largest possible groups, favoring any still uncovered. (3) Make sure every 1 is covered by at least one group. Write a product term for each group: for an $n$-cell grouping, the term will be the conjunction of the variables that do not change within that group (a variable is 1 for all cells in the group or 0 for all cells in the group). For example, a 4-cell group covering cases where $A=1$ and $B$ varies, $C=0$ and $D$ varies yields the term $A \cdot C’$ (because $A$ is 1 and $C$ is 0 throughout that block, while $B,D$ both take 0 and 1 in the group and thus cancel out). The simplified SOP expression is the OR of all such terms. If a minimal POS is required instead, you would group 0’s (or treat 0’s as 1’s of the inverted function) and form sum terms dually.
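After reading the groups off a K-map, it is cheap to verify the result by exhaustively comparing the simplified expression against the original minterm list (don't-cares excluded). A sketch with an illustrative example function:

```python
from itertools import product

MINTERMS = {0, 2, 5, 7, 8, 10, 13, 15}    # F = Σm(...), the 1-cells on the map

def original(i):
    return i in MINTERMS

def simplified(a, b, c, d):                # candidate minimal SOP read off the map: BD + B'D'
    return (b and d) or ((not b) and (not d))

dont_cares = set()                         # add minterm indices here if any
for i, (a, b, c, d) in enumerate(product((0, 1), repeat=4)):
    if i in dont_cares:
        continue
    assert original(i) == bool(simplified(a, b, c, d)), f"mismatch at minterm {i}"
print("OK: the grouped terms cover exactly the specified 1-cells")
```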
- Hazards: When using K-maps, overlapping groups can also help eliminate static hazards (unwanted output glitches when inputs change) by including redundant groups to cover vulnerable transitions. A 1-hazard (glitch from 1→0→1) can be eliminated by ensuring that for any single-bit change of inputs, the circuit remains covered by a term. This sometimes means adding a prime implicant that isn’t strictly necessary for logical correctness but covers a transition (the consensus term). The K-map can reveal where adjacent 1’s are not all grouped together, indicating a potential hazard – adding the missing group (even if it covers already-covered 1’s) yields a hazard-free implementation (Digital Electronics/Lecture Karnaugh Map Reductions – Wikiversity).
Combinational Circuits
- Combinational vs Sequential: A combinational circuit produces outputs that depend solely on the current inputs (there is no memory of past inputs). This means for any given set of inputs, the output is immediately (or after propagation delay) determined by some boolean function of those inputs (Output $= f(\text{Input})$) (Adders and Subtractors in Digital Logic | GeeksforGeeks). In contrast, sequential circuits (next section) have memory elements and the outputs depend on past inputs as well. Key combinational building blocks include adders, decoders, multiplexers, etc., which are often used as components inside larger sequential systems.
- Decoder: A decoder is a circuit with $n$ inputs and $2^n$ outputs. It asserts (sets to 1) the output line corresponding to the binary number represented by its $n$ input lines, and sets all other outputs to 0. In other words, it decodes an $n$-bit binary code into a one-hot output. For example, a 3-to-8 decoder takes inputs $A_2 A_1 A_0$ and activates one of 8 outputs $Y_0$–$Y_7$ (with $Y_5=1$ indicating input $101_2$). Decoders are often used to generate minterms: each output represents a minterm of the inputs (e.g. $Y_5 = A_2\bar{A_1}A_0$ for input 5). To implement a Boolean function using a decoder, one can OR together the decoder outputs corresponding to the function’s minterms (since each output is 1 for one input combination).
- Encoder: An encoder performs the reverse operation of a decoder. It has $2^n$ (or fewer) inputs and $n$ outputs. Exactly one input is assumed to be 1 at any time (or, if several may be active, a priority encoder decides which one to encode). The outputs generate the binary index of the active input. For example, an octal-to-binary encoder takes 8 input lines and outputs a 3-bit binary number. If input $D_5$ is 1 (and others 0), the output ($A B C$) will be 101. In general, $A = D_4 + D_5 + D_6 + D_7,$ $B = D_2 + D_3 + D_6 + D_7,$ $C = D_1 + D_3 + D_5 + D_7$ for an 8-to-3 encoder (assuming exactly one input is active at a time). Priority encoders output the code for the highest-priority active input and often have an extra output to indicate “no valid input” when all inputs are 0.
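A sketch of the 3-to-8 decoder from the previous item and the 8-to-3 encoder OR-equations above, modelled with plain Python lists (function names are illustrative):

```python
def decoder_3to8(a2, a1, a0):
    """Return outputs Y0..Y7; exactly one is 1 (one-hot)."""
    index = 4 * a2 + 2 * a1 + a0
    return [1 if i == index else 0 for i in range(8)]

def encoder_8to3(d):
    """d = [D0..D7], assumed one-hot; returns (A, B, C) = binary index."""
    A = d[4] | d[5] | d[6] | d[7]     # A = D4 + D5 + D6 + D7
    B = d[2] | d[3] | d[6] | d[7]     # B = D2 + D3 + D6 + D7
    C = d[1] | d[3] | d[5] | d[7]     # C = D1 + D3 + D5 + D7
    return A, B, C

outputs = decoder_3to8(1, 0, 1)       # input 101 (5) activates Y5
print(outputs)                        # [0, 0, 0, 0, 0, 1, 0, 0]
print(encoder_8to3(outputs))          # (1, 0, 1)
```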
- Multiplexer (MUX): A multiplexer is a data selector. A $2^n$-to-1 MUX has $2^n$ data inputs, $n$ select inputs, and one output. The select lines choose which one of the data inputs is connected to the output. For example, a 4-to-1 MUX has 2 select lines $S_1,S_0$ and output $Y = D_0\bar{S_1}\bar{S_0} + D_1\bar{S_1}S_0 + D_2 S_1\bar{S_0} + D_3 S_1 S_0$ (this is a canonical SOP where each term is the data line ANDed with the select pattern that picks it). Multiplexers can implement arbitrary functions: any Boolean function of $n$ variables can be realized by a $2^n$-to-1 MUX by using the $n$ variables as select lines and tying the $2^n$ data inputs to 0 or 1 as needed. For example, an 8-to-1 MUX can implement any 3-variable function by feeding the truth-table values into the 8 data inputs. A $2^n$-to-1 MUX can even implement an $(n+1)$-variable function: use $n$ variables on the select lines and feed the remaining variable (true, complemented, 0, or 1, via an inverter where needed) into the data inputs. MUXes are very useful for building logic with a limited set of gates or for time-multiplexing data signals.
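A sketch of an 8-to-1 MUX realizing an arbitrary 3-variable function by hard-wiring the truth-table column to the data inputs (the example function is illustrative):

```python
def mux8(data, s2, s1, s0):
    """data = [D0..D7]; return the input addressed by the select lines."""
    return data[4 * s2 + 2 * s1 + s0]

# Truth-table outputs for F(A, B, C) = A xor B xor C, rows 000..111:
D = [0, 1, 1, 0, 1, 0, 0, 1]

for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            assert mux8(D, a, b, c) == a ^ b ^ c
print("the MUX with hard-wired data inputs reproduces F = A xor B xor C")
```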
- Demultiplexer (DeMUX): The inverse of a MUX – a 1-to-$2^n$ demultiplexer takes one input and $n$ select lines, and routes the single input to one of $2^n$ outputs (all other outputs get 0). It’s essentially a decoder combined with a gating input. If the input is considered a “data” signal, a DeMUX can distribute that input to one of many outputs. DeMUXes are used in routing circuits and for implementing functions similarly to decoders (the output lines can be thought of as minterms gated by the input signal).
- Half Adder: An adder circuit produces a sum and carry from binary addition. A half adder adds two 1-bit inputs $A$ and $B$. It outputs Sum = $A \oplus B$ and Carry = $A \cdot B$. In other words, if $A$ and $B$ are both 1, the sum is 0 with a carry-out of 1 (since $1+1=10_2$); otherwise the carry is 0 and the sum is just the XOR (which is 1 if exactly one of $A,B$ is 1) (Half Adder Using Verilog – GeeksforGeeks). Example: $A=1,B=0$ gives Sum=1, Carry=0; $A=1,B=1$ gives Sum=0, Carry=1.
- Full Adder: A full adder adds three 1-bit inputs: $A$, $B$, and an incoming carry $C_{\text{in}}$. It produces $\text{Sum} = A \oplus B \oplus C_{\text{in}}$ and $\text{Carry-out} = (A \cdot B) + (B \cdot C_{\text{in}}) + (A \cdot C_{\text{in}})$. In words, it adds $A$ and $B$ and the carry-in, outputting a sum bit and a carry-out. For example, if $A=1,B=1,C_{\text{in}}=1$, then sum = 1 (odd number of 1’s) and carry-out = 1 (because at least two inputs are 1). A full adder can be built from two half-adders (add $A$ and $B$, then add $C_{\text{in}}$ to the sum) with an OR gate for carry-out. Multiple full adders can be cascaded to add multi-bit numbers (with each $C_{\text{out}}$ feeding into the next adder’s $C_{\text{in}}$). In an $n$-bit ripple-carry adder, $n$ full adders are chained so that the carry ripples from the LSB to MSB. The carry-out of the last stage is the final carry.
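A behavioural sketch of the full-adder equations and a ripple-carry chain built from them (bit-list convention: index 0 = LSB; names are illustrative):

```python
def full_adder(a, b, cin):
    s = a ^ b ^ cin                               # Sum = A xor B xor Cin
    cout = (a & b) | (b & cin) | (a & cin)        # Carry-out = AB + BCin + ACin
    return s, cout

def ripple_add(A, B):
    """A, B: equal-length bit lists, LSB first. Returns (sum_bits, carry_out)."""
    carry, out = 0, []
    for a, b in zip(A, B):
        s, carry = full_adder(a, b, carry)        # carry ripples into the next stage
        out.append(s)
    return out, carry

# 0b0110 (6) + 0b0111 (7) = 0b1101 (13), LSB first:
print(ripple_add([0, 1, 1, 0], [1, 1, 1, 0]))     # ([1, 0, 1, 1], 0)
```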
- Subtractors: Similar to adders, a half subtractor subtracts two bits $A - B$, outputting a Difference bit and a Borrow output. The difference is $A \oplus B$ (if $B$ is 1, it flips $A$), and borrow-out is $A’ \cdot B$ (meaning “to subtract B from A, a borrow is needed when $A=0, B=1$”). A full subtractor subtracts $A - B - B_{\text{in}}$ (where $B_{\text{in}}$ is an input borrow), producing Difference = $A \oplus B \oplus B_{\text{in}}$ and Borrow-out = $(A’ \cdot B) + (A’ \cdot B_{\text{in}}) + (B \cdot B_{\text{in}})$ (one formulation). Usually, instead of designing separate subtractors, we use adders with two’s complement: to subtract $B$ from $A$, take the two’s complement of $B$ and add to $A$ (this automatically handles the borrow/negative results).
- Adder Optimizations: The straightforward ripple-carry adder is simple but slow (the carry propagates through all bits). Carry Lookahead Adders use additional logic to compute carries faster by using generate/propagate concepts: for each bit, define $g_i = A_i B_i$ (carry generate) and $p_i = A_i \oplus B_i$ (carry propagate). Then $C_{i+1} = g_i + (p_i \cdot C_i)$. Lookahead circuits use these to compute carry signals in parallel for groups of bits, achieving $O(\log n)$ carry delay instead of $O(n)$. For instance, a 4-bit carry lookahead unit can compute all four carry-outs almost at once, significantly speeding up addition. Other adder types include carry-save adders (used to sum multiple numbers without immediately propagating carries) and prefix adders (e.g. Kogge-Stone) for very fast addition.
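A sketch of 4-bit carry-lookahead carry generation, with the recurrence $C_{i+1} = g_i + p_i C_i$ expanded so every carry depends only on the operand bits and $C_0$:

```python
def lookahead_carries(A, B, c0=0):
    """A, B: 4-bit lists, LSB first. Returns [C1, C2, C3, C4]."""
    g = [a & b for a, b in zip(A, B)]     # generate terms g_i = A_i B_i
    p = [a ^ b for a, b in zip(A, B)]     # propagate terms p_i = A_i xor B_i
    c1 = g[0] | (p[0] & c0)
    c2 = g[1] | (p[1] & g[0]) | (p[1] & p[0] & c0)
    c3 = g[2] | (p[2] & g[1]) | (p[2] & p[1] & g[0]) | (p[2] & p[1] & p[0] & c0)
    c4 = (g[3] | (p[3] & g[2]) | (p[3] & p[2] & g[1]) | (p[3] & p[2] & p[1] & g[0])
          | (p[3] & p[2] & p[1] & p[0] & c0))
    return [c1, c2, c3, c4]

# 0b0110 (6) + 0b0111 (7): carry out of each bit position
print(lookahead_carries([0, 1, 1, 0], [1, 1, 1, 0]))   # [0, 1, 1, 0]
```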
- Comparators: A comparator checks the relationship between two binary numbers. A simple 1-bit comparator outputs $A>B$, $A<B$, and $A=B$ signals (with exactly one true). For multi-bit numbers, comparators are built hierarchically or iteratively: e.g. an $n$-bit comparator can use cascaded compare logic. Typical outputs: Equal bit which is 1 if all bits compare equal, and Greater/Less which can be derived by looking at the highest-order bit at which the inputs differ. (In practice, one can build a 4-bit comparator using XOR gates for equality of each bit and priority logic for greater/less, or use arithmetic subtraction and check sign/zero of result).
- Common Combinational Blocks: Other common blocks include ALUs (Arithmetic Logic Units) which combine several operations (add, subtract, bitwise AND, OR, etc.) selectable by control inputs, Parity generators/checkers (which use XOR trees to produce parity bits – XOR of all data bits yields parity; e.g. a 4-bit even parity generator outputs $P = A \oplus B \oplus C \oplus D$), and code converters (like binary-to-BCD converters for display, Gray-to-binary converters, etc.). Many of these can be constructed from the above basic components or directly from logic gates using K-map simplification if needed.
Sequential Circuits (Flip-Flops & Counters)
- Sequential Logic Basics: Sequential circuits have memory; their outputs depend on both current inputs and past history (state). They are typically built using flip-flops (binary storage elements) interconnected with combinational logic. A synchronous sequential circuit changes state only at discrete clock ticks (synchronized by a global clock), whereas an asynchronous sequential circuit can change state at any time as inputs change (using feedback without a global clock). Most designs use synchronous logic for predictability. In synchronous sequential circuits, changes in input affect the stored state only on clock transitions. Asynchronous sequential circuits are more prone to hazards and complexity, so they are less common in practice for large designs.
- Latches vs. Flip-Flops: Both are bistable devices (storage elements) but operate differently. A latch is level-sensitive: it is transparent (i.e., output follows input) when enabled, and holds its last value when not enabled. A flip-flop is edge-triggered: it captures the input only on a triggering clock edge (rising or falling edge) and then locks the output until the next trigger. In simple terms, a flip-flop is typically built from two latches (master and slave) to be edge-triggered. Latches are asynchronous (no clock required, just an enable signal level), while flip-flops are synchronous (clocked). Example: an RS latch is level-sensitive with Set and Reset inputs; an edge-triggered D flip-flop will sample the D input only at the clock edge. Latches can lead to transparency issues or race conditions in design, so edge-triggered flip-flops are generally preferred in synchronous systems.
- Flip-Flop Types: The standard flip-flops are defined by their characteristic tables (behavior) and excitation tables (inputs needed for a desired transition):
- SR (Set-Reset) flip-flop: Has two inputs S and R. On activation (clock edge for flip-flop or level for latch), if $S=1,R=0$ it sets output $Q$ to 1; if $S=0,R=1$ it resets $Q$ to 0; if $S=R=0$, $Q$ remains unchanged (holds state); $S=R=1$ is an invalid input combination (not allowed, as it would attempt to set and reset at once). This basic SR behavior is often implemented as a latch. Edge-triggered versions (master-slave or using gating) ensure stability during clock transitions.
- D (Data or Delay) flip-flop: Has one data input $D$. On the clock edge, it samples $D$ and transfers it to the output $Q$. Thus $Q^+ = D$ (next state equals input). If $D=1$, it sets; if $D=0$, it resets. It has no invalid state (any stable $D$ is fine). D flip-flops are the most common storage elements (used for registers, counters, etc.) because of their simplicity – they directly store the input bit.
- JK flip-flop: A refinement of SR that avoids the invalid state. Inputs $J$ and $K$ correspond to “set” and “reset” like $S$ and $R$, but when $J=K=1$, it performs a toggle, i.e. $Q$ flips to the opposite of its current state. The characteristic equation is $Q^+ = J\bar{Q} + \bar{K}Q$ (if $J=1,K=0$ set to 1; $J=0,K=1$ reset to 0; $J=K=0$ hold; $J=K=1$ $Q^+ = \bar{Q}$). JK flip-flops can implement T (toggle) behavior and are versatile. However, a JK latch can suffer from race-around if $J=K=1$ and the clock is high for an extended period – $Q$ might toggle repeatedly. This is cured by using a master-slave configuration or edge triggering, so that the input is latched first and then applied, preventing multiple toggles within one clock pulse. In a master-slave JK, the “master” latch captures on one level of clock and the “slave” outputs on the opposite, thus ensuring one toggle per pulse.
- T (Toggle) flip-flop: Has one input T. If $T=1$, the flip-flop toggles its state on the clock edge; if $T=0$, it holds its state. This is essentially a JK with $J=K=T$, or a D flip-flop with $D = Q_{\text{prev}} \oplus T$. T flip-flops are useful for building counters that toggle on certain events (e.g. ripple counters). A T flip-flop can be made from a JK by tying $J=K=T$, or from a D by using an XOR with the current state.
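The characteristic equations of the four flip-flop types above, written as next-state functions ($Q^+$ given the current $Q$ and the inputs); a minimal sketch:

```python
def next_sr(q, s, r):
    assert not (s and r), "S = R = 1 is the invalid input combination"
    return s | (q & (1 - r))                 # Q+ = S + R'Q

def next_d(q, d):
    return d                                 # Q+ = D

def next_jk(q, j, k):
    return (j & (1 - q)) | ((1 - k) & q)     # Q+ = JQ' + K'Q  (J=K=1 toggles)

def next_t(q, t):
    return q ^ t                             # Q+ = Q xor T

print(next_jk(0, 1, 1), next_jk(1, 1, 1))    # toggle: 1, 0
print(next_t(1, 0), next_t(1, 1))            # hold 1, toggle to 0
```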
- Flip-Flop Timing Parameters: In practical flip-flops, propagation delay is the time from clock edge (or input change for latches) to the output change. Setup time ($t_s$) is the minimum time before the clock edge that the input must be stable (not changing) for reliable capture. Hold time ($t_h$) is the minimum time after the clock edge that the input must remain stable. Violating setup/hold can cause metastability (the flip-flop output oscillates or settles unpredictably). When designing sequential circuits, ensure that data signals respect the flip-flop’s setup/hold with respect to the clock (often by adding delays or meeting timing constraints) – e.g. synchronous systems have a global clock period long enough to accommodate combinational propagation plus setup time.
- Counters: Counters are sequential circuits that cycle through a sequence of states (usually a binary sequence). An $n$-bit binary counter counts in binary from $0$ to $2^n - 1$ (total $2^n$ states) and then wraps around. Counters can be asynchronous (ripple) or synchronous. In an asynchronous (ripple) counter, the flip-flops are chained so that each flip-flop toggles based on the previous flip-flop’s output (e.g. each T flip-flop’s toggle input is driven by the output of the prior stage). The clock drives the LSB flip-flop, and each subsequent flip-flop toggles when the previous one goes from 1 to 0 (on its trailing edge) – effectively the toggle travels like a ripple. Ripple counters are simple but suffer cumulative propagation delay (each bit’s change waits for the previous to toggle) and can produce temporary spurious binary outputs during counting transitions. In a synchronous counter, all flip-flops receive the clock together and some combinational logic determines each flip-flop’s toggling or next state. This way, all bits update synchronously on the same clock edge, eliminating ripple delays. Synchronous counters are faster and outputs change in unison, at the cost of slightly more complex logic. A common design is to use T flip-flops where each bit toggles when all lower-order bits are at 1 (so that it mimics the binary increment pattern).
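A behavioural sketch of the synchronous design just mentioned: a 3-bit counter from T flip-flops, where each bit's toggle input is the AND of all lower-order bits ($T_0 = 1$):

```python
def tick(state):
    """state: [Q0, Q1, Q2], LSB first; returns the state after one clock edge."""
    t = [1, state[0], state[0] & state[1]]        # toggle conditions per bit
    return [q ^ ti for q, ti in zip(state, t)]    # every bit updates on the same edge

s = [0, 0, 0]
seen = []
for _ in range(9):
    seen.append(s[2] * 4 + s[1] * 2 + s[0])
    s = tick(s)
print(seen)    # [0, 1, 2, 3, 4, 5, 6, 7, 0] -- wraps around after 2^3 states
```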
- Mod-$N$ Counters: Many counters don’t use all $2^n$ states of an $n$-bit register. A mod-$N$ counter cycles through $N$ states (from 0 to $N-1$) then resets to 0. If $N$ is not a power of 2, the counter can be implemented by using the next power of 2 bits and then detecting when the count hits $N$ to asynchronously reset to 0 (or to load a recycle value). Alternatively, design a custom state machine with $n = \lceil \log_2 N \rceil$ flip-flops that goes through exactly $N$ distinct states (which might not be a full binary sequence). The unused states in the binary sequence can either be treated as don’t-cares (if using K-map design) or safely mapped to loop back to valid states. For example, a mod-10 (BCD) counter can be made with a 4-bit binary counter that resets to 0 when it reaches 1010 (decimal 10).
- Counter Implementations: Two special cyclic counters are the ring counter and Johnson counter: A ring counter with $n$ flip-flops circulates a single 1 (or 0) through the flip-flops. For example, a 4-bit ring counter with initial state 0001 will go 0001 → 0010 → 0100 → 1000 → back to 0001. It has $n$ states (one-hot: $n$ states for $n$ flip-flops). A twisted ring counter (Johnson counter) feeds the complement of the last flip-flop back into the first. It generates a sequence of length $2n$ states by cycling through patterns (for 4 bits, it goes through 8 states, e.g. 0000→1000→1100→1110→1111→0111→0011→0001→0000…). Johnson counters produce a Gray-code-like sequence and use the all-0s and all-1s patterns as well. They have $2n$ states using $n$ flip-flops. Both ring and Johnson counters are often used for sequence generation (like scan patterns, LED chasers) and are inherently glitch-free on single-bit transitions.
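A shift-register sketch of both counters; the shift directions are chosen so the printed sequences match the two examples above (state lists are printed MSB-to-LSB, names are illustrative):

```python
def ring_step(state):
    # ring counter: rotate, wrapping the first flip-flop's output around to the end
    return state[1:] + [state[0]]

def johnson_step(state):
    # Johnson counter: shift, feeding the COMPLEMENT of the last flip-flop into the first
    return [1 - state[-1]] + state[:-1]

s = [0, 0, 0, 1]                              # ring counter initialised to 0001
for _ in range(4):
    s = ring_step(s)
    print("".join(map(str, s)), end=" ")      # 0010 0100 1000 0001 -> n = 4 states
print()

s = [0, 0, 0, 0]                              # Johnson counter starting from 0000
for _ in range(8):
    s = johnson_step(s)
    print("".join(map(str, s)), end=" ")      # 1000 1100 1110 1111 0111 0011 0001 0000
print()
```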
- Applications of Counters: Counters can act as frequency dividers (a binary ripple counter divides clock frequency by 2,4,8,… at each stage), as timebase generators, or to create specific time sequences (like traffic light controllers). By adding some gating logic, counters can be made to count up, count down, or count through arbitrary sequences. For example, an up/down counter has a control input to choose counting up (increment) or down (decrement). The design would adjust the logic feeding each flip-flop (for synchronous) or the toggling behavior accordingly.
- Finite State Machines (FSMs): A sequential circuit can be abstracted as an FSM with a finite number of states, state transitions on each clock, and outputs that depend on the state (and possibly inputs). There are two common modeling types: Moore machines (outputs depend only on the current state) and Mealy machines (outputs depend on current state and current inputs). In a Moore FSM, each state has fixed outputs, while in a Mealy FSM, transitions can produce different outputs based on inputs. A consequence is that Mealy outputs can change in direct response to input changes (in a synchronous implementation the output in a given cycle depends on the current input as well as the registered state), whereas Moore outputs change only on state changes (just after clock edges). Thus, Mealy machines tend to react faster to inputs (outputs can appear in the same cycle as an input change) but Moore machines are safer (outputs are stable except at discrete state changes). For design: decide on state encoding (binary, one-hot, Gray, etc.), derive the state transition table or diagram, then implement with flip-flops (for state memory) and combinational logic for next-state and output logic.
- Mealy vs Moore Example: Consider a sequence detector that outputs 1 when a certain pattern is recognized. A Mealy design might output 1 as soon as the last bit of the pattern is seen (in the same cycle), whereas a Moore design might wait until the next clock to enter a “pattern found” state (outputting 1 in that state). Generally: Mealy outputs = f(state, input), Moore outputs = f(state). In practice, any Mealy machine can be converted to Moore and vice versa (with possible increase in state count). Designers often choose Moore for simplicity and glitch-free outputs, or Mealy for minimum states. Tip: In exam questions, if asked which is which: Mealy’s output depends on input & state; Moore’s depends only on state (state machine – Mealy v/s. Moore – Stack Overflow) (Difference Between Mealy And Moore Machine (Comparison Table)).
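A small sketch of Mealy and Moore detectors for the overlapping pattern "11" (state names and structure are illustrative), showing the Mealy output as a function of (state, input) and the Moore output as a function of the state alone:

```python
def mealy_detect(bits):
    state, out = "S0", []                   # S0 = no 1 seen yet, S1 = one 1 seen
    for b in bits:
        out.append(1 if (state == "S1" and b == 1) else 0)   # output = f(state, input)
        state = "S1" if b == 1 else "S0"
    return out

def moore_detect(bits):
    state, out = "S0", []                   # S2 = "11 just seen", the only 1-output state
    for b in bits:
        out.append(1 if state == "S2" else 0)                # output = f(state) only
        if state in ("S1", "S2") and b == 1:
            state = "S2"
        elif b == 1:
            state = "S1"
        else:
            state = "S0"
    return out

bits = [0, 1, 1, 1, 0, 1, 1]
print(mealy_detect(bits))   # [0, 0, 1, 1, 0, 0, 1] -- asserts in the same cycle as the 2nd 1
print(moore_detect(bits))   # [0, 0, 0, 1, 1, 0, 0] -- asserts one cycle later, from the "11 seen" state
```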
- Sequential Design Considerations: When designing sequential circuits, ensure that you consider the initial state (flip-flops on power-up can start in 0 or 1 randomly unless forced by a reset). Many counters and FSMs include a reset input to set a known start state. Also consider state minimization (merging equivalent states to simplify the FSM) and state encoding (one-hot vs binary can affect complexity). One-hot (each state uses a dedicated flip-flop) often simplifies combinational logic at the cost of more flip-flops, whereas dense encodings use fewer flip-flops but more complex logic. Use Karnaugh maps or boolean algebra to simplify the next-state and output logic when implementing the FSM.
- Setup/Hold and Clocking: In high-speed sequential logic, pay attention to clock skew (the clock reaching different flip-flops at slightly different times) which can violate assumptions of simultaneity. Synchronous design practices (like using a single clock source and proper timing analysis) ensure reliable operation. Use registers (groups of flip-flops sharing a clock) to store multi-bit values like counters, and consider clock enable signals if you want a counter to hold its value (you can gate the clock or use an enable in the logic to conditionally update state). Avoid asynchronous feedback loops without careful analysis, as they behave like asynchronous sequential circuits. If you must use an asynchronous input (like a button press) in a synchronous system, synchronize it to the clock using flip-flops to prevent metastability.
With these notes – covering boolean algebra fundamentals, minimization techniques (K-maps), common combinational circuits, and sequential building blocks (flip-flops, counters, FSMs) – you should be equipped to tackle typical problems in digital logic design. This includes simplifying logic expressions, analyzing or designing circuits with gates, doing number base conversions and arithmetic, designing or analyzing counters and sequential circuits, and understanding the behavior of different logic components. Use the boolean laws for algebraic simplification, Karnaugh maps for quick minimization up to 4-5 variables, and systematically apply sequential logic design steps for flip-flop based circuits. Good luck with your exam preparations!