CS4226 Cheatsheet

Performance Metrics

  • Bandwidth / Link Rate / Capacity
    • Maximum theoretical rate
  • Throughput
    • Maximum feasible (actually achievable) rate
      • Subject to additional end-to-end delays:
        • transmission: (packet size) / (link rate)
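          • Worked example (illustrative numbers): a 1500 B packet on a 1 Mbps link takes 12,000 bits / 10^6 bps = 12 ms to transmit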
        • propagation: travel time in link
        • processing
        • queueing
          • Little's Law: L = \lambda W
            • E[# of customers] = Rate x E[time in system]
            • L = E[# of customers] (LENGTH)
            • \lambda: Arrival RATE
            • W: E[time in system] (WAITING / sojourn TIME)
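            • Worked example (illustrative numbers): \lambda = 10 packets/s and W = 0.5 s give L = 10 \times 0.5 = 5 packets in the system on average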
          • Model the difference between arrival times (T) as a continuous independent RV (exponentially distributed)
            • Note: Processing time is also exponentially distributed (Q16, Midterms)
            • P(T > t) = e^{-\lambda t}
            • P(T \leq t) = 1 - e^{-\lambda t}
            • Mean inter-arrival time: \frac{1}{\lambda}
            • Memoryless: P\{T > s+t \mid T > s\} = P\{T > t\} (W2 Slide 7)
              • Given s time has already elapsed, the probability of waiting at least t more is still P(T > t)
              • Proof: Tut2 Q2
              • Time elapsed doesn't affect probability
            • Merging: Expected minimum waiting time of multiple events/flows (see the simulation sketch after this block)
              • E[\min(T_1, T_2)]
              • P(\min(T_1, T_2) > t) = P(T_1 > t \wedge T_2 > t) = e^{-\lambda_1 t} e^{-\lambda_2 t} = e^{-(\lambda_1 + \lambda_2)t}
                • Mean: \frac{1}{\lambda_1 + \lambda_2}
            • Comparison: Probability that exponential RV X > exponential RV Y (i.e. P(X > Y)):
              • \frac{\lambda_y}{\lambda_x + \lambda_y}
            • Poisson Distribution: number of times an event occurs in an interval of time: \Pr(X = k) = \frac{\lambda^k e^{-\lambda}}{k!}
              • \lambda: expected number of arrivals in the interval (arrival rate \times interval length)
              • k: number of occurrences
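A quick Monte Carlo sanity check of the merging and comparison formulas above; a minimal sketch with made-up rates (lam1 = 2, lam2 = 3), not from the course materials:

```python
import random

random.seed(0)
N = 100_000
lam1, lam2 = 2.0, 3.0   # hypothetical arrival rates

# Draw N inter-arrival samples from each exponential distribution.
t1 = [random.expovariate(lam1) for _ in range(N)]
t2 = [random.expovariate(lam2) for _ in range(N)]

# Merging: E[min(T1, T2)] should approach 1/(lam1 + lam2) = 0.2
print(sum(min(a, b) for a, b in zip(t1, t2)) / N)

# Comparison: P(T1 > T2) should approach lam2/(lam1 + lam2) = 0.6
print(sum(a > b for a, b in zip(t1, t2)) / N)
```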
          • M/M/1:
            • Arrival Rate: \lambda = \mu - \frac{1}{E[W]}
            • Server Processing Rate: \mu = \lambda + \frac{1}{E[W]}
            • Utilization: \rho = \frac{\lambda}{\mu}
              • % of time (in steady state) that the server is busy, i.e. the probability of \geq 1 item in the system
              • Derivation: apply Little's Law to the server alone: \rho = \lambda E[W_s], where E[W_s] = \frac{1}{\mu}
            • Exact number of customers: \pi_i = P(L = i) = \rho^i (1 - \rho)
              • % of time / probability that there are i items in queue + server
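              • Worked example (illustrative): \rho = 0.5 gives \pi_0 = 0.5, \pi_1 = 0.25, \pi_2 = 0.125, \ldots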
            • E[# things in queue + server]: E[L] = \lambda E[W] = \frac{\rho}{1-\rho}
            • E[# things in queue]: E[Q] = E[L] - \rho = \frac{\rho^2}{1-\rho}
            • E[time in queue + server]: E[W] = \frac{1}{\mu - \lambda}
            • E[time in server]: E[W_s] = \frac{1}{\mu}
            • E[time in queue]: E[D] = E[W] - \frac{1}{\mu}
            • Server stability requirement: \lambda < \mu
            • Burke's Theorem: the departure process of a stable M/M/1 queue is Poisson with the same rate (outgoing \lambda = incoming \lambda)
            • Jackson Network (worked sketch below):
              • 1) Define each node's arrival rate in terms of the others (traffic equations)
              • 2) Solve all rates, starting with the equation that depends only on constants
              • 3) Treat each node as an independent M/M/1 queue and calculate what you need
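A minimal worked sketch of steps 1–3 with made-up numbers (not from the slides): external rate r enters node 1, all of node 1's output goes to node 2, and node 2 feeds a fraction p back to node 1. The per-node metrics reuse the M/M/1 formulas above.

```python
# Hypothetical two-node Jackson network (all numbers made up).
r, p = 2.0, 0.25
mu = [4.0, 5.0]                 # assumed service rates of nodes 1, 2

# Step 1: traffic equations: lam1 = r + p*lam2, lam2 = lam1
# Step 2: solve, starting from the relation involving only constants.
lam1 = r / (1 - p)
lam2 = lam1

# Step 3: by Jackson's theorem, treat each node as an independent M/M/1.
for node, (lam, m) in enumerate(zip([lam1, lam2], mu), start=1):
    assert lam < m, "stability requires lambda < mu"
    rho = lam / m               # utilization
    EL = rho / (1 - rho)        # E[# in queue + server]
    EW = 1 / (m - lam)          # E[time in queue + server]
    print(f"node {node}: rho={rho:.3f}, E[L]={EL:.3f}, E[W]={EW:.3f}")
```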

Resource Management

  • Max-Min Fairness: maximize the smallest flow's allocation
  • Resource R is a bottleneck for flow i iff both:
    • resource R is at 100% usage
    • flow i has the largest (weight-normalized) allocation of resource R among its flows
  • Water filling algo: Calculating theoretical max-min fairness for all resources
    • Given a resource and multiple flows for that resource:
    • Allocation to flow = capacity \times flow_{weight} / \sum_{\text{all flows}} flow_{weight}
    • When calculating final flow (see the sketch after this list):
      • 1) Calculate each flow's share at every resource independently, ignoring limits carried over from other resources.
      • 2) For each flow, set it to the minimum of its shares across its resources.
      • 3) If step 2 frees up capacity at a resource, reallocate it to the other flows there.
      • 4) Repeat steps 2 and 3 until no flow can increase anymore.
      • 5) Evaluate bottlenecks for resources: compare by largest normalized allocation (final flow amt) / (flow weight)
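A minimal sketch of weighted water-filling for a single resource (the per-resource step above); `water_fill` and all inputs are made up for illustration, and the multi-resource procedure repeats this per resource before reconciling via steps 2–4:

```python
def water_fill(capacity, weights, demands):
    """Weighted max-min allocation of one resource's capacity."""
    n = len(weights)
    alloc = [0.0] * n
    active = set(range(n))            # flows that can still grow
    remaining = capacity
    while active and remaining > 1e-12:
        total_w = sum(weights[i] for i in active)
        # Raise the common "water level": the next flow to saturate is
        # the one with the smallest weight-normalized headroom.
        inc = min((demands[i] - alloc[i]) / weights[i] for i in active)
        inc = min(inc, remaining / total_w)   # capped by leftover capacity
        for i in active:
            alloc[i] += inc * weights[i]
        remaining -= inc * total_w
        active = {i for i in active if demands[i] - alloc[i] > 1e-12}
    return alloc

# Equal weights, capacity 10: small flows get their demand, the rest
# is split evenly among the unsatisfied ones.
print(water_fill(10, [1, 1, 1], [2, 4, 10]))   # -> [2.0, 4.0, 4.0]
```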
  • Generalized Processor Sharing (GPS): Calc max-min fairness for 1 router
    • Router can process many packets at the same time
    • Same as water filling algo but on a time basis. At any time unit:
      • 1 packet: 100% resources to process packet
      • M packets: Weighted allocation to all M packets (larger weight = more compute priority)
    • Backlogged Flow: Has data in queue
      • GPS formalized (W3 Slide 18): Data processed is proportional to weight
    • Weight definition \phi_1 : \phi_2 = a : b: larger weight = more compute priority (worked example below)
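    • Worked example (illustrative numbers): two backlogged flows with \phi_1 : \phi_2 = 1 : 2 on a link of rate R are served at R/3 and 2R/3; once flow 2 empties, flow 1 gets the full R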
  • Weighted Fair Queueing (WFQ): Calc max-min fairness for 1 router (more realistic) (W3 Slide 23)
    • Packet-based, but that doesn't mean you cannot split (see Tut5 Q1 last part, where the 1st arrival departs last lol)
      • Only trust the V(t)
    • Work-Conserving: the server never idles while packets are queued (i.e. don't plan for future arrivals)
    • Concept: Ensure packets finish as close as possible to GPS
    • Weight definition a\phi_1 = \phi_2 = b: larger weight = more compute priority
      • Equate \phi_2 = b, \phi_1 = \frac{a}{b}
    • Virtual Time: # of time units completed so far if the system is running GPS
      • Global variable, 1 V(t) variable for all flows
      • Reset to 0 when all flows are empty
      • Calculate Virtual Finishing/Departure Time F_{pkt}:
        • Actual arrival/current time: t_{pkt}
        • Virtual arrival/current time: V(t_{pkt})
        • Processing time: S_{pkt}
        • Virtual Finishing time:
          • if F_{prev} \leq V(t_{pkt}): F_{pkt} = V(t_{pkt}) + S_{pkt}
            • Intuition: Packet is overdue, process it soon
          • else: F_{pkt} = F_{prev} + S_{pkt}
            • Intuition: Packet is not due yet
        • Actual Finishing time: Just order the packets
      • See W3 Slide 38
      • init) V(t) = 0, \forall flows: F_{prev} = 0
      • 1) Track packets in flows and get all Active Flows (flows with active packets)
      • 2) Slope = 1 / (sum all active flow weights)
        • Treat Slope as the t-to-V(t) conversion rate
        • Each real second that elapses advances V(t) by Slope virtual units
      • 3) If event is arrival:
        • Virtual processing time = [(packet size) / (full processing rate)] / (flow weight)
        • 4) F_i = \max(F_{prev}, V(t_{arr})) + (virtual processing time)
      • 5) Use Slope to increment V(t) to next arrival / departure time, then repeat step (1)
    • Actual time elapsed = [next V(t) - curr V(t)] * (1/Slope) (see the sketch below)
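A minimal sketch of the V(t) bookkeeping above, assuming a normalized link rate of 1 (so virtual processing time = size/weight) and hypothetical inputs; it advances V(t) piecewise-linearly, changing slope at virtual departures, then applies step 4 at each arrival:

```python
def wfq_finish_times(packets, weights):
    """packets: list of (arrival_time, size, flow_id); weights: {flow_id: weight}."""
    V, t = 0.0, 0.0                       # virtual time, real time
    F_prev = {f: 0.0 for f in weights}    # per-flow last virtual finish time
    result = []
    for arr, size, flow in sorted(packets):
        # Advance V from real time t up to this arrival.
        while t < arr:
            # A flow is backlogged in GPS iff its last finish time is ahead of V.
            active = [f for f in weights if F_prev[f] > V]
            if not active:                # system empty: reset V (per the notes)
                V = 0.0
                F_prev = {f: 0.0 for f in weights}
                t = arr
                break
            slope = 1.0 / sum(weights[f] for f in active)
            next_F = min(F_prev[f] for f in active)   # next virtual departure
            t_dep = t + (next_F - V) / slope          # real time it happens
            if t_dep >= arr:              # no slope change before the arrival
                V += (arr - t) * slope
                t = arr
            else:                         # jump to the departure; slope changes
                V, t = next_F, t_dep
        # Step 4: F = max(F_prev, V(t_arr)) + virtual processing time
        F = max(F_prev[flow], V) + size / weights[flow]
        F_prev[flow] = F
        result.append((arr, flow, F))
    return result

# Tiny check: two equal-weight flows, one unit packet each at t=0 -> both F = 1.
print(wfq_finish_times([(0, 1, "A"), (0, 1, "B")], {"A": 1, "B": 1}))
```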

SDN: Software Defined Networking

  • Networking paradigm
    • Without SDN (W4 Slide 5)
      • High Hardware & Software Coupling (Vendor-specific, proprietary)
      • Slow protocol standardization
      • Hard to innovate
      • Network admin is expensive and hard to debug
    • Goal: Abstract functionalities of router
      • +ve: Control plane can change independent of hardware
        • Competitive technology development
      • +ve: Control plane is now high-level, logically centralized
        • Easier network management
    • Router tasks:
      • Data Plane: "Switch fabric"; Processing and delivery of packets based on router's rules and endpoints
      • Control Plane: "Routing processor"; Determines how and where packets are forwarded
    • Infrastructure (W4 Slide 27)
      • Network Application (Control Plane)
        • uses the Northbound Interface API specified by the layer below
      • Network Abstractions (Control Plane)
        • gets the Global network view from the layer below
      • SDN Controllers (Control Plane)
        • use the Southbound Interface API specified by the layer below
      • Real Switches (Data Plane)
    • Abstraction Layers:
      • 1) Specification: Allow net app to express network behavior without having to implement it
      • 2) Distribution: SDN apps make decisions from a logically centralized POV of the network
      • 3) Forwarding: Implement any forwarding behavior, hiding hardware
    • OpenFlow Protocol:
      • Packet flow through an OpenFlow switch: Slides 34-36; high level: Slide 31
      • Flow Table: Slide 36