- Bandwidth / Link Rate / Capacity
- Maximum Theoretical amount
- Throughput
- Maximum Feasible amount
- Subject to additional end-to-end delays:
- transmission: (packet size) / (link rate)
- propagation: travel time in link
- processing
- queueing
- Little's Law: L=λW
- E[# of customers] = Rate x E[time in system]
- L = E[# of customers] (LENGTH)
- λ: Arrival RATE
- W: E[time in system] (WAITING / sojourn TIME)
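Little's Law is just an identity between three averages; a minimal numeric sketch (the λ and W values are made up):

```python
# Little's Law: L = λ · W
lam = 50.0   # arrival rate λ (assumed: 50 packets/second)
W = 0.2      # mean time in system E[W] (assumed: 0.2 s)

L = lam * W  # expected number of customers in the system
print(L)     # 10.0
```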
- Model the difference between arrival times (T, the inter-arrival time) as a continuous independent RV (exponentially distributed)
- Note: Processing time is also exponentially distributed (Q16, Midterms)
- P(T > t) = e^{−λt}
- P(T ≤ t) = 1 − e^{−λt}
- Mean difference (inter-arrival time): 1/λ
- Memoryless: P{T>s+t∣T>s}=P{T>t} (W2 Slide 7)
- Given that s time has already elapsed, the probability of waiting more than another t is still P(T > t)
- Proof: Tut2 Q2
- Time elapsed doesn't affect probability
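The memoryless property can be checked numerically; a tiny sketch (the rate and times are arbitrary):

```python
import math

lam, s, t = 2.0, 0.5, 1.5   # assumed rate λ and times s, t

def surv(x):
    """P(T > x) for T ~ Exp(λ)."""
    return math.exp(-lam * x)

# P(T > s+t | T > s) = P(T > s+t) / P(T > s) should equal P(T > t)
cond = surv(s + t) / surv(s)
print(cond, surv(t))
```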
- Merging: Expected minimum waiting time of multiple events/flows
- E[min(T1,T2)]
- P(min(T1,T2) > t) = P(T1 > t ∧ T2 > t) = e^{−λ1·t} · e^{−λ2·t} = e^{−(λ1+λ2)t}
- Mean: 1/(λ1 + λ2)
- Comparison: Probability that exponential RV X > exponential RV Y (i.e. P(X > Y)):
- λy/(λx + λy)
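Both the merged rate λ1+λ2 and the comparison probability λy/(λx+λy) can be sanity-checked by simulation; a sketch with assumed rates λ1 = 2, λ2 = 3:

```python
import random, statistics

random.seed(0)
l1, l2 = 2.0, 3.0  # assumed rates for the two flows

# Merging: min of two independent exponentials is Exp(λ1 + λ2),
# so its sample mean should approach 1/(λ1 + λ2) = 0.2
merged_mean = statistics.mean(
    min(random.expovariate(l1), random.expovariate(l2))
    for _ in range(100_000)
)

# Comparison: P(X > Y) = λ2/(λ1 + λ2) = 0.6 for X~Exp(λ1), Y~Exp(λ2)
p_x_gt_y = sum(
    random.expovariate(l1) > random.expovariate(l2)
    for _ in range(100_000)
) / 100_000

print(round(merged_mean, 2), round(p_x_gt_y, 2))
```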
- Poisson Distribution: Number of times an event occurs in an interval of time: Pr(X = k) = (λ^k · e^{−λ}) / k!
- λ: rate (of arrival)
- k: number of occurrences
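The pmf transcribes directly; a sketch (the λ = 4 example value is arbitrary):

```python
import math

def poisson_pmf(k, lam):
    """Pr(X = k) = λ^k · e^{−λ} / k!"""
    return lam**k * math.exp(-lam) / math.factorial(k)

# The pmf over k = 0, 1, 2, ... sums to 1 (truncated at k = 99,
# where the tail is negligible)
total = sum(poisson_pmf(k, 4.0) for k in range(100))
print(round(total, 6))  # 1.0
```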
- M/M/1:
- Arrival Rate λ = μ − 1/E[W]
- Server Processing Rate μ = λ + 1/E[W]
- Utilization ρ = λ/μ
- % of time the steady-state server is busy (≥1 item in server)
- Probability that the server is busy (≥1 item in server)
- Derivation: ρ = λ·E[Ws] (Little's Law applied to the server alone), where E[Ws] = 1/μ
- Exact number of customers: πi = P(L = i) = ρ^i · (1 − ρ)
- % of time that there are i items in queue + server
- Probability that there are i items in queue + server
- E[# things in queue + server]: E[L] = λ·E[W] = ρ/(1 − ρ)
- E[# things in queue]: E[Q] = E[L] − ρ = ρ²/(1 − ρ)
- E[time in queue + server]: E[W] = 1/(μ − λ)
- E[time in server]: E[Ws] = 1/μ
- E[time in queue]: E[D] = E[W] − 1/μ
- Server stability requirement: λ<μ
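The M/M/1 formulas above fit in a few lines; a sketch with assumed λ = 4, μ = 5, cross-checked against Little's Law:

```python
def mm1(lam, mu):
    """Steady-state M/M/1 metrics (requires λ < μ for stability)."""
    assert lam < mu, "unstable: need λ < μ"
    rho = lam / mu               # utilization
    EL = rho / (1 - rho)         # E[# in queue + server]
    EQ = rho**2 / (1 - rho)      # E[# in queue]
    EW = 1 / (mu - lam)          # E[time in queue + server]
    ED = EW - 1 / mu             # E[time in queue]
    return rho, EL, EQ, EW, ED

rho, EL, EQ, EW, ED = mm1(4.0, 5.0)  # assumed λ = 4, μ = 5
print(rho, round(EL, 6), EW)         # 0.8 4.0 1.0

# Little's Law holds: E[L] = λ·E[W]
assert abs(EL - 4.0 * EW) < 1e-9
```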
- Burke's Theorem: Outgoing λ = Incoming λ (the departure process of a stable M/M/1 queue is Poisson with the same rate λ)
- Jackson Network
- 1) Define each arrival rate in terms of each other
- 2) Solve all rates, starting with the rate that relies on constants first
- 3) Calculate what you need to calculate
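A worked instance of steps 1–3 on an assumed two-node network (external rate r enters node 1, node 1 feeds node 2, node 2 loops back to node 1 with probability p; the topology and constants are made up):

```python
# Step 1: traffic equations
#   λ1 = r + p·λ2      (external arrivals + feedback from node 2)
#   λ2 = λ1            (everything leaving node 1 enters node 2)
r, p = 3.0, 0.5        # assumed constants

# Step 2: substitute λ2 = λ1 into the first equation:
#   λ1 = r + p·λ1  =>  λ1 = r / (1 - p)
lam1 = r / (1 - p)
lam2 = lam1

# Step 3: with the rates known, each node is an M/M/1 queue again
print(lam1, lam2)      # 6.0 6.0
```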
Resource Management
- Max-Min Fairness: Maximize smallest flow
- Resource R is Bottleneck for flow I iff both:
- resource R is at 100% usage
- flow I has the largest allocation of resource R among all flows using R
- Water filling algo: Calculating theoretical max-min fairness for all resources
- Given a resource and multiple flows for that resource:
- Allocation to flow = capacity × (flow weight) / (∑ all flow weights)
- When calculating final flow:
- 1) Calculate all flows for each resource without carried over blockages.
- 2) For each flow, set it to the minimum of its resources.
- 3) If step 2 frees up a resource, allocate that resource to other flows.
- 4) Repeat step 2 and 3 until all flows cannot increase anymore.
- 5) Evaluate bottlenecks for resources: compare by largest normalized allocation (final flow amt) / (flow weight)
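For a single shared resource, the steps above reduce to progressive filling: repeatedly split spare capacity equally among flows that still want more. A sketch (unweighted; capacity and demands are made-up numbers):

```python
def max_min(capacity, demands):
    """Unweighted max-min allocation of one resource.

    Repeatedly split the remaining capacity equally among flows
    whose demand is not yet satisfied; capped flows return their
    surplus to the pool (the 'water filling' idea)."""
    alloc = {f: 0.0 for f in demands}
    active = set(demands)
    remaining = float(capacity)
    while active and remaining > 1e-12:
        share = remaining / len(active)
        remaining = 0.0
        for f in list(active):
            give = min(share, demands[f] - alloc[f])
            alloc[f] += give
            remaining += share - give        # surplus goes back
            if demands[f] - alloc[f] <= 1e-12:
                active.discard(f)            # flow is satisfied
    return alloc

print(max_min(10, {"A": 2, "B": 4, "C": 10}))
```

The weighted variant divides each flow's share by its weight, matching the allocation formula above; here A is capped at its demand of 2, and B and C split the rest to get 4 each.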
- Generalized Process Sharing (GPS): Calc max-min fairness for 1 router
- Router can process many packets at the same time
- Same as water filling algo but on a time basis. At any time unit:
- 1 packet: 100% resources to process packet
- M packets: Weighted allocation to all M packets (larger weight = more compute priority)
- Backlogged Flow: Has data in queue
- GPS formalized (W3 Slide 18): Data processed is proportional to weight
- Weight Definition φ1 : φ2 = a : b: Larger weight = more compute priority
- Weighted Fair Queueing (WFQ): Calc max-min fairness for 1 router (more realistic) (W3 Slide 23)
- Packet-based, but doesn't mean you cannot split (See Tut5 Q1 last part, 1st arrival departs last lol)
- Work-Conserving: Don't plan for future arrivals
- Concept: Ensure packets finish as close as possible to GPS
- Weight Definition a·φ1 = φ2 = b: Larger weight = more compute priority
- Equate φ2 = b, φ1 = b/a
- Virtual Time: # of time units completed so far if the system is running GPS
- Global variable, 1 V(t) variable for all flows
- Reset to 0 when all flows are empty
- Calculate Virtual Finishing/Departure Time Fpkt:
- Actual Arrival/Current time tpkt
- Virtual Arrival/Current time V(tpkt)
- Processing time Spkt
- Virtual Finishing time:
- if Fprev ≤ V(tpkt): Fpkt = V(tpkt) + Spkt
- Intuition: Packet is overdue, process it soon
- else: Fpkt=Fprev+Spkt
- Intuition: Packet is not due yet
- Actual Finishing time: Just order the packets
- See W3 Slide 38
- init) V(t)=0,∀flows:Fprev=0
- 1) Track packets in flows and get all Active Flows (flows with active packets)
- 2) Slope = 1 / (sum all active flow weights)
- Treat Slope as V(t) to t conversion rate
- For every real second that elapses, Slope units of virtual time elapse
- 3) If event is arrival:
- Virtual processing time = [(packet size) / (full processing rate)] / (flow weight)
- 4) Fi = max(Fprev,V(tarr)) + (Virtual processing time)
- 5) Use Slope to increment V(t) to next arrival / departure time, then repeat step (1)
- Actual time elapsed = [next V(t) − curr V(t)] / Slope
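Step (4), the per-packet virtual finishing time, can be sketched for a single flow, assuming the virtual clock V(t) is supplied from outside (in the full algorithm it advances with the slope from step (2); the helper name and toy numbers are mine):

```python
def virtual_finish_times(packets, rate, weight, V):
    """Compute F_i = max(F_prev, V(t_arr)) + S for each packet of
    one flow, where S = (size / rate) / weight is the virtual
    processing time. `packets` is a list of (arrival_time, size)
    pairs; V maps actual time to virtual time."""
    F_prev = 0.0
    out = []
    for t_arr, size in packets:
        S = (size / rate) / weight          # virtual processing time
        F_prev = max(F_prev, V(t_arr)) + S  # step (4)
        out.append(F_prev)
    return out

# Toy case: one always-active flow of weight 1, so V(t) = t.
print(virtual_finish_times([(0, 2), (1, 2), (10, 2)],
                           rate=1, weight=1, V=lambda t: t))
# [2.0, 4.0, 12.0]
```

The third packet arrives after the second finishes, so its finish time restarts from V(10) rather than from Fprev.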
SDN: Software Defined Networking
- Networking paradigm
- Without SDN (W4 Slide 5)
- High Hardware & Software Coupling (Vendor-specific, proprietary)
- Slow protocol standardization
- Hard to innovate
- Network admin is expensive and hard to debug
- Goal: Abstract functionalities of router
- +ve: Control plane can change independent of hardware
- Competitive technology development
- +ve: Control plane is now high-level, logically centralized
- Easier network management
- Router tasks:
- Data Plane: "Switch fabric"; Processing and delivery of packets based on router's rules and endpoints
- Control Plane: "Routing processor"; Determines how and where packets are forwarded
- Infrastructure (W4 Slide 27)
- Network Application: (Control Plane)
- Uses Northbound Interface API specified by
- Network Abstractions (Control Plane)
- Gets Global network view from
- SDN Controllers (Control Plane)
- Uses Southbound Interface API specified by
- Real Switches (Data Plane)
- Abstraction Layers:
- 1) Specification: Allow net app to express network behavior without having to implement it
- 2) Distribution: SDN apps make decisions from a logically centralized POV of the network
- 3) Forwarding: Implement any forwarding behavior, hiding hardware
- OpenFlow Protocol:
- Packet flow through OpenFlow Switch: 34-36. High level: 31
- Flow Table: Slide 36