[ Up to the Home Page of CS-534 ]
© copyright University of Crete, Greece.
Dept. of Computer Science, University of Crete.
CS-534: Packet Switch Architecture
Project Topics (Wormhole IP over ATM)
1. Hardware Design of Buffered Routing Filter
(1 or 2 persons).
Design a wormhole IP router and ATM VP/VC translator,
of the filter style (one input, one output),
with cell buffering.
Design the hardware, at the block diagram and RTL level,
and carefully measure its complexity;
estimate the approximate cost of parts.
Assume OC-12 (622 Mbit/s) line speed,
and Altera FPGA and Synchronous DRAM (SDRAM) technology.
Extrapolate on how the hardware might evolve
for OC-48 (2.5 Gbit/s) line speed.
Also examine whether a bidirectional filter
should be implemented as two independent unidirectional filters,
or as one unit with some hardware blocks
shared between the two directions.
The filter should include:
- an IP Routing Table, organized as in
"Routing Lookups in Hardware at Memory Access Speeds",
by Gupta, Lin, and McKeown,
in Proc. of IEEE Infocom, April 1998 (see
McKeown papers);
- a large VP/VC Translation Table
for both native ATM and wormhole IP traffic;
- hardware-managed free lists of VC's;
- a hardware-managed cell buffer;
- a control processor interface;
- hardware FSM's for routing table and translation table management
(by the control processor),
and for wormhole IP over ATM cell processing.
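For reference, the cited lookup scheme (DIR-24-8-BASIC)
can be sketched in software as follows;
class and field names are illustrative,
a dictionary stands in for the 2^24-entry first-level memory
of the hardware design,
and routes must be inserted shortest-prefix first
in this simplified version:

```python
# Sketch of the Gupta/Lin/McKeown two-level lookup: a table indexed by the
# top 24 bits of the destination address answers most lookups in one memory
# access; prefixes longer than 24 bits indirect into a second-level table
# indexed by the low 8 bits.  Prefixes are 32-bit values, left-aligned.

class Dir24_8:
    def __init__(self):
        self.tbl24 = {}        # top-24-bit index -> (is_pointer, value)
        self.tbl_long = []     # second-level table, 256 entries per chunk

    def add_route(self, prefix, length, next_hop):
        if length <= 24:
            base = prefix >> 8
            for i in range(base, base + (1 << (24 - length))):
                self.tbl24[i] = (False, next_hop)
        else:
            hi = prefix >> 8
            is_ptr, val = self.tbl24.get(hi, (False, None))
            if not is_ptr:
                chunk = len(self.tbl_long)
                self.tbl_long.extend([val] * 256)   # inherit covering route
                self.tbl24[hi] = (True, chunk)
            else:
                chunk = val
            lo = prefix & 0xFF
            for i in range(lo, lo + (1 << (32 - length))):
                self.tbl_long[chunk + i] = next_hop

    def lookup(self, addr):
        is_ptr, val = self.tbl24.get(addr >> 8, (False, None))
        if not is_ptr:
            return val                              # one memory access
        return self.tbl_long[val + (addr & 0xFF)]   # second memory access
```

In hardware, the first level is a directly indexed DRAM,
so the common case costs a single memory access.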
2. Forwarding Policy versus Number of VC's
versus Buffer Size versus Delay
(1 or 2 persons).
Consider a wormhole-IP-over-ATM routing filter with cell buffering,
as described above.
Propose and study various policies
for when to buffer and when to forward an incoming cell.
Estimate their performance
using trace-driven simulation.
Prescribe and/or measure the number of VC's per VP used,
the buffer space used, and the delay introduced.
Estimate their hardware implementation complexity.
Also consider the special case of a single outgoing VC
(packet re-assembly for exit into native IP).
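As a starting point, a policy can be evaluated
with a trace-driven simulation such as the sketch below.
The trace format, the single
"cut through if a VC is free, else buffer the whole packet" policy,
and the approximated finish times of drained packets
are illustrative assumptions, not part of the project statement:

```python
import collections
import heapq

def simulate(trace, num_vcs):
    """trace: time-sorted list of (arrival_time, length_in_cells) packets.
    A head cell is cut through on a free outgoing VC if one exists;
    otherwise the whole packet is buffered until a VC is recycled.
    Returns (packets_buffered, peak_buffer_cells, max_vcs_in_use)."""
    busy = []                       # min-heap of VC release times
    waiting = collections.deque()   # lengths of packets held in the buffer
    buffered_pkts = peak_buf = cur_buf = max_vcs = 0
    for t, length in trace:
        # recycle VCs whose packets finished before this arrival
        while busy and busy[0] <= t:
            heapq.heappop(busy)
            if waiting:             # a buffered packet takes over the VC
                wlen = waiting.popleft()
                cur_buf -= wlen
                heapq.heappush(busy, t + wlen)   # finish time approximated
        if len(busy) < num_vcs:
            heapq.heappush(busy, t + length)     # cut the packet through
        else:
            waiting.append(length)               # no free VC: buffer it all
            buffered_pkts += 1
            cur_buf += length
            peak_buf = max(peak_buf, cur_buf)
        max_vcs = max(max_vcs, len(busy))
    return buffered_pkts, peak_buf, max_vcs
```

More refined policies (buffer only partially, reserve VC's,
thresholds on buffer occupancy) slot into the same loop,
and the three returned figures are exactly the quantities
to be traded off in this topic.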
A sub-topic in this project is how to come up with
realistic input traces.
Normally, such traces should be generated from a simulator
of an ATM subnetwork,
subject to wormhole IP and native ATM traffic.
However,
this may be computationally expensive,
and it may be hard to find IP traffic traces
for such multi-port subnetworks.
Also,
existing traces may be from slower networks,
and may need appropriate scaling of their timestamps
in order to "satisfactorily challenge"
the fast hardware that we are designing.
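A first, admittedly crude, form of such scaling
is to compress timestamps by the ratio of the line rates;
note that this keeps the offered load constant
but does not rescale the flow dynamics (e.g. TCP feedback)
recorded in the original trace.
The function below is only a sketch of this caveat:

```python
def rescale_trace(events, old_rate_bps, new_rate_bps):
    """Compress timestamps of a trace captured on a slower link so that
    it offers the same relative load to a faster link, e.g. scaling an
    OC-3 (155 Mbit/s) trace for an OC-12 (622 Mbit/s) filter.
    events: iterable of (timestamp_seconds, payload) pairs."""
    factor = old_rate_bps / new_rate_bps   # < 1 when speeding up the link
    return [(t * factor, payload) for t, payload in events]
```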
3. Effects of Dropped Cells
(1 person).
Consider a wormhole-IP-over-ATM routing filter.
The ATM subnetwork through which the incoming traffic has passed
may drop cells.
Since that subnetwork only knows about ATM cells
and is unaware of the IP-over-ATM structure,
the cells dropped may include head and/or tail cells of IP packets.
Such missing head or tail cells
will confuse the wormhole-IP-over-ATM routing filter.
Study the implications of such events.
Perform various simulations in order to quantify your results.
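One way to start such a simulation
is to replay a cell stream, minus a set of dropped cells,
through the filter's head/tail state machine.
The cell encoding and the misroute criterion below
are simplifying assumptions of this sketch:

```python
def filter_view(cells, dropped):
    """cells: list of (pkt_id, is_head, is_tail) in arrival order on one VC;
    dropped: set of stream indices removed by the ATM subnetwork.
    The filter takes its routing decision on the first surviving cell
    after a tail.  If that cell is not a real head, the decision is made
    on payload bytes (lost head); if a tail was lost, the next packet's
    cells ride the previous packet's stale decision.  Returns the set of
    packets forwarded under a wrong routing/translation decision."""
    misrouted = set()
    expecting_head = True
    current = None
    for i, (pkt, is_head, is_tail) in enumerate(cells):
        if i in dropped:
            continue
        if expecting_head:
            current = pkt
            if not is_head:            # header cell lost: bogus decision
                misrouted.add(pkt)
            expecting_head = False
        elif pkt != current:           # tail lost: riding stale decision
            misrouted.add(pkt)
        if is_tail:
            expecting_head = True      # next cell starts a new packet
    return misrouted
```

Feeding this loop with random drops at various rates
gives a first quantitative picture
of how often head/tail losses confuse the filter.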
4. QoS for Native ATM and for IP Traffic
(1 or 2 persons).
Consider an ATM subnetwork
subject to mixed wormhole-IP and native-ATM traffic.
What service class will the VP's that carry IP traffic belong to?
What service class will the native-ATM traffic be assigned to?
Can there be QoS guarantees for both?
Is the answer as simple as high-priority-ATM and low-priority-IP,
or should we consider something more involved?
When VP's and VC's that carry IP traffic
merge from different incoming links to an outgoing link,
with what weight should each be serviced?
Can we solve the "parking lot" scheduling problem
for e.g. TCP connections running above wormhole IP over ATM
with weighted fair queueing?
Is it useful to combine
the buffer/forward policy of the "routing filters"
with the scheduling algorithms of the ATM switches?
How?
Should the number of VC's used per VP (for IP traffic)
be somehow linked to the relative number of flows
or to the relative weight for weighted fair queueing?
Is "lane hogging" ever a problem,
as it is in traditional multiprocessor wormhole (with backpressure)?
How is cell dropping related to all of the above?
Remember that TCP flow control works with dropped packets.
What is the simplest thing that an ATM switch might do
when it has to drop a cell belonging to an IP-traffic VP?
Should it be aware of the specific VC to which the cell belonged,
or can it do something independent of VC number?
(e.g. mark the next cell of that VP in a particular special way?)
Consider the following scheduling policy for ATM switches
that use per-connection queueing:
After forwarding a cell from an IP-type queue (connection),
give priority to this queue over all other IP-type queues
until an end-of-packet cell is forwarded from that queue.
This policy minimizes the "spreading" of each IP packet in time
and the interleaving of IP packets with each other.
A claimed advantage of this policy (e.g. Myricom)
is that it reduces message delivery time under some circumstances
(BTW, do "messages" consist of single or multiple IP packets?);
on the other hand,
it causes small packets to suffer long delays
when they happen to fall behind long packets.
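The policy above can be sketched as follows;
the queue structure, the names,
and the round-robin order among unlocked queues
are assumptions of this sketch, not part of any switch design:

```python
import collections

class PacketLockScheduler:
    """Per-connection queueing with packet priority: once a cell from an
    IP-type queue is forwarded, that queue keeps priority over all other
    IP-type queues until its end-of-packet (EOP) cell goes out, so the
    cells of one IP packet are not interleaved with other packets'."""

    def __init__(self):
        self.queues = collections.OrderedDict()   # conn_id -> deque of cells
        self.locked = None                        # queue currently mid-packet

    def enqueue(self, conn, cell, is_eop):
        self.queues.setdefault(conn, collections.deque()).append((cell, is_eop))

    def dequeue(self):
        # a queue that is mid-packet keeps priority until its EOP cell leaves
        if self.locked is not None and self.queues.get(self.locked):
            conn = self.locked
        else:
            conn = next((c for c, q in self.queues.items() if q), None)
            if conn is None:
                return None                       # all queues empty
        cell, is_eop = self.queues[conn].popleft()
        if is_eop:
            self.locked = None
            self.queues.move_to_end(conn)         # round-robin among packets
        else:
            self.locked = conn
        return cell
```

Note that if the locked queue is momentarily empty
this sketch falls back to serving other queues,
which is itself a design choice worth examining.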
In our case,
such a policy would probably reduce the number of VC's per VP
that a bufferless routing filter would need.
How important is that?
Does it have any visible effect anywhere?
Another method to reduce the number of VC's per VP
is to use a buffered routing filter,
as discussed in project topic 2.
Which of the two methods gives "better" results for less cost?
Last updated: March 1998, by M. Katevenis.