How to Simulate Hierarchical Star Topology Projects Using NS2

Simulating a Hierarchical Star Topology in NS2 involves building a multi-tiered star topology in which smaller star networks are interconnected through higher-level star networks. In this hierarchical structure:

  • Lower-level star networks have central nodes (switches/routers) that handle the traffic between their local nodes.
  • Upper-level star networks interconnect the central nodes of the lower-level star networks.

This topology is useful for representing networks in which local devices communicate with a nearby hub (star) and several such local hubs are connected to a higher-level hub.

Steps to Simulate Hierarchical Star Topology in NS2

Step 1: Understand Hierarchical Star Topology

In a Hierarchical Star Topology:

  • Lower-level star topologies connect local devices to a central switch or router.
  • The upper-level star topology connects the central nodes of the lower-level stars to a higher-level central node (hub or core switch).
  • Traffic between devices in different lower-level star topologies passes through the upper-level hub.

Step 2: Design the Network

We will simulate a two-level Hierarchical Star Topology in which:

  • Three lower-level star topologies are created.
  • A higher-level star topology connects the central nodes of the three lower-level stars.
  • Each lower-level star has a central switch or router with several end devices attached to it.
  • The central nodes of the lower-level stars are connected to a core switch in the upper-level star topology (a loop-based sketch of this layout follows the list).
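
The short TCL sketch below shows how this layout could also be built with loops instead of naming every node by hand. The array names (center, dev) and the counts of three stars with three end devices each are illustrative assumptions; the explicitly named version used in this tutorial appears in Step 3.

# Sketch: build the two-level hierarchical star with loops
# (assumption: 3 stars with 3 end devices each; names are illustrative)
set ns [new Simulator]
set core_switch [$ns node]

for {set i 1} {$i <= 3} {incr i} {
    set center($i) [$ns node]                      ;# central node of star $i
    $ns duplex-link $center($i) $core_switch 10Mb 20ms DropTail
    for {set j 1} {$j <= 3} {incr j} {
        set dev($i,$j) [$ns node]                  ;# end device $j of star $i
        $ns duplex-link $center($i) $dev($i,$j) 1Mb 10ms DropTail
    }
}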

Step 3: Create an NS2 TCL Script for Simulating the Hierarchical Star Topology

Below is an NS2 TCL script that simulates a Hierarchical Star Topology in which three lower-level star topologies are connected via a core switch.

Example: Hierarchical Star Topology Simulation in NS2

# Create a new NS2 simulator object
set ns [new Simulator]

# Open trace and NAM files before building the topology so that
# link-level events are recorded
set tracefile [open "hierarchical_star_topology.tr" w]
$ns trace-all $tracefile
set namfile [open "hierarchical_star_topology.nam" w]
$ns namtrace-all $namfile

# Create the core switch for the upper-level star topology
set core_switch [$ns node]

# Create the central nodes for the lower-level star topologies
set star1_center [$ns node]   ;# Central node for Star 1
set star2_center [$ns node]   ;# Central node for Star 2
set star3_center [$ns node]   ;# Central node for Star 3

# Create end devices for Star 1
set star1_node1 [$ns node]    ;# End device 1 in Star 1
set star1_node2 [$ns node]    ;# End device 2 in Star 1
set star1_node3 [$ns node]    ;# End device 3 in Star 1

# Create end devices for Star 2
set star2_node1 [$ns node]    ;# End device 1 in Star 2
set star2_node2 [$ns node]    ;# End device 2 in Star 2
set star2_node3 [$ns node]    ;# End device 3 in Star 2

# Create end devices for Star 3
set star3_node1 [$ns node]    ;# End device 1 in Star 3
set star3_node2 [$ns node]    ;# End device 2 in Star 3
set star3_node3 [$ns node]    ;# End device 3 in Star 3

# Connect the lower-level star central nodes to the core switch (upper-level star)
$ns duplex-link $star1_center $core_switch 10Mb 20ms DropTail
$ns duplex-link $star2_center $core_switch 10Mb 20ms DropTail
$ns duplex-link $star3_center $core_switch 10Mb 20ms DropTail

# Connect the lower-level star devices to their respective central nodes
# Links for Star 1
$ns duplex-link $star1_center $star1_node1 1Mb 10ms DropTail
$ns duplex-link $star1_center $star1_node2 1Mb 10ms DropTail
$ns duplex-link $star1_center $star1_node3 1Mb 10ms DropTail

# Links for Star 2
$ns duplex-link $star2_center $star2_node1 1Mb 10ms DropTail
$ns duplex-link $star2_center $star2_node2 1Mb 10ms DropTail
$ns duplex-link $star2_center $star2_node3 1Mb 10ms DropTail

# Links for Star 3
$ns duplex-link $star3_center $star3_node1 1Mb 10ms DropTail
$ns duplex-link $star3_center $star3_node2 1Mb 10ms DropTail
$ns duplex-link $star3_center $star3_node3 1Mb 10ms DropTail

# Attach UDP agents to the end devices for communication
set udp1 [new Agent/UDP]
set udp2 [new Agent/UDP]
$ns attach-agent $star1_node1 $udp1
$ns attach-agent $star3_node1 $udp2

# Attach Null agents to act as sinks at the opposite nodes
set null1 [new Agent/Null]
set null2 [new Agent/Null]
$ns attach-agent $star3_node1 $null1
$ns attach-agent $star1_node1 $null2

# Connect the UDP agents to the respective sinks
$ns connect $udp1 $null1
$ns connect $udp2 $null2

# Create CBR traffic from Star 1 to Star 3 and vice versa
set cbr1 [new Application/Traffic/CBR]
$cbr1 set packetSize_ 512
$cbr1 set interval_ 0.1
$cbr1 attach-agent $udp1

set cbr2 [new Application/Traffic/CBR]
$cbr2 set packetSize_ 512
$cbr2 set interval_ 0.1
$cbr2 attach-agent $udp2

# Start the traffic flows
$ns at 1.0 "$cbr1 start"
$ns at 1.5 "$cbr2 start"

# Define the finish procedure to close files and start NAM
proc finish {} {
    global ns tracefile namfile
    $ns flush-trace
    close $tracefile
    close $namfile
    exec nam hierarchical_star_topology.nam &
    exit 0
}

# Finish the simulation after 10 seconds
$ns at 10.0 "finish"

# Run the simulation
$ns run

Step 4: Explanation of the Script

  1. Network Setup:
    • A core switch connects the three lower-level star topologies to form the upper-level star topology.
    • Each lower-level star topology includes a central node connected to three end devices (hosts).
    • The central nodes of the three lower-level star topologies are connected to the core switch.
  2. Communication Setup:
    • UDP agents are attached to end devices in different lower-level star topologies to simulate communication between distinct stars.
    • CBR (Constant Bit Rate) traffic is generated between the devices in Star 1 and Star 3 via the core switch.
  3. Link Characteristics:
    • The links between the core switch and the lower-level central nodes are configured with higher bandwidth (10Mb) and longer delay (20ms) to represent the backbone connections.
    • The links between the central nodes and the end devices use lower bandwidth (1Mb) and shorter delay (10ms) to mimic local connections (a short queue-configuration sketch follows this list).
  4. Tracing and Visualization:
    • A trace file (hierarchical_star_topology.tr) records all network events, including packet transmissions, receptions, and drops.
    • A NAM file (hierarchical_star_topology.nam) is generated to visualize the network topology and traffic flows.
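
If you want congestion to produce visible drops at the DropTail queues in the trace, you can optionally cap the queue length on the backbone links. The lines below are a minimal sketch; queue-limit is a standard NS2 call, but the limit of 20 packets is an arbitrary value chosen purely for illustration. They would be added right after the backbone duplex-link calls in the script above.

# Optional: limit the DropTail queue on each backbone link so that
# congestion produces visible packet drops in the trace
# (assumption: a limit of 20 packets is an illustrative value)
$ns queue-limit $star1_center $core_switch 20
$ns queue-limit $star2_center $core_switch 20
$ns queue-limit $star3_center $core_switch 20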

Step 5: Run the Simulation

  1. Save the script as hierarchical_star_topology.tcl.
  2. Execute the script in NS2:

ns hierarchical_star_topology.tcl

This will generate two files:

  • hierarchical_star_topology.tr: A trace file that records packet-level information.
  • hierarchical_star_topology.nam: A NAM file for visualizing the network in NAM.

Step 6: Visualize the Simulation Using NAM

To visualize the Hierarchical Star Topology in NAM:

nam hierarchical_star_topology.nam

In NAM, we will observe:

  • The core switch connecting the three lower-level star topologies.
  • Each lower-level star with its central node and several end devices.
  • Packet transmissions between the devices in Star 1 and Star 3 via the core switch.

Step 7: Analyze the Trace File

The trace file (hierarchical_star_topology.tr) records all network events, such as:

  • Packet transmissions and receptions between nodes.
  • Packet drops or delays due to network congestion or link conditions.

We can use tools such as AWK, Python, or custom scripts to analyze the trace file and extract key metrics (a minimal TCL sketch follows this list), such as:

  • Packet delivery ratio (PDR).
  • End-to-end delay between nodes in different star topologies.
  • Network throughput through the core switch.
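
As one example, the standalone TCL sketch below computes the packet delivery ratio for the CBR flow from Star 1 to Star 3 by scanning the trace file. It assumes the standard NS2 wired trace format and that nodes are numbered in creation order (so star1_node1 is node 4 and star3_node1 is node 10); adjust the node IDs if your script differs. Run it with tclsh pdr.tcl after the simulation has produced the trace file.

# pdr.tcl -- minimal sketch: packet delivery ratio for the Star 1 -> Star 3 CBR flow
# Assumes the standard NS2 wired trace format:
#   event time from_node to_node pkt_type pkt_size flags flow_id src dst seq pkt_id
# Assumes nodes are numbered in creation order: star1_node1 = 4, star3_node1 = 10
set src_node 4
set dst_node 10
set sent 0
set received 0

set fp [open "hierarchical_star_topology.tr" r]
while {[gets $fp line] >= 0} {
    set f [regexp -all -inline {\S+} $line]    ;# split the line into fields
    if {[llength $f] < 12} { continue }
    set event [lindex $f 0]
    set from  [lindex $f 2]
    set to    [lindex $f 3]
    set ptype [lindex $f 4]
    if {$ptype ne "cbr"} { continue }
    if {$event eq "+" && $from == $src_node} { incr sent }      ;# enqueued at the source
    if {$event eq "r" && $to == $dst_node}   { incr received }  ;# received at the destination
}
close $fp

if {$sent > 0} {
    puts [format "CBR packets sent: %d  received: %d  PDR: %.2f%%" \
        $sent $received [expr {100.0 * $received / $sent}]]
}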

Step 8: Enhance the Simulation

Here are a few ways to extend or improve the simulation:

  1. Add More Lower-Level Star Topologies: Extend the hierarchical star topology by adding more lower-level stars connected to the core switch.
  2. Simulate Link Failures: Introduce link or node failures to observe how the network performs when some links are down (see the sketch after this list).
  3. Dynamic Traffic: Introduce different kinds of traffic (e.g., FTP, HTTP) between the nodes to simulate more realistic scenarios.
  4. Performance Metrics: Analyze performance metrics such as latency, throughput, and packet loss under different traffic loads or failure conditions.
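
As a sketch of point 2, the lines below use the standard NS2 network-dynamics calls to take the Star 1 backbone link down at 3.0 s and bring it back at 6.0 s, which interrupts the CBR flows between Star 1 and Star 3; the times and the chosen link are illustrative assumptions. Dynamic routing (rtproto DV) is enabled so the simulator reacts to the change, although in a pure star topology there is no alternate path, so traffic crossing that link is simply dropped while it is down. These lines would be added to the main script before the $ns run call.

# Sketch: schedule a failure on the Star 1 backbone link
# (assumption: the failure window 3.0-6.0 s is an illustrative choice)
$ns rtproto DV                                        ;# enable dynamic (distance-vector) routing
$ns rtmodel-at 3.0 down $star1_center $core_switch    ;# link goes down at t = 3.0 s
$ns rtmodel-at 6.0 up   $star1_center $core_switch    ;# link comes back up at t = 6.0 s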

This process has covered the core concepts essential to simulating and analyzing Hierarchical Star Topology projects using the NS2 simulation tool. We are ready to provide further details on this topology based on your requirements.

We conduct simulations of Hierarchical Star Topology Projects using the NS2 tool, catering to scholars at all academic levels. For specialized support, visit phdprime.com, where our expertise is dedicated to helping students succeed.
