Datacenter Diagram School Project

trev19

Nov 6, 2016
I have been assigned the role of network manager for a school project. The group I'm in has been tasked with designing a network infrastructure, including policies and configurations. I will be creating a network diagram and am curious whether what I have is good or terrible. I've researched other network diagrams and have somewhat of an understanding of what it should look like. It's a rough draft, and I would appreciate feedback.

In the diagram there are three identical sites: L.A., Orlando, and Newark. Each site should have Web, FTP, File, Email, DNS, and Print servers. The sites must have 99.999% uptime, so I've attempted, to the best of my knowledge, to implement redundancy everywhere I could. There is no limit to the amount of money this can cost.
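For context, 99.999% uptime allows only about five minutes of downtime per year. Here's a quick sketch of that arithmetic (just the standard availability math, not part of the assignment itself):

```python
# Sketch: how much downtime a given availability target allows per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

for availability in (0.99, 0.999, 0.9999, 0.99999):
    downtime_min = MINUTES_PER_YEAR * (1 - availability)
    print(f"{availability:.3%} uptime -> about {downtime_min:.1f} minutes of downtime per year")
```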

Here's a link to the diagram in google docs: https://docs.google.com/document/d/e/2PACX-1vRmOpDNY-U34XY_NkqTaN7w-KUJCKi_sN1EKViTd9qrXSCGuO2rU_-zLrjn58SA5PcmVUQpIFNrqCPi/pub
 
Solution
Workstations generally need only a single network connection; they are not usually "mission critical", since a user can move to another machine. Single connectivity, with the workstations distributed across multiple switches, would be appropriate. I think you should label the devices connected to the web servers as load balancers.
You probably DON'T want a flat address space. Workstations should be on their own subnet, controlled by your Domain server (DHCP host). I would also recommend adding a pair of network devices labeled as load balancers in front of your multiple app servers.
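To illustrate the non-flat addressing point, here is a rough sketch of carving each site into role-based subnets with Python's ipaddress module. The 10.x.0.0/16 blocks and role names are just illustrative assumptions, not taken from your diagram:

```python
import ipaddress

# Hypothetical per-site allocation: each site gets a /16 out of 10.0.0.0/8,
# then carved into role-specific /24s instead of one flat address space.
SITES = {"LA": "10.1.0.0/16", "Orlando": "10.2.0.0/16", "Newark": "10.3.0.0/16"}
ROLES = ["workstations", "servers", "dmz", "management"]

for site, block in SITES.items():
    subnets = ipaddress.ip_network(block).subnets(new_prefix=24)  # generator of /24s
    for role, subnet in zip(ROLES, subnets):
        print(f"{site:<8} {role:<13} {subnet}")
```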
To meet your five 9s uptime, you would also want at least two of every server type, with some kind of cluster or failover between them (the availability sketch below shows why pairing helps).
Showing the ISP connected only to LA is probably not accurate. A large organization would have ISP1 and ISP2 for either load balancing or failover, and each office would have the same. Your inter-site links would probably carry a note that they are virtual links over public connectivity.
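Here is a rough sketch of why the paired servers and dual ISPs matter. It assumes independent failures and instant failover, which is optimistic, and the 99.9% single-component figure is only an example:

```python
# Sketch: availability of n redundant components in parallel,
# assuming independent failures and instant failover (optimistic).
def parallel_availability(single: float, n: int) -> float:
    return 1 - (1 - single) ** n

SINGLE = 0.999  # hypothetical 99.9%-available server or ISP link
MINUTES_PER_YEAR = 365 * 24 * 60

for n in (1, 2, 3):
    a = parallel_availability(SINGLE, n)
    downtime_min = MINUTES_PER_YEAR * (1 - a)
    print(f"{n} in parallel: {a:.6f} available, ~{downtime_min:.1f} min downtime/year")
```

With those assumptions, a single 99.9% component misses five 9s by a wide margin, while a pair of them already lands under a minute of downtime per year.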

 
Not sure what the correct answer to this is any more. It used to be that you had data centers at each office, but that is seldom the case anymore. A data center requires very special things like power generators and specialized cooling. Buildings designed for office space generally do not have these, and it is much less expensive not to add them; it is no longer cost effective to put data centers in office buildings.

So most of the time the offices have the end devices and not much else. That makes it simpler to scale, because two data centers can support any number of offices.

Then, to make this even more complex, vendors like Microsoft have started selling their Office product as a service rather than selling the software, so you just use Microsoft's Outlook servers in their data centers. The same is true for hosting services. Most of the time you outsource the data center function to companies like Akamai, and they handle all the actual server redundancy, backups, etc. The end customer just does the unique design and does not worry about how the data center and servers really work.

From an educational perspective I can see assuming that you run the data centers, but in the real world everything is outsourced. The design would be offices connected to some "cloud", with the rest of the functions residing on an outsourced vendor's infrastructure that is a black box.
 
I am not sure what benefit you think you are getting from the four cross-connected switches on the DMZ side of your diagram, unless one set of those is not switches. One of the switches on the bottom side has no devices connected. I think you are over-engineering.
 
I was going for some sort of redundancy, but I'm not really an expert at all this. We're supposed to have 99.999% uptime, so I was really going for it. You're right about over-engineering; I've cut the number of switches in the DMZ to two with redundant connections. Do you think that adding a second NIC for each of the workstations would be overkill? That was supposed to be the point of the second switch in the bottom section that wasn't connected to anything.
 