By Uwe Meding
How to build a cloud computing development environment using commercial off-the-shelf hardware and open-source software. This is a small but powerful setup for cloud-computing “simulations” and for exploring different system configurations. It provides an environment for most enterprise software development and deployment needs:
- web server + several web services
- distributed application servers
- database server
- shared file infrastructure
- small Hadoop cluster
Design
The two critical aspects we need to keep simple are the hardware and the networking setup. Either one can be improved later, or in deployment, without affecting the core purpose of the data center. The actual choice of motherboard and other hardware is not that critical, but it should be in the “desktop” or “server” class. One of the keys is to select components that are close to the mainstream: they tend to be more reliable, less fussy, and generally have better support.
[Figure: Data center overview]
All systems have two NICs so that we have two independent network paths. Use two independent network switches to aggregate each network. (If your motherboard offers only one NIC, simply add an additional network card.)
The two networking paths are great for separating data access concerns and for performance.
For example:
- the database server can serve data across the internal network, whereas internet traffic is handled across the external network.
- the Hadoop cluster uses the internal network to run, and the external network to present its results.
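As a concrete sketch, the two paths can be assigned to separate subnets on each server. The interface names and addresses below are assumptions, not prescriptions:

```shell
# Hypothetical addressing -- adjust to your own subnets.
# eth0 carries the internal network (database / Hadoop traffic).
ip addr add 10.0.0.11/24 dev eth0
ip link set eth0 up

# eth1 carries the external network (internet-facing traffic).
ip addr add 192.168.1.11/24 dev eth1
ip link set eth1 up
ip route add default via 192.168.1.1 dev eth1
```

Keeping the default route on the external interface means only internet-bound traffic leaves through it; everything addressed to the internal subnet stays on the internal switch.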
Database systems
The database systems are built with the same hardware as the other servers, except that they have three NICs.
[Figure: Database system connections]
Two database systems are a sufficient (experimental) setup for most enterprise systems.
The easy way to create a network between the two systems is using a cross-over cable. That way we do not have to add an additional switch or burden one of the other switches with additional network traffic.
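A /30 subnet is a natural fit for the cross-over link, since it has exactly two usable host addresses. A minimal sketch, assuming the third NIC is eth2 and the addresses are ours to pick:

```shell
# On database server A: the third NIC carries the private link.
ip addr add 10.1.1.1/30 dev eth2
ip link set eth2 up

# On database server B:
ip addr add 10.1.1.2/30 dev eth2
ip link set eth2 up

# From server A, verify the private link:
ping -c 3 10.1.1.2
```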
This separation of traffic puts us in a great position to experiment with:
- high-availability (HA) database setups
- redundant database/file-server setups
- fail-over tests
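A common first experiment is MySQL replication between the two systems over the private link. A minimal my.cnf sketch, where the server IDs and the private-link addresses are assumptions:

```ini
# Primary (database server A), e.g. /etc/mysql/my.cnf
[mysqld]
server-id    = 1
log_bin      = mysql-bin
bind-address = 10.1.1.1    # listen on the private cross-over link only

# Replica (database server B) -- a separate file on the other machine
[mysqld]
server-id    = 2
bind-address = 10.1.1.2
```

From here, a fail-over test amounts to stopping mysqld on the primary (or pulling the cable) and promoting the replica.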
Software
- Web server: Apache httpd, PHP, etc.
- Application servers: Java, Tomcat, JBoss, etc.
- Compute servers: Hadoop software
- Storage servers: database software (MySQL, etc.), file servers (NFS, etc.)

How the roles use the two networks:
- The storage server offers the database connection on the internal network, and the file storage on the external network.
- The application servers use the database connectivity on the internal network, and serve the application onto the external network.
- The compute servers connect to the database and manage all Hadoop-internal traffic on the internal network.
- The data traffic over the cross-over cable is “private” as far as the database systems are concerned:
  - there is no need to burden one of the other switches with this traffic
  - a good, short cable ensures that we transport data at near the maximum speed of the network cards
- Using the load-balancing feature, we can scale application services across multiple systems.
- Using fail-over, we ensure high availability of the servers and software.
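The load-balancing point above could be realized with Apache httpd's mod_proxy_balancer in front of the two application servers. A sketch, assuming hypothetical hostnames and port:

```apache
# Requires mod_proxy, mod_proxy_http, mod_proxy_balancer, mod_lbmethod_byrequests
<Proxy "balancer://appcluster">
    BalancerMember "http://app1.internal:8080"
    BalancerMember "http://app2.internal:8080"
</Proxy>
ProxyPass        "/app" "balancer://appcluster/"
ProxyPassReverse "/app" "balancer://appcluster/"
```

Because the application servers sit on the internal network, only the web server needs to be reachable from outside.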
[Figure: Front view]
- 1 web server
- 2 Java application servers
- 5 Hadoop compute servers
- 2 storage servers (database and files)
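On the Hadoop side, keeping the cluster on the internal network is mostly a matter of pointing the filesystem at an internal hostname. A minimal core-site.xml sketch, where the hostname and port are assumptions:

```xml
<!-- $HADOOP_HOME/etc/hadoop/core-site.xml -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop1.internal:9000</value>
  </property>
</configuration>
```

Listing the five compute nodes by their internal hostnames in the workers file keeps the Hadoop-internal traffic on the internal switch as well.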
Shopping list
- server systems: Gigabyte motherboard, 8 GB RAM, 500 GB hard drive, 1 additional NIC
- storage servers: Gigabyte motherboard, 16 GB RAM, 4 TB storage, 2 additional NICs
- firewall system: Jetway motherboard with 2 NICs
- Ancillary items like cables, enclosures etc.