Sunday, November 28, 2010

CHINA'S 18-MINUTE INTERNET "HIJACK": HOW? WHY? WHAT DOES IT MEAN?

Evidence has recently been uncovered showing that sometime in April, a government-owned telecoms company (China Telecom) successfully routed 15% of the world's Internet traffic through its routers and servers for 18 minutes. This included communications to and from government agencies (especially in the U.S.), Internet shopping traffic, ...
The alleged "theft" of Internet traffic was discovered by a communications company outside Washington, D.C., where computer network engineers monitor Internet traffic.

HOW DID IT HAPPEN?
The Internet operates on a trust-based system, both in its infrastructural architecture and in terms of usage. Electronic routers direct the traffic flow, ensuring the shortest path between any two computers anywhere in the world that want to exchange information.
A little illustration here: let's say I have 3 uncles in the federal ministry and I want to send a letter directly to the President of Nigeria. I would mentally calculate which of my 3 uncles is closest to the President by the number of people between them and the President, then I "forward" my letter to that uncle. That uncle in turn calculates his own distance to the President by the ranks of those he knows and forwards my letter to the next person he thinks is closer to the President... this continues until my letter is safely in the hands of the President.
That's how routers "forward" information on the Internet, only it happens in microseconds.
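
To make the analogy concrete, here is a minimal Python sketch of that greedy "forward to whoever claims to be closest" idea. The neighbours, names, and hop counts are invented for illustration; real routers run full routing protocols rather than anything this simple.

```python
# Toy illustration of the "forward to whoever claims to be closest" idea
# from the analogy above. Names and hop counts are made up for clarity.

def next_hop(neighbours, destination):
    """Pick the neighbour that claims the fewest hops to the destination."""
    return min(neighbours, key=lambda n: n["claimed_hops_to"][destination])

# Each "uncle" (neighbour router) advertises how far it thinks it is from
# the destination. The sender simply trusts whatever is advertised.
neighbours = [
    {"name": "uncle_A", "claimed_hops_to": {"president": 4}},
    {"name": "uncle_B", "claimed_hops_to": {"president": 2}},
    {"name": "uncle_C", "claimed_hops_to": {"president": 7}},
]

print(next_hop(neighbours, "president")["name"])  # -> uncle_B
```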

So, essentially, the 18-minute hijack happened when routers in China belonging to China Telecom began signaling to other routers on the Internet that they could provide the quickest path between different computers.
For 18 minutes, the traffic of 35,000 to 50,000 computer networks elsewhere in the world flowed toward China before being routed on to its final destinations. China Telecom had created a massive detour.
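
The sketch below gives a simplified, BGP-flavoured picture of why such an announcement diverts traffic. Real BGP best-path selection weighs many criteria; this toy version keeps only "prefer the shortest advertised path", which is the part of the trust model that was abused, and the prefixes and AS names are placeholders.

```python
# Simplified sketch of why a false routing announcement diverts traffic.
# Only "prefer the shortest advertised path" is modelled; prefixes and
# AS names below are placeholders, not real networks.

routes = {}  # destination prefix -> best announcement seen so far

def announce(prefix, origin, as_path):
    """Accept an announcement if it claims a shorter path than the current best."""
    current = routes.get(prefix)
    if current is None or len(as_path) < len(current["as_path"]):
        routes[prefix] = {"origin": origin, "as_path": as_path}

# Legitimate route learned earlier.
announce("198.51.100.0/24", origin="AS_legit", as_path=["AS100", "AS200", "AS_legit"])

# A hijacker announces the same prefix with a shorter path...
announce("198.51.100.0/24", origin="AS_hijack", as_path=["AS_hijack"])

# ...and, because announcements are trusted, traffic now flows to the hijacker.
print(routes["198.51.100.0/24"]["origin"])  # -> AS_hijack
```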

Rodney Joffe, Senior Vice President and Senior Technologist at Neustar Inc., said: "They, all of a sudden, began announcing the fact that they were an optimal path to about 15 percent of the destinations on the Internet, that, in fact, they were a way to get to a large number of destinations on the Internet, when, in fact, they were not. We have never seen that before on this scale ever."

WHY DID IT HAPPEN?
The mere fact that the incident didn't sever communications while it lasted suggests a calculated attempt to intercept, capture, and later examine the information.

Security expert Dmitri Alperovitch, VP of threat research at McAfee, says that this happens "accidentally" a few times a year, but this time it was different: the China Telecom network absorbed all the data and returned it without any significant delay. Previously, this kind of accident would have resulted in communication problems, which leads experts to believe this wasn't an accident but a deliberate attempt to capture as much data as possible.

WHAT DOES THIS SAY?

A lot can be captured in 18 minutes. When all the communications from tens of thousands of computer networks were routed to China, that included all the Web traffic, e-mail, and instant messages to and from dot.mil -- that's the Department of Defense -- and dot.gov -- those are U.S. government departments. The U.S. Senate and NASA also had all their traffic diverted.

Companies like Dell, Yahoo!, Microsoft and IBM had their data diverted by China Telecom, too. On that day in April, officers logging into a Pentagon Web site ended up looking at an image that came to their screen via China.


Information could have been gathered which, after much examination, could be used to craft a virus to be released into such huge networks. The fact that traffic could be intentionally diverted to wherever it is maliciously wanted opens our eyes to possibilities such as the data actually being altered (a man-in-the-middle attack) before being forwarded to its destination; fabricated rogue mails could be concocted to appear as though sent from someone/somewhere else; usernames could be masqueraded... the possibilities are just limitless!

WHAT CAN BE DONE

Private networks and networks that provide essential services, from life-critical services such as power grids, water, traffic, and product-mixing networks, to trusted services such as internal government mail, military, and organizational services, should be hosted on servers kept away from the Internet. In the worst-case scenario, if these servers must have any connection to the Internet at all, DMZs (demilitarized zones) should be used as a sort of buffer.

Network administrators and security experts have to harden their organizations' networks.
Network security software, such as Microsoft's Forefront Threat Management Gateway, should also see wider use.

The most effective solution I see is for stakeholders and computer professionals around the world to police the Internet's trust system far more strictly, or to fashion an altogether new network of networks that is not as trust-based as the present standards. It might be a long time coming, but maybe we just go back to the base, how it all began; a revolution might just be underway.
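
As one illustration of what "policing" the trust system could look like, here is a minimal sketch in which a router only accepts announcements whose claimed origin is listed in a registry for that prefix, similar in spirit to route-origin validation. The registry contents and AS names are made up for this example.

```python
# Sketch of checking a routing announcement's claimed origin against a
# registry of who is actually allowed to originate each prefix. The
# registry entries and AS names here are illustrative placeholders.

ALLOWED_ORIGINS = {
    "198.51.100.0/24": {"AS_legit"},
    "203.0.113.0/24": {"AS_other"},
}

def accept_announcement(prefix, origin):
    """Reject announcements whose origin is not registered for the prefix."""
    allowed = ALLOWED_ORIGINS.get(prefix)
    if allowed is None:
        return False  # unknown prefix: be conservative and drop it
    return origin in allowed

print(accept_announcement("198.51.100.0/24", "AS_legit"))   # True
print(accept_announcement("198.51.100.0/24", "AS_hijack"))  # False
```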


 

Tuesday, November 9, 2010

Choosing a Datacenter compute model

Compute models refer to the infrastructures with which the IT department or datacenter chooses to render, deliver, or deploy a particular service.
Certain services/applications may be good candidates for central management or central distribution, while others are better managed locally; modern computing also offers hybrids of the two, not to mention the various subtypes of these models.

MODERN COMPUTE MODELS

1. TERMINAL SERVERS
In this model, the client is merely a display and input device. All computation is done centrally on the server, and all data is stored in a data center. Nothing is executed or persistent on the client. Usually, Remote Desktop Protocol (RDP) or Independent Computing Architecture (ICA) is used to push an image of the server-based application to a terminal viewer on the client.

2. VIRTUAL DESKTOP INFRASTRUCTURE (VDI)
As with Terminal Services, all computation and storage are centralized, with application images pushed over the network to the client via Remote Desktop Protocol (RDP) or other display protocols. The major difference is that VDI can offer each user their own complete virtual machine and customized desktop, including the OS, applications, and settings.

3. BLADE PCs
Much like server blades, blade PCs repartition the PC, leaving basic display, keyboard, and mouse functions on the client, and putting the processor, chipset, and graphics silicon on a small card (blade) mounted in a rack on a central unit. OS, application, and data storage are centralized in a storage array.
Unlike server blades, PC blades are built from standard desktop or mobile processors and chipsets. The central unit, which supports many individual blades, is secured in a data center or other IT-controlled space. In some cases, remote display and I/O are handled by dedicated, proprietary connections rather than RDP over the data network.



4. OS IMAGE STREAMING OR REMOTE OS BOOT
At startup, the client is essentially "bare metal," with no OS image installed locally. The OS image is streamed to the client over the network, where it executes locally using the client's own CPU and graphics. Application data is stored in a data center. The client is usually a PC with no hard drive, which uses RAM exclusively.

5. APPLICATION VIRTUALIZATION
The client OS is locally installed, but applications are streamed on demand from the server to the client, where they are executed locally.
Although the terms "streaming" and "application virtualization" are often used interchangeably, they are not the same thing. Streaming refers to the delivery model of sending the software over the network for execution on the client. Streamed software can be installed in the client OS locally or, in most cases, it is virtualized.

FACTORS TO CONSIDER IN SELECTING THE RIGHT MODEL
The factors listed below should be quantified for each model, with trade-offs made where necessary, and then contrasted to determine the optimal model for every unique service need (a small scoring sketch follows the list).

Performance
Security
Manageability
Mobility
Disaster recovery
Infrastructure cost
User customization
Remote network access
Remote access
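
As a rough illustration of how such a comparison might be quantified, here is a small Python sketch using invented weights and scores; each organization would substitute its own ratings per service and model.

```python
# Minimal sketch of weighting and comparing compute-model factors.
# The weights and scores below are invented placeholders for illustration.

WEIGHTS = {
    "performance": 3, "security": 3, "manageability": 2, "mobility": 1,
    "disaster_recovery": 2, "infrastructure_cost": 2,
    "user_customization": 1, "remote_access": 2,
}

# Example scores (1 = poor, 5 = excellent) for two hypothetical models.
SCORES = {
    "terminal_servers": {"performance": 3, "security": 5, "manageability": 5,
                         "mobility": 2, "disaster_recovery": 4,
                         "infrastructure_cost": 4, "user_customization": 2,
                         "remote_access": 5},
    "blade_pcs":        {"performance": 4, "security": 4, "manageability": 4,
                         "mobility": 1, "disaster_recovery": 3,
                         "infrastructure_cost": 2, "user_customization": 4,
                         "remote_access": 3},
}

def weighted_total(scores):
    """Sum each factor's score multiplied by its weight."""
    return sum(WEIGHTS[factor] * value for factor, value in scores.items())

for model, scores in SCORES.items():
    print(model, weighted_total(scores))
```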

Reference: Principled Technologies white paper titled "Understanding Alternative Compute Models"