
MCC 2011 Awardee



Saturday, December 25, 2010


On the first day of Christmas, computing gave to me, a root node in a B-tree.

On the second day of Christmas, computing gave to me,
2 SATA drives
and a root node in a B-tree.

On the third day of Christmas, computing gave to me,
3 French programmers
2 SATA drives
and a root node in a B-tree.

On the fourth day of Christmas, computing gave to me,
4 primary keys
3 French programmers
2 SATA drives
and a root node in a B-tree.

On the fifth day of Christmas, computing gave to me,
5 constructors
4 primary keys
3 French programmers
2 SATA drives
and a root node in a B-tree.

On the sixth day of Christmas, computing gave to me,
6 codes compiling
5 constructors
4 primary keys
3 French programmers
2 SATA drives
and a root node in a B-tree.

On the seventh day of Christmas, computing gave to me,
7 hackers attacking
6 codes compiling
5 constructors
4 primary keys
3 French programmers
2 SATA drives
and a root node in a B-tree.

On the eighth day of Christmas, computing gave to me,
8 drives formatting
7 hackers attacking
6 codes compiling
5 constructors
4 primary keys
3 French programmers
2 SATA drives
and a root node in a B-tree.

On the ninth day of Christmas, computing gave to me,
9 backups restoring
8 drives formatting
7 hackers attacking
6 codes compiling
5 constructors
4 primary keys
3 French programmers
2 SATA drives
and a root node in a B-tree.

On the tenth day of Christmas, computing gave to me,
10 files uploading
9 backups restoring
8 drives formatting
7 hackers attacking
6 codes compiling
5 constructors
4 primary keys
3 French programmers
2 SATA drives
and a root node in a B-tree.

On the eleventh day of Christmas, computing gave to me,
11 crackers cracking
10 files uploading
9 backups restoring
8 drives formatting
7 hackers attacking
6 codes compiling
5 constructors
4 primary keys
3 French programmers
2 SATA drives
and a root node in a B-tree.

On the twelfth day of Christmas, computing gave to me,
12 calling functions
11 crackers cracking
10 files uploading
9 backups restoring
8 drives formatting
7 hackers attacking
6 codes compiling
5 constructors
4 primary keys
3 French programmers
2 SATA drives
and a root node in a B-tree.
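For the programmers in the audience: the cumulative structure of the carol can be regenerated with a short Python loop (a sketch, with the lyrics taken from the verses above):

```python
# Regenerate the cumulative "days of Christmas" verses above.
ORDINALS = ["first", "second", "third", "fourth", "fifth", "sixth",
            "seventh", "eighth", "ninth", "tenth", "eleventh", "twelfth"]
GIFTS = ["a root node in a B-tree", "2 SATA drives", "3 French programmers",
         "4 primary keys", "5 constructors", "6 codes compiling",
         "7 hackers attacking", "8 drives formatting", "9 backups restoring",
         "10 files uploading", "11 crackers cracking", "12 calling functions"]

def verse(day: int) -> str:
    """Return the verse for a given day (1-12), gifts in reverse order."""
    lines = [f"On the {ORDINALS[day - 1]} day of Christmas, computing gave to me,"]
    lines += reversed(GIFTS[:day])
    if day > 1:
        lines[-1] = "and " + lines[-1]   # the first day's gift gets the "and"
    return "\n".join(lines)

print(verse(4))
```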

Sunday, November 28, 2010


Evidence recently surfaced showing that sometime in April, a government-owned telecoms company (China Telecom) successfully routed 15% of the world's internet traffic through its routers and servers for 18 minutes. This included communications to and from government agencies (especially in the U.S.), internet shopping traffic, and more.
The alleged "theft" of internet traffic was discovered by a communications company outside Washington, D.C., where computer network engineers monitor Internet traffic.

The internet operates on a trust-based system, both in its infrastructural architecture and in terms of usage. Electronic routers direct the traffic flow, ensuring the shortest path between any two computers anywhere in the world that wish to exchange information.
A little illustration: let's say I have 3 uncles in the federal ministry and I want to send a letter directly to the President of Nigeria. I would mentally work out which of my 3 uncles is closest to the President, by the number of people between them and the President, then "forward" my letter to that uncle. That uncle in turn calculates his own distance to the President through the ranks of those he knows, and forwards my letter to the next person he thinks is closer... this continues until my letter is safely in the hands of the President.
That's how routers "forward" information on the internet, only it happens in microseconds.
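The uncle analogy maps directly onto a greedy forwarding loop. Here is a toy sketch (all names and hop counts are invented for illustration; real routing protocols are far more elaborate):

```python
# Toy hop-by-hop forwarding, mirroring the uncle analogy above. Each node
# hands the letter to whichever neighbour *claims* to be closest to the
# President; real routers do the same with advertised route metrics.
NEIGHBOURS = {
    "me":        ["uncle_a", "uncle_b", "uncle_c"],
    "uncle_a":   ["minister"],
    "uncle_b":   ["me"],
    "uncle_c":   ["uncle_a"],
    "minister":  ["president"],
    "president": [],
}
# Each node's self-reported distance (in hops) to the President.
CLAIMED_HOPS_TO_PRESIDENT = {"me": 3, "uncle_a": 2, "uncle_b": 4,
                             "uncle_c": 3, "minister": 1, "president": 0}

def forward(start: str, dest: str) -> list[str]:
    """Greedily forward toward dest, blindly trusting each claimed distance."""
    path, node = [start], start
    while node != dest:
        # hand off to the neighbour claiming the fewest hops to the President
        node = min(NEIGHBOURS[node], key=CLAIMED_HOPS_TO_PRESIDENT.get)
        path.append(node)
    return path

print(forward("me", "president"))
```

The key word is *claims*: a node that advertises a falsely small distance pulls traffic through itself, which is essentially what the hijacked route announcements described below did.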

So, essentially, the 18-minute hijack happened when computer routers in China belonging to China Telecom began signaling to other routers on the Internet that they could provide the quickest path between different computers.
For 18 minutes, the traffic of 35,000 to 50,000 computer networks elsewhere in the world flowed toward China before being routed on to its final destinations. China Telecom had created a massive detour.

Rodney Joffe, senior vice-president and senior technologist at Neustar Inc., said: "They, all of a sudden, began announcing the fact that they were an optimal path to about 15 percent of the destinations on the Internet, that, in fact, they were a way to get to a large number of destinations on the Internet, when, in fact, they were not. We have never seen that before on this scale ever."

The mere fact that the incident didn't sever all communications while it lasted suggests a calculated attempt to intercept and capture information for later examination and inspection.

Security expert Dmitri Alperovitch, VP of threat research at McAfee, says that this happens "accidentally" a few times a year, but this time it was different: the China Telecom network absorbed all the data and returned it without any significant delay. Previously, this kind of accident would have resulted in communication problems, which leads experts to believe this wasn't an accident but a deliberate attempt to capture as much data as possible.


A lot can be captured in 18 minutes. When the communications of tens of thousands of computer networks were routed to China, that included all the Web traffic, e-mail, and instant messages to and from the U.S. Department of Defense and other U.S. government departments. The U.S. Senate and NASA also had all their traffic diverted.

Companies like Dell, Yahoo!, Microsoft and IBM had their data diverted by China Telecom, too. On that day in April, officers logging into a Pentagon Web site ended up looking at an image that came to their screen via China.

Information could have been gathered which, after much examination, could be used to craft a virus to be released into such huge networks. The fact that traffic could be intentionally diverted to wherever it is maliciously wanted opens the eyes to possibilities such as the data actually being altered (a man-in-the-middle attack) before being forwarded to its destination; fabricated rogue mails could be concocted to appear sent from someone or somewhere else; usernames could be masqueraded... the possibilities are just limitless!


Private networks and networks that provide essential services, from life-critical services such as power grids, water, traffic and product-mixing networks, to trusted services such as internal government mail, military and organizational services, should be hosted on servers distant from the internet. Worst-case scenario: if these servers must have any connection to the internet at all, DMZs (demilitarized zones) should be used as a sort of buffer.

Network administrators and security experts have to harden their organizations' networks, including wider use of network security software such as Microsoft's Forefront Threat Management Gateway.

The most efficient solution I see is for stakeholders and computer professionals around the world to put stronger policy controls around the internet's trust system, or to fashion altogether another network of networks that is simply not as trust-based as the present standards. It might be a long time coming, but maybe we just go back to the base, how it all began; a revolution might just be underway.


Tuesday, November 9, 2010

Choosing a Datacenter compute model

Compute models refer to the infrastructures with which the IT department or datacenter chooses to render, deliver, or deploy a particular service.
Certain services/applications may be right candidates for central management or central distribution, while others are just better managed locally; also existent in modern computing is a hybrid of the two, not to mention the various subtypes of these models.


1. Terminal Services (Server-Based Computing)
In this model, the client is merely a display and input device. All computation is done centrally on the server, and all data is stored in a data center. Nothing is executed or persistent on the client. Usually, Remote Desktop Protocol (RDP) or Independent Computing Architecture (ICA) is used to push an image of the server-based application to a terminal viewer on the client.

2. Virtual Desktop Infrastructure (VDI)
As with Terminal Services, all computation and storage are centralized, with application images pushed over the network to the client via Remote Desktop Protocol (RDP) or other display protocols. The major difference is that VDI can offer each user their own complete virtual machine and customized desktop, including the OS, applications, and settings.

3. Blade PCs
Much like server blades, blade PCs repartition the PC, leaving basic display, keyboard, and mouse functions on the client, and putting the processor, chipset, and graphics silicon on a small card (blade) mounted in a rack on a central unit. OS, application, and data storage are centralized in a storage array.
Unlike server blades, PC blades are built from standard desktop or mobile processors and chipsets. The central unit, which supports many individual blades, is secured in a data center or other IT-controlled space. In some cases, remote display and I/O are handled by dedicated, proprietary connections rather than RDP over the data network.

4. OS Image Streaming or Remote OS Boot
At startup, the client is essentially "bare metal," with no OS image installed locally. The OS image is streamed to the client over the network, where it executes locally using the client's own CPU and graphics. Application data is stored in a data center. The client is usually a PC with no hard drive, which uses RAM exclusively.

5. Application Streaming
The client OS is locally installed, but applications are streamed on demand from the server to the client, where they are executed locally.
Although the terms "streaming" and "application virtualization" are often used interchangeably, they are not the same thing. Streaming refers to the delivery model of sending the software over the network for execution on the client. Streamed software can be installed in the client OS locally or, in most cases, it is virtualized.

The factors listed below should be quantified for each model, trading off where necessary, and contrasted to determine the optimal model for every unique service need.

Disaster recovery
Infrastructure cost
User customization
Remote network access
Remote access
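One way to contrast the models against these factors is a simple weighted scoring sketch. All the model scores and example weights below are hypothetical placeholders, not measurements:

```python
# Score each compute model against the factors above, weight the factors
# per service, and pick the model with the highest weighted score.
# Scores run 1 (poor) to 5 (good) and are hypothetical placeholders.
FACTORS = ["disaster_recovery", "infrastructure_cost",
           "user_customization", "remote_access"]

MODELS = {
    "terminal_services": [5, 4, 1, 5],
    "vdi":               [5, 2, 5, 5],
    "blade_pc":          [4, 2, 4, 3],
    "os_streaming":      [3, 4, 3, 2],
    "app_streaming":     [3, 5, 4, 3],
}

def best_model(weights: dict[str, float]) -> str:
    """Return the model with the highest weighted score for this service."""
    def score(model: str) -> float:
        return sum(w * s for w, s in
                   zip((weights[f] for f in FACTORS), MODELS[model]))
    return max(MODELS, key=score)

# e.g. a service where per-user customization dominates:
print(best_model({"disaster_recovery": 1, "infrastructure_cost": 1,
                  "user_customization": 3, "remote_access": 1}))
```

The point is not the numbers themselves but the discipline: make the trade-offs explicit per service instead of picking one model for everything.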

Reference: Principled Technologies white paper, "Understanding alternative compute models".

Saturday, October 9, 2010

Time for a NAP

As a kid, I used to be cajoled into taking naps (or siesta, as my mum called it) in the afternoons. Now, as a systems engineer/administrator, I still need NAP, for my networks and the computers in them to meet certain compliance requirements for a healthy network. Did I hear you ask how?

NAP, as I know it now, stands for Network Access Protection. NAP is a new set of operating system components in Windows Server 2008, Windows Vista, and Windows XP Service Pack 3 that provides a platform for system-health-validated access to private networks. The NAP platform provides an integrated way of validating the health state of a network client that is attempting to connect to or communicate on a network, and of limiting the client's access until the health policy requirements have been met.

To control access to network resources based on the requesting computer's health status, the following functionalities need to be put in place:

- Health state validation: Determines whether the computers are compliant with health policy requirements.
- Network access limitation: Limits access for noncompliant computers.
- Automatic remediation: Provides necessary updates to allow a noncompliant computer to become compliant without user intervention.
- Ongoing compliance: Automatically updates compliant computers so that they adhere to ongoing changes in health policy requirements.

Windows Server 2008, Windows Vista, and Windows XP Service Pack 3 provide the following NAP enforcement methods:

- Internet Protocol security (IPsec) enforcement for IPsec-protected communications
- 802.1X enforcement for IEEE 802.1X-authenticated connections
- Virtual Private Network (VPN) enforcement for remote access VPN connections
- Dynamic Host Configuration Protocol (DHCP) enforcement for DHCP-based address configuration
- Terminal Server (TS) Gateway enforcement for TS Gateway connections

The Network Access Protection client-server architecture is depicted in the image below:

- NAP clients: Computers that support the NAP platform for system-health-validated network access or communication.
- NAP enforcement points (VPN servers, DHCP servers, network access devices): Computers or network access devices that provide access to a resource and that use NAP, or can be used with NAP, to require the evaluation of a NAP client's health state and provide restricted network access or communication. NAP enforcement points use a Network Policy Server (NPS) acting as a NAP health policy server to evaluate the health state of NAP clients, whether network access or communication is allowed, and the set of remediation actions that a noncompliant NAP client must perform. An example of a NAP enforcement point is:
  - Health Registration Authority (HRA): A computer running Windows Server 2008 and Internet Information Services (IIS) that obtains health certificates from a certification authority (CA) for compliant computers.
- NAP health policy servers: Computers running Windows Server 2008 and the NPS service that store health requirement policies and provide health state validation for NAP. NPS is the replacement for the Internet Authentication Service (IAS), the Remote Authentication Dial-In User Service (RADIUS) server and proxy provided with Windows Server 2003. NPS can also act as an authentication, authorization, and accounting (AAA) server for network access. When acting as a AAA server or NAP health policy server, NPS is typically run on a separate server for centralized configuration of network access and health requirement policies, as Figure 1 shows. The NPS service is also run on Windows Server 2008-based NAP enforcement points that do not have a built-in RADIUS client, such as an HRA or DHCP server. However, in these configurations, the NPS service acts as a RADIUS proxy to exchange RADIUS messages with a NAP health policy server.
- Health requirement servers: Computers that provide the current system health state for NAP health policy servers. For example, a health requirement server for an antivirus program tracks the latest version of the antivirus signature file.
- Active Directory® Domain Services: The Windows directory service that stores account credentials and properties and Group Policy settings. Although not required for health state validation, Active Directory is required for IPsec-protected communications, 802.1X-authenticated connections, and remote access VPN connections.
- Restricted network (some may choose to call this a DMZ, or demilitarized zone): A separate logical or physical network that contains:
  - Remediation servers: Computers that contain health update resources that NAP clients can access to remediate their noncompliant state. Examples include antivirus signature distribution servers and software update servers.
  - NAP clients with limited access: Computers that are placed on the restricted network when they do not comply with health requirement policies.


The NAP client uses the appropriate security or authentication protocol (SSL (Secure Sockets Layer), PEAP (Protected Extensible Authentication Protocol), EAP (Extensible Authentication Protocol), PPP (Point-to-Point Protocol)...), depending on the resource the client is trying to access, to create a protected session, send its current system health state to the HRA, and request a health certificate. The HRA likewise uses the appropriate protocol to send back either remediation instructions (if the NAP client is noncompliant) or a health certificate (if compliant).

While the NAP client has unlimited access to the intranet, it accesses the remediation server to ensure that it remains compliant. For example, the NAP client periodically checks an antivirus server to ensure that it has the latest antivirus signature file or a software update server, such as Windows Update Services, to ensure that it has the latest operating system updates.
If the NAP client has limited access, it can communicate with the remediation server to become compliant, based on instructions from the NAP health policy server. For example, if during the health validation process the NAP health policy server determined that the NAP client does not have the most current antivirus signature file, the NAP health policy server instructs the NAP client to update its local signature file with the latest file that is stored on a specified antivirus server.
The HRA sends RADIUS(Remote authentication dial-in user service) messages to the NAP health policy server that contain the NAP client's system health state. 
The NAP health policy server sends RADIUS messages to:
- Indicate that the NAP client has unlimited access because it is compliant. Based on this response, the HRA obtains a health certificate and sends it to the NAP client.
- Indicate that the NAP client has limited access until it performs a set of remediation functions. Based on this response, the HRA does not issue a health certificate to the NAP client.
Because the HRA in Windows Server 2008 does not have a built-in RADIUS client, it uses the NPS service as a RADIUS proxy to exchange RADIUS messages with the NAP health policy server.

When performing network access validation for a NAP client, the NAP health policy server might have to contact a health requirement server to obtain information about the current requirements for system health. For example, the NAP health policy server might have to contact an antivirus server to check for the version of the latest signature file or to contact a software update server to obtain the date of the last set of operating system updates.
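The health-validation decision described above can be sketched as a toy policy check. The requirement items and values here are hypothetical placeholders for whatever the health requirement servers actually supply:

```python
# Toy sketch of the NAP health-validation decision: a health policy
# server compares a client's reported state against current requirements.
# Noncompliant clients get remediation instructions instead of a
# health certificate. Requirement names and values are hypothetical.
REQUIREMENTS = {                       # supplied by health requirement servers
    "antivirus_signature": 20101009,   # latest signature file version
    "os_update_level": 3,              # e.g. Service Pack 3
}

def validate(client_state: dict) -> tuple[bool, list[str]]:
    """Return (compliant, remediation instructions)."""
    remediation = []
    for item, required in REQUIREMENTS.items():
        if client_state.get(item, 0) < required:
            remediation.append(f"update {item} to {required}")
    return (not remediation, remediation)

ok, todo = validate({"antivirus_signature": 20100901, "os_update_level": 3})
print("health certificate issued" if ok else f"restricted access: {todo}")
```

In the real platform this exchange rides over RADIUS between the enforcement point and the NPS server; the sketch only shows the compare-and-remediate logic.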

So whether you're instructing your kids or doing your million-naira/dollar job of instructing your network, taking a NAP really does pay off.


Friday, September 3, 2010

...Now Let's Talk Licensing

When was the last time you bought a hard disk drive (HDD)? Or even a memory module, a USB disk, even a whole system? You just paid cash and left the store. It's that simple when buying a piece of hardware. Whether you're paying cash or using a debit card, the point is that the entire value of that hardware is represented in that liquid value. That's hardware for you.
Software, on the other hand, is intellectual property. You may purchase the disk/CD which contains a representation of that intellectual value, but you can never really "PAY" for the intellectual value of the software, hence the purchase of LICENSES to install and use it. A license bestows limited rights to use the software, but it also imposes restrictions and threatens serious penalties when license violations occur. To get software, you purchase licenses according to the mode provided by the owner/proprietor.
But here's the thing: with the pace at which hardware has evolved and keeps evolving, and with the advent of virtualized application distribution and dual- and quad-core processors, licensing seems to be a very dicey issue these days for the IT department, potentially leading to a myriad of legal issues in cases of license violations (and consequently millions of naira/dollars to settle lawsuits), and to overspending in cases where licenses purchased are not fully utilized.
Initially, software vendors sold licenses for their software on a per-CPU basis, assuming that every PC had a single CPU. Then came the dual cores and quad cores, and with them the argument as to whether a computer having 4 processors could still be considered a single PC; this extended to whether the chip holding the processors should be considered the CPU, or the individual processor cores (an argument whose conclusion is not far-fetched, especially when A+ gurus are around).
Then came virtualization, and a CPU or group of CPUs could be shared by users spread out across a building, and in later times across cities. This leads to situations where a company purchases 100 per-CPU licenses, virtualizes the environment, and ends up sharing 100 CPUs among 1,000 users, technically breaching the license agreement.
Nevertheless, software vendors have introduced per-user licensing, which reduces the problem on their part but increases the need for better planning on the part of the purchasing IT department.
Software users are also seeing a spate of new license options or editions that accept varying levels of use, especially in virtual environments. For example, the Windows Server 2008 Enterprise license allows four virtual Windows Server instances for free, but the four instances must run on the same host. By comparison, Windows Server 2008 Datacenter Edition is licensed per CPU and per host server but is independent of the number of VMs running on the host. It may be considerably more cost-effective to deploy the Datacenter Edition rather than the Enterprise or Standard Editions.
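A back-of-envelope sketch of that comparison. The prices below are hypothetical placeholders, not real list prices; only the four-instances-per-Enterprise-license and per-CPU Datacenter rules come from the text:

```python
# Compare the two licensing modes described above: Enterprise covers
# 4 virtual instances per license (all on one host), while Datacenter
# is licensed per CPU socket regardless of VM count.
# Prices are hypothetical placeholders, not real list prices.
ENTERPRISE_PRICE = 2_500   # per license (covers 4 VMs on one host)
DATACENTER_PRICE = 3_000   # per CPU socket, unlimited VMs on that host

def enterprise_cost(vms_per_host: int, hosts: int) -> int:
    licenses_per_host = -(-vms_per_host // 4)   # ceiling division
    return licenses_per_host * hosts * ENTERPRISE_PRICE

def datacenter_cost(sockets_per_host: int, hosts: int) -> int:
    return sockets_per_host * hosts * DATACENTER_PRICE

# 20 VMs on each of 2 dual-socket hosts:
print(enterprise_cost(20, 2))   # 5 licenses x 2 hosts
print(datacenter_cost(2, 2))    # 2 sockets x 2 hosts
```

With dense virtualization the per-socket edition wins quickly, which is exactly the planning exercise the IT department has to do before buying.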

These licensing concerns and confusions can only be solved if data center admins and software proprietors work together to ensure both that the software developer gets full reward for the use of his intellectual property, and that the company using his solution optimizes its usage so as to maximize efficiency while minimizing cost.

Sunday, August 22, 2010


As an IT specialist, choosing the right server hardware to service your clients, with high availability, fault tolerance, scalability, recovery... and all that good stuff, is one of the hardest decisions you have to make. Most times, for us, it's a matter of trading off between choosing what's good for the company, what allows you the opportunity to research and learn new technologies, and choosing something that wouldn't be too expensive whilst ensuring good ROI (Return On Investment).
Servers in their broadest categories, like houses, differ in shapes and sizes; a few notable ones are:

• Small, floor-standing towers or rack-mounted 1U and 2U servers.
• Medium-sized, floor-standing towers or larger rack-mounted servers.
• Blade centers and blade servers.
• Large floor-standing servers, including mainframes.
• Specialized fault-tolerant, rugged and embedded processing or real-time servers.
• Virtual servers or virtual machines (VMs) running on physical servers.
• Cloud servers (essentially a VM service offering).

Functionally, servers could be further classified as follows:

• Mission critical
  • Business cannot function without it
  • Time sensitive
  • Highly available
  • Low RTO & RPO
  • Must be secure
  • Time is money
  • Downtime is a lost opportunity

• Business essential
  • Some impact to business
  • Good availability
  • Low to medium RTO and RPO
  • Some downtime can be tolerated

• Business important
  • Little impact to business
  • Some delay OK
  • Basic availability
  • Medium RTO/RPO
  • Downtime is tolerated

• Business optional
  • Minimal disruption
  • Delay tolerable
  • Some availability
  • High RTO/RPO

These means of classification still barely scratch the surface of the store of servers available in the IT market. Physical server manufacturers such as Apple Inc., Cisco Systems Inc. (Unified Computing System), Dell Inc., EMC (Vblock), Fujitsu, NEC Corp., Hewlett-Packard Co., IBM, Oracle Corp./Sun, Silicon Graphics International Corp. and SuperMicro Computer Inc. are represented by a mix of direct sales, direct-touch markets and technical support, as well as channel value-added resellers (VARs) and solution providers that bundle their applications with different hardware offerings. So we can see that today's IT infrastructure departments have their jobs cut out for them, whether they are designing a new server infrastructure from scratch or integrating/consolidating a new server into an existing infrastructure.

Three steps are key in making the decision as to which physical server to buy.

Know your requirements: What kind of services are to be hosted on the server? What are the virtualization/cloud needs? What are the storage needs? What are the availability needs to meet service level agreements (SLAs)? What are the existing failover/clustering solutions (if any)? How much physical space is available in the data center? And what are your forecasts for growth requirements?

Consider the server categories and tiers talked about earlier, whether blade servers or plain old rack servers. Think about processors: if you're buying a single-socket, single-core, single-threaded processor, then you know that computer will at best execute one instruction per cycle; a single-socket dual-core gives 2 instructions, and a dual-socket quad-core 8 instructions. Then you start to wrap your head around 32-bit versus 64-bit. Think about memory: main memory or RAM, also known as dynamic RAM (DRAM), is packaged in different ways, a common form being the dual inline memory module (DIMM). DRAM access speed is referred to in terms of older DDR2 (667 MHz) or newer DDR3 (1333 MHz). RAM is the fastest form of memory on a server, second only to internal processor registers and on-chip caches (L1, L2 or local memory). In general, more memory is better; however, the speed of the memory is also very important.
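The sockets-times-cores arithmetic above can be captured in a one-liner. This is a best-case figure of one instruction per core per cycle, deliberately ignoring superscalar execution and hyper-threading, which raise or blur it:

```python
# Best-case instruction throughput per clock: sockets x cores, assuming
# one instruction per core per cycle (ignores superscalar/hyper-threading).
def peak_instructions_per_cycle(sockets: int, cores_per_socket: int) -> int:
    return sockets * cores_per_socket

assert peak_instructions_per_cycle(1, 1) == 1   # single socket, single core
assert peak_instructions_per_cycle(1, 2) == 2   # single socket, dual core
assert peak_instructions_per_cycle(2, 4) == 8   # dual socket, quad core

# Memory speed matters too: DDR3-1333 vs DDR2-667 (MHz, from the text)
print(1333 / 667)   # roughly double the transfer rate
```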

Take a look at what functionality is built into the server or provided on server blades for general-purpose networking, along with the attachment of disk storage. What is there in terms of 10 Gb Ethernet (10 GbE), and how many ports, as well as 3G (3 Gb) or 6G (6 Gb) Serial-Attached SCSI (SAS) for disk storage attachment (internal or external), along with serial, video and USB ports? Also look at expansion capabilities: additional mezzanine cards for blade servers, or PCI-E cards for networking, storage and other peripherals.
PCI-SIG Multi-Root I/O Virtualization (MR-IOV), a relatively new and emerging feature for servers, enables advanced connectivity, including adapter sharing. MR-IOV will enable multiple, physically separate adjacent servers to share a PCI-E adapter card, allowing the virtualization of servers that otherwise could not be consolidated. MR-IOV can also boost scaling capabilities beyond normal physical limits in high-density servers by placing adapter cards in shared external expansion slots.

The software side is simple: install the software that will get the most out of your server hardware (e.g., don't plan to install 32-bit software on your 64-bit machine).
Know your requirements, know your wants, and stay within budget whilst ensuring high ROI. Have fun buying!

Wednesday, August 11, 2010


Ever entered an examination hall ill? You have all the right answers to the questions; you want to elaborate, illustrate, exemplify... but you just don't have the energy. That's analogous to having the right software, software that has passed every available benchmark, but running it on poorly managed hardware. The following tips will keep your hardware happy with your software:

1. Keep a dust-free PC environment. Dust particles that get into the system unit (what non-computer professionals would call the C.P.U.) totally mess up the heating-cooling cycle the system unit was designed for. Dust is also unfriendly to, and clogs up, most of our input/output devices. In addition to keeping the environment dust free, you should vacuum-clean (blow out) your system unit, say, once a year.

2. Computer hardware is brought to life by electrical power, and is therefore rated to function within certain electrical supply limits. Make sure you check the power ratings of purchased hardware before connecting it to AC power, especially when the device was shipped in from another country. This will save you some time, money (more importantly) and, quite frankly, save the environment some oxides of carbon.

3. Install proper device drivers for your hardware. Device drivers are what translate the instructions you issue via your application software or operating system into a language the device can understand. Using an improper device driver is like visiting China to deliver a speech and taking an Italian-speaking translator along to help you translate your speech for your Chinese audience.

4. Keep a close eye on your hardware vendor for releases of firmware and BIOS updates. Firmware updates are to hardware what service packs are to the operating system.

5. Be vigilant. Notice changes in the behaviour of your hardware, such as humming noises, over-heating, or changes in tone, brightness or contrast (in the case of display hardware). Alert a proper engineer when these are noticed, especially when they are accompanied by a loss of performance or quality of computer output.

When all is said and done, your computer hardware is still driven by software. Follow the tips in my 12th of July article, combine those with these 5, and your computer usage will be stress-free, not to mention predictable.

Monday, July 12, 2010


I usually tell my friends that "computers don't just crash". More than likely, if anything goes wrong, trust me, IT'S YOU!! Either directly or in some remote way. I got two calls this week to fix one Dell Inspiron desktop (a very old model!!) and one LG laptop (can't remember the model). What they had in common was that their users were doing things they shouldn't be doing. So I decided to let you guys know 10 things I think will improve your general computer usage experience. Personally, I believe software is always on top of hardware, so I'm starting with software management tips. Follow me:

1. Remember that the most important software on your computer is the operating system.

2. Install a genuine Windows (or other provider's) operating system.

3. As a benefit of (2) above, make sure you install any operating system patches, in the form of service packs or fixes.

4. Keep a personal inventory of every application you install/uninstall. (The operating system itself keeps track of this information in Event Viewer (Start > Control Panel > Administrative Tools > Event Viewer); it's only fair that you keep yours.)

5. Make sure you only install software from TRUSTED sources.

6. Always check the company website of important software that you use for software updates. (This is essential because your software provider works day and night to find and close up loopholes in their software; the closing up comes in the form of updates or newer enhancements.)

7. Install a good antivirus (I won't go into the argument of which is the best!).

8. BE PATIENT. When users click a button and it takes more time than usual to respond, I notice that users respond by clicking more and more. I want to let you know that the OS now has to respond to the first click AND your zillion subsequent clicks.

9. If you notice any unusual computer behaviour just after you install a piece of software, do a system restore (Start > All Programs > Accessories > System Tools > System Restore); this restores your computer to its state before certain installations. Simply uninstalling the software might not fully remove all its traces (to make sure this works well you can use the Windows clean-up tool; download this at ...

10. When all is said and done, the software is meant to run over hardware. Make sure your installed hardware meets the prerequisites for any software you are installing, not to mention that your software should be interoperable (see 9 above).


Keep pressing F8

Saturday, July 3, 2010

Virtualization: the Ace Technology

I know you guys were expecting the sequel to my June 24 post about cloud computing. So as not to be guilty of putting the cart before the horse, let's first look at a veritable tool for driving efficient cloud computing: SERVER VIRTUALIZATION.

Today’s datacenter is a complex ecosystem with different kinds of servers,operating systems, and applications interacting with a wide variety of desktop computers and mobile client computers. For IT departments, managing and supporting this assortment of mission-critical technologies is a challenge.

Deploying server virtualization technology (moving disparate servers to virtual machines in a centrally managed environment) is an increasingly popular option for facing this challenge.

Virtualization reduces IT costs, increases hardware utilization, optimizes business and network infrastructure, and improves server availability.
Windows Server® 2008 includes Hyper-V (formerly code-named Viridian), a powerful virtualization technology that enables businesses to take advantage of the benefits of virtualization without having to buy third-party software.

The most widely leveraged benefit of virtualization technology is server consolidation, enabling one server to take on the workloads of multiple servers. For example, by consolidating a branch office’s print server, FAX server, Exchange server, and Web server on a single Windows Server, businesses reduce the costs of hardware, maintenance, and staffing.

Microsoft’s virtualization strategy includes five key components:

• Server virtualization, enabling multiple servers to run on the same physical server

• Presentation virtualization, enabling remote users to access their office desktops or server-based applications

• Desktop virtualization, enabling desktop operating systems to be consolidated into the datacenter

• Application virtualization, helping to prevent conflicts between applications on the same PC

• Comprehensive management, tying virtual components into the same management tools used to monitor and control physical components

Server Virtualization

Microsoft has two server virtualization offerings: Hyper-V in Windows Server 2008, and Virtual Server 2005 R2. Hyper-V extends virtualization capability to manage 32-bit virtual machines alongside 64-bit virtual machines, lets VMs access larger amounts of memory, and lets VMs leverage multiple processors. Virtualization is a key feature of the operating system; it gives customers complete isolation of the different virtual machines while still delivering the benefits of server consolidation.

Desktop Virtualization

When server virtualization is used to host client OSes for remote access, the approach is often called desktop virtualization. While the principles of desktop virtualization are similar to server virtualization, this approach can be useful in a variety of situations. One of the most common is dealing with incompatibility between applications and desktop operating systems. For example, suppose a user running Windows Vista needs to use an application that runs only on Windows XP with Service Pack 2. By creating a VM that runs the older operating system, then installing the application in that VM, the problem can be solved. Microsoft Virtual PC is an example of a solution in this space, hosting VMs in a desktop environment for application compatibility.

Application Virtualization

Application virtualization helps isolate the application's running environment from the operating system's install requirements by creating application-specific copies of all shared resources, which reduces application-to-application incompatibility and testing needs. With Microsoft SoftGrid, desktop and network users can also reduce application installation time and eliminate potential conflicts between applications by giving each application a virtual environment that is not quite as extensive as an entire virtual machine. By providing an abstracted view of key parts of the system, application virtualization reduces the time and expense required to deploy and update applications.

Presentation Virtualization

Presentation virtualization is a technology that enables applications to execute on a remote server yet display their user interfaces locally. Microsoft's presentation virtualization technology, Microsoft Terminal Services, enables remote users to connect to their office desktops from anywhere in the world, taking full advantage of applications, resources, and familiar interfaces even from computers with different operating systems or system capabilities. Administrators can access system management tools from remote locations, for example, or applications can be run on a server and accessed by remote users.
Presentation virtualization enables customers to centralize and secure data, reduce cost of managing applications, reduce test costs for compatibility between the OS and applications, and potentially improve the performance of systems overall.

Comprehensive Management in a Familiar Environment

Virtualization technologies provide a range of benefits. Yet as an organization's computing environment gets more virtualized, it also gets more abstract. Increasing abstraction can increase complexity, making it harder for IT staff to control their world. The corollary is clear: if a virtualized world isn't managed well, its benefits can be elusive.

To a large degree, the specifics of managing a virtualized world are the same as those of managing a physical world, and so the same tools can be used. To this end, Windows Server virtualization and the Microsoft System Center family of products include many management features designed to make managing virtual machines simple and familiar while enabling easy access to powerful VM-specific management functions.
Culled from: Hyper-V Product Overview, Microsoft Corporation, October 2008.

Thursday, June 24, 2010

Windows Azure and the "Cloud Computing" story(Part1)

I first heard the phrase "cloud computing" on one of my visits to my friend and mentor, Yinka. (I visit him from time to time and we discuss information technology generally; we also argue over certain issues. Yinka works with a top-notch IT solutions provider in Nigeria.) At that time, even he didn't have a clear-cut idea of what "cloud computing" was all about.

If you've had your IT ear to the ground in the last two years, you will have heard the phrase "cloud computing" used alongside SaaS (Software-as-a-Service), PaaS (Platform-as-a-Service), IaaS (Infrastructure-as-a-Service), and more recently, Windows Azure.
The concept of cloud computing is this: consider a company XYZ (irrespective of size). XYZ can have its applications, databases, and security/certificate servers located OUTSIDE XYZ's complex or group of complexes.

Let me paint a clearer picture: if a user in XYZ wants to use a spreadsheet or word-processing application, he can simply connect to XYZ's "cloud", use the application, and save his changes back to the "cloud". The cloud here represents an internet-facing datacenter managed by IT solutions providers that specialise in rendering such a service.
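To make that round trip concrete, here is a toy sketch of the save-to-the-cloud idea. The class and method names are invented for illustration; a real provider would expose this over HTTP with authentication, not an in-memory object:

```python
class ToyCloud:
    """In-memory stand-in for a provider's internet-facing datacenter."""

    def __init__(self):
        self._store = {}  # document name -> contents

    def save(self, name, contents):
        """'Save changes back to the cloud.'"""
        self._store[name] = contents

    def load(self, name):
        """Open the document again, from any machine in XYZ."""
        return self._store[name]

cloud = ToyCloud()
cloud.save("budget.csv", "Q1,Q2\n100,200")
# Later, from another desk in XYZ:
assert cloud.load("budget.csv") == "Q1,Q2\n100,200"
```

The point of the sketch is only the shape of the interaction: the user's machine holds nothing, and the document lives wherever the provider's datacenter is.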

As for the origin of the term “cloud computing”, there are a few possibilities…

In May 1997, NetCentric tried to trademark "cloud computing" but abandoned the application in April 1999 (patent serial number 75291765).

In April 2001, the New York Times ran an article by John Markoff about Dave Winer’s negative reaction to Microsoft’s then new .Net services platform called Hailstorm (if you want a laugh sometime, ask a Microsoft Azure person about Hailstorm). It used the phrase “‘cloud’ of computers”.

In August 2006, Eric Schmidt of Google described their approach to SaaS as cloud computing at a search engine conference. I think this was the first high-profile usage of the term, where not just "cloud" but "cloud computing" was used to refer to SaaS, and since it was in the context of Google, the term picked up the PaaS/IaaS connotations associated with the Google way of managing datacenters and infrastructure.

Much like "Web 2.0", cloud computing was a collection of related concepts that people recognized but didn't really have a good descriptor for; a definition in search of a term, you could say. When Schmidt of Google used it in 2006 to describe their own stuff, and Amazon then included the word "cloud" in EC2 when it launched a few weeks later (August 24), the term became mainstream. People couldn't define it exactly, but they roughly knew it meant SaaS apps and infrastructure like Google was doing, and S3/EC2 services like Amazon was offering.

In my next post we'll explore the relationships between cloud computing, SaaS, PaaS, IaaS, and Windows Azure. Watch out for WINDOWS AZURE and the CLOUD COMPUTING STORY (PART 2).

Tuesday, June 22, 2010

LINUX PUPPY is really a puppy!!

Ever heard of the product Linux Puppy? It's really a lifesaver. I've had friends, relatives, and co-workers get into the "hands-tied" dilemma. Let me describe it: your system refuses to boot; occasionally it boots up completely but seems to freeze; you try a System Restore (like my friend Kay always would) and still get the same thing; you can't get hold of your operating system CD/DVD, or maybe you have your OS CD/DVD but a repair (in the case of Windows XP) doesn't solve the problem; you've had it up to "here" with trying, and you just want a way to get your important files out and do a clean format and install. That's the "hands-tied" dilemma. LINUX PUPPY to the rescue..
Now here's the procedure for getting and using Linux Puppy:

A.) Please take the opportunity now to bring your back up/archive/copy of all your important stuff 100% up to date, check that it's accurate, reproducible and held securely on removable media (not your hard drive)......

The idea here is to ensure that all your important data, files, spreadsheets, work, music, emails, log-in details, user names, address book, drivers, letters, invoices, videos, in fact everything that is important to you (stuff that you would not like to permanently lose), is kept safe on external media. I recommend using a USB external hard drive for this.

Whilst you are doing this, collect together all your application installation discs (or downloaded installation executables), serial numbers, and product licence keys (including the one for Windows itself), plus the Microsoft Windows installation disc and/or the manufacturer's restore and driver/utilities discs.

If you have made a "disc image" using Paragon, Acronis, Ghost, or one of the free disc-imaging alternatives, then keep this image safe on removable media too, verify it for correctness and reproducibility, and ensure you have made a bootable CD/DVD for use in the event that Windows will not start.

Now here are the detailed instructions on how to download Puppy and burn it as an image.

B.) Download Linux Puppy from here:

It's a download of only about 100 MB, so a broadband connection is advisable. Puppy is an alternative operating system: it does not need Windows to run, and the licence to use it is free of charge and included automatically. Its interface is different from Windows, but sufficiently intuitive for most folks to use successfully without problems.

C.) Burn the ISO (that you have downloaded) to CD-ROM as an "image". It must be burned as an image; a straight file copy will not work! ImgBurn is good for this, available from here:

NOTE: by following this procedure you will have created a "self-contained" operating system (Linux Puppy) and a bootable CD-ROM all in one!

D.) Switch on your "original" computer and set the BIOS to boot from CD as first priority. On most Dells it's F2 to enter the BIOS (set-up); otherwise try F12, Esc, or Delete. On some older Compaqs and HPs try F1; for Acers try F1, F2, or Ctrl+Alt+Esc (the first screen on boot-up usually lists the keys to press to enter "Set Up", i.e. the BIOS).

NOTE: you will need to press the appropriate key (to enter the BIOS) immediately after power-on.

Your hard drive should be next in priority, followed by USB/network if the settings provide these options. Remember to save your altered settings in the BIOS before exiting.

E.) With your external backup media (i.e. external USB hard drive, flash drive, etc.) plugged into a USB port and the Linux Puppy CD-ROM inserted into the CD drive, shut down your machine and:

F.) Boot from the CD. You may get a message to "press any key to continue booting from CD", so allow this. Now follow the on-screen Linux Puppy prompts (remember, don't install it); let it run in memory. It will be quite happy doing so, and will allow you access to your hard drive so you can copy off to your external media anything you need. You will soon get used to Puppy, and you'll find you are able to use the Internet and do most things that you want to do. However, as you need to back up your files, I suggest you concentrate on copying off your important stuff first. If you are not experienced with Linux, then an easy way to copy files is to open windows for both the C drive (from which you want to copy stuff) and your flash/USB drive, then just use drag-and-drop or copy-and-paste. Puppy provides full support for USB drives and you should soon find you get the hang of things.

Whew!! Now that was easy, wasn't it? You are now ready for a clean install; after installation, copy your files back into place. Good luck with PUPPY.

Tuesday, June 15, 2010

Please press F8

I'm sure you guys are wondering what on earth I mean by the title of my first post, "please press F8" (except for some Windows tech people). Anyway, it's so named to welcome you to my blog and all that I stand for. For those of you who don't know, F8 is a Windows operating system hotkey that you press during boot-up to jumpstart an OS troubleshooting procedure.

There you go: this blog is set up to talk about issues in operating system technologies, server technologies, standards, new stuff, job placements, and advertisements. (At the risk of having put the cart before the horse,) you are free to submit your computer-related issues/problems here, and we (I mean me and great minds in information technology from around the world) will be all too glad to help.

Occasionally, I will post some interesting stuff about my day, week, or month, but never too far from the subject of information technology.

So before you call your consultant, engineer, ... remember to press F8.