Moving To a New Site

I have just decided that it's time to move on and have my own domain. All the posts in this blog will be moved to:

Personal posts will be transferred to:

Monday, December 19, 2005

Turn the look of your Windows XP into a Mac

WinOSX 2

WinOSX is an installer which can customize your whole system very easily: windows, controls, Finder, Dock, shadows, and many other features. If you don't like some features, or the complete pack, you can safely uninstall individual components or the whole program.

So, if you are a Mac user who recently switched to Windows, or if you just think that Macs look far better than Windows, WinOSX is for you! It easily changes the Windows XP interface into a Mac OS X one!

Download the WinOSX 2 here.


FlyakiteOSX is a transformation pack. It will transform the look of an ordinary Windows XP+ system to resemble the look of Mac OS X. The installer simply automates the process of replacing critical system files, setting registry tweaks, and installing extras such as cursors, sounds, visual styles, etc. FlyakiteOSX DOES NOT contain any spyware or ad-ware of any kind. All files needed for FlyakiteOSX are stored in the Windows directory in a folder named 'FlyakiteOSX' that is hidden by default. All registry values for FlyakiteOSX are written to HKEY_LOCAL_MACHINE\Software\FlyakiteOSX.

Download FlyakiteOSX here.

Monday, December 12, 2005

MS Windows Vista is 20 Years Behind!

Just recently, Microsoft has been bragging about a new feature that will be integrated into Windows Vista called the Restart Manager. The guys at Redmond think this is a significant addition to the new version of Windows... a big deal.

The Restart Manager is designed to update the operating system without having to do a manual or forced reboot of the machine. This feature is especially important for servers with a lot of connected clients. With it, network administrators would not have to worry about telling the client computers to save their work and log off the network because the damn server needs to reboot.

Desktop-wise, a user may continue working on the machine while an automatic update or patch is applied. After an update, the screen will go blank and come back on with all your work still there where you left it.

So, the guys at Redmond really are right in saying that this is an important feature of Windows Vista... but as far as I know, this capability has been in Linux/Unix since the mid-'80s. A feature enjoyed by Linux/Unix users for almost 20 years! And it is only now that Microsoft has come to think that it is important!

I have an Ubuntu and a SUSE machine running every day, and I have made a lot of updates without a single interrupting reboot. I love how this feature works on Linux machines. Will Microsoft catch up to Linux/Unix and make this feature a seamless part of Vista? A technology Linux/Unix has refined for almost 20 years... it's about time Microsoft woke up.

Sunday, December 11, 2005

Make Firefox run up to 30 times faster

1. Type "about:config" into the address bar and hit return. Scroll
down and look for the following entries:


Normally the browser will make one request to a web page at a time.
When you enable pipelining it will make several at once, which really
speeds up page loading.

2. Alter the entries as follows:

Set "network.http.pipelining" to "true"

Set "network.http.proxy.pipelining" to "true"

Set "network.http.pipelining.maxrequests" to some number like 30 (mine is set to 100..hehehe). This means it will make 30 requests at once.

3. Lastly, right-click anywhere and select New > Integer.
Name it "nglayout.initialpaint.delay" and set its value to "0".
This value is the amount of time the browser waits before it acts on information it receives.

If you're using a broadband connection, you'll load pages 2-30 times faster now.
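If you would rather not click through about:config on every machine, the same preferences can be written to a user.js file in your Firefox profile directory (a standard Firefox mechanism; the values below are simply the ones from the steps above):

```js
// user.js — place this file in your Firefox profile directory.
// Each line mirrors one of the about:config tweaks described above.
user_pref("network.http.pipelining", true);
user_pref("network.http.proxy.pipelining", true);
user_pref("network.http.pipelining.maxrequests", 30);
user_pref("nglayout.initialpaint.delay", 0);
```

Firefox reads user.js at startup and copies the values into prefs.js, so the tweaks survive upgrades and can be carried to other machines by copying one file.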

Originally posted at: Halomods

Saturday, December 10, 2005

Ravi's Top 10 Linux Ubuntu Sites

I'm a SUSE Linux user, but I also use Ubuntu a lot on my other machines... you may ask why I haven't migrated completely to SUSE? Well, Ubuntu is such a great distribution, and I also recommend this distro to friends. I like SUSE because of its almost perfect integration with KDE, but when it comes to GNOME, Ubuntu has done a very good job and is very easy to use and stable. I've been visiting several help sites for Ubuntu, and below is a list originally collected at the All About Linux Blog. I totally agree with Ravi that these are great Ubuntu help sites...

1) - This is the official site of Ubuntu Linux. All Ubuntu enthusiasts should make this the starting point of their journey towards embracing Ubuntu. On this site, you can get the latest official news related to this project, place orders for your free Ubuntu CDs, and search or browse for a particular package for your version of Ubuntu, among other things.

2) - This is the official documentation site for Ubuntu Linux, developed and maintained by the Ubuntu documentation project. This is the first place any Ubuntu user should go to get their problems solved. While you are here, do not forget to visit the FAQ section on this site.

3) - This is a part of the official Ubuntu project and, as you can see, is a wiki. A wiki can be edited by anybody; a good example is the Wikipedia project. On the Ubuntu wiki, you can find a wealth of information about configuring this distribution. First-time visitors may be interested in checking the Help Contents page. An especially interesting section is Restricted Formats, which gives tips on getting support for proprietary file formats in Ubuntu.

4) - This is a high-traffic web forum where you can post your questions and get your doubts clarified. You need to register first before posting in this forum, but just searching it will turn up a wealth of information, i.e., the issues that others have faced and the solutions to those issues.

5) - This guide is not affiliated with the official Ubuntu project but claims to be an unofficial FAQ where you can find solutions to common problems in Ubuntu. The site takes a How-To approach to its answers. Recently it has become a bit outdated, as I ran into problems while configuring Ubuntu Breezy according to its directions. Nevertheless, a very good site.

6) - This is a site which hosts the documentation for all flavours of Ubuntu. The site is maintained by the guys themselves and contains information in a more structured format. One page which might interest Ubuntu users while on this site is the Hardware Compatibility Guide.

7) Ubuntu Blog - This is a very good blog maintained by an Ubuntu enthusiast and, as the name of the blog indicates, it caters exclusively to all things related to the Ubuntu distribution. Here you can get the latest news, links to popular sites, and the blog author's experiences in getting things done in Ubuntu Linux.

8) - This site claims to be an information hub for the Ubuntu community, bringing together news, grassroots marketing, advocacy, team collaboration, and great original content. This is a site you can add to your watch list if you are interested in the happenings in the Ubuntu world.

Automate all the Ubuntu housekeeping tasks
The next two sites list scripts which can be downloaded and used on your system to get support for a lot of proprietary features in Ubuntu without any (or very little) user intervention.

9) Automatix - This is a script which can be used to get mp3, wmv, QuickTime, encrypted DVD support and more on Ubuntu, and all this, as the name indicates, rather automatically. All the user has to do is run the script. Unfortunately, this project is no longer supported, but users can still download the script and use it.

10) Easy Breezy - On this site you can get another script which helps automate the tasks of getting proprietary file support in Ubuntu Breezy v5.10. The author claims it to be a safer alternative to another well-known project.

Wednesday, December 07, 2005

How to Share Internet Connection in Ubuntu and Debian

Note: Type all the following commands in a root terminal, DO NOT use sudo.

1. Start by configuring the network card that interfaces to the other computers on your network:

# ifconfig ethX <ip-address>

where ethX is the network card and <ip-address> is your desired server IP address (a static private address is usually used)

2. Then configure the NAT as follows:

# iptables -t nat -A POSTROUTING -o ethX -j MASQUERADE

where ethX is the network card that the Internet is coming from

# echo 1 > /proc/sys/net/ipv4/ip_forward

3. Install dnsmasq and ipmasq using apt-get:

# apt-get install dnsmasq ipmasq

4. Restart dnsmasq:

# /etc/init.d/dnsmasq restart

5. Reconfigure ipmasq to start after networking has been started:

# dpkg-reconfigure ipmasq

6. Repeat steps 1 and 2.

7. Reboot. (Optional)
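The manual steps above (1 and 2, which you repeat in step 6) can be collected into a single script. This is only a sketch: the interface names eth0/eth1 and the address are assumptions you must adjust to match your own network, and for safety it only prints the commands by default; run it as root with APPLY=1 to actually execute them.

```shell
#!/bin/sh
# Sketch of the connection-sharing steps above as one script.
# LAN_IF, WAN_IF, and LAN_IP are assumptions — edit for your network.
LAN_IF=eth1          # card connected to the other computers
WAN_IF=eth0          # card the Internet is coming from
LAN_IP=  # hypothetical private server address

run() {
    # Dry run by default; set APPLY=1 (as root) to execute for real.
    if [ "${APPLY:-0}" -eq 1 ]; then "$@"; else echo "would run: $*"; fi
}

# Step 1: give the LAN-facing card its static address
run ifconfig "$LAN_IF" "$LAN_IP"
# Step 2: masquerade outgoing traffic and enable kernel forwarding
run iptables -t nat -A POSTROUTING -o "$WAN_IF" -j MASQUERADE
run sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
```

Running it without APPLY=1 simply lists the three commands, which is a handy way to double-check the interface names before touching a live gateway.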

Tuesday, December 06, 2005

Scary truth about Google: And then there were four

Google is one of about four search engines that matter. There are many more than four engines, but only about four have the technology to crawl much of the web on a regular basis. As of July 2003, Yahoo owned Overture, Alltheweb, AltaVista, and Inktomi, and finally dumped Google in February 2004. Everything needed to turn Yahoo into a major search engine was now under Yahoo's roof.

It is still possible that Yahoo will shoot themselves in the foot with all of this firepower -- their desire to monetize everything appears to be high on their agenda. But so far, after only a year, Yahoo has shown that their main index search results are on a par with Google's. This is true despite the fact that Yahoo has infiltrated some pay-per-click links into the main index. One reason for Yahoo's success is that Google's main index, though free from paid results, has declined considerably since early 2003. Amazingly, there is on average only a 20 percent overlap between Yahoo's first 100 results and Google's first 100 results for the same search -- and still, Yahoo is just as good as Google. These days there is so little room at the top of the search results heap, that any combination of algorithms will produce acceptable results. The main difference now is in the depth of the crawl.

Microsoft recently developed their own engine because they found themselves squeezed between the advertising engine of Overture and the search engine Inktomi -- both of which became Yahoo property. In 2003 Microsoft began experimenting with their own crawler. Their new engine was launched in early 2005. If Microsoft puts their greed on a back burner for a few years, by doing deep crawls and presenting a clean interface, they could do to Google what they did to Netscape. There is no "secret sauce" at Google -- we now believe it was all hype from the very beginning. (To the extent that there ever was a secret sauce, the recipe is now known by countless ecommerce spammers, which makes it a liability rather than an asset.) Thousands of engineers in hundreds of companies know how to design search engines. The only real questions are whether you can commit the resources for a deep, consistent crawl of the web, and how aggressively you want to use your search engine to make money.

That gives us Google, Yahoo, and Microsoft. The last one worth watching is Teoma/AskJeeves. Their search technology is good, and they seem serious about expanding their crawl. It remains to be seen how deeply and consistently they will be able to crawl websites with thousands of pages.

Google is easily top dog. They provide about 75 percent of the external referrals for most websites. There is no point in putting up a website apart from Google. It's do or die with Google. If we're all very lucky, one of the other three will soon offer some serious competition. If we're not lucky, we will be uploading our websites to Google's servers by then, much like the bloggers do at (which was bought by Google in 2003). It would mean the end of the web as we know it.

It is worthwhile to understand the pressures that the average, independent webmaster is under. And given that Google is so dominant, it's important to understand the pressures that are being brought to bear on Google, Inc. It does not take too much imagination to recognize that there's a struggle going on for the soul of the web, and the focal point of this struggle is Google itself.

At one level, it's a struggle for advertising revenue. The pundits look at only this level, and are unanimous that the only advertising model on the web with any sort of future is one where little ads appear after being triggered by keyword searches, or by the non-ad content of a web page. For example, a search for Google Watch may show some ads on the right side of the screen for wrist watches. While the technique doesn't work for this example, often it serves its purpose. There is only so much pixeled real estate that the average user can be expected to survey for a given search. Today up to half of each screen is dedicated to paid ads on Google, as compared to the ad-free original Google. Everyone wants a piece of this new wave in web advertising, and Google is making a lot of money.

Unfortunately, early evidence suggests that Yahoo is less interested in pure search algorithms than in acquiring market share in a pay-for-placement and/or pay-for-inclusion revenue stream. The same may be true for Microsoft. Even Google, dazzled by the sudden income from advertising, must be wondering why they go to all that trouble and expense to crawl the noncommercial sector. Those public-sector sites, such as the org, edu and gov domains, do not provide direct income, even though the web would be unattractive without them. All the excitement over a revived online ad market, pushed by pundits hoping for another dot-com gold rush, is beginning to look like the days when AltaVista decided that portals were the Next Big Thing. That notion caused AltaVista to lose interest in improving their crawling and searching -- which is how Google succeeded in the first place.

There has been almost no interest in establishing search engines that specialize in public-sector websites. Where is the Library of Congress? Where are the millions of dollars doled out by the Ford Foundation? How about the United Nations? Why can't some enlightened European entity pick up the slack? Everyone is asleep, while the Internet is getting spammed to death.

At another level, it's a struggle over who will have the predominant influence over the massive amounts of user data that Google collects. In the past, discussions about privacy issues and the web have been about consumer protection. That continues to be of interest, but since 9/11 there is a new threat to privacy -- the federal government. Google has not shown any inclination to declare for the rights of its users across the globe, as opposed to the rights of the spies in Washington who would love to have access to Google's user data.

Much of the struggle at this new level is unarticulated. For one thing, the spies in Washington don't talk about it. Congress has given them new powers, without debating the issues. Google, Inc. itself never comments about things that matter. The struggle recognized by Google Watch has to do with the clash of real forces, but right now all we can say is that potentially this struggle could manifest itself in Google's boardroom.

The privacy struggle, which includes both the old issue of consumer protection and this new issue of government surveillance, means that the question of how Google treats the data it collects from users becomes critical. Given that Google is so central to the web, whatever attitude it takes toward privacy has massive implications for the rest of the web in general, and for other search engines in particular.

Call it class warfare, if you like. Because that brings up the other major gripe that Google Watch has with Google. That's the PageRank problem -- the fact that Google's primary ranking algorithm has less to do with the quality of web pages than with the "power popularity" of web pages. Their approach to ranking is anti-democratic, in that already-powerful pages are mathematically granted extra power to anoint other pages as powerful.

It's not that we believe Google is evil. What we believe is that Google, Inc. is at a fork in the road, and they have some big decisions to make. This Google Watch site is trying to articulate and publicize the situation at Google, and encourage more scrutiny of their operations. By doing this, we hope to play a small part in maintaining the web as an information tool that is more useful for the masses, than it is for the elites.

That's why we and over 500 others nominated Google for a Big Brother award in 2003. The nine points we raised in connection with this nomination necessarily focused on privacy issues:

1. Google's immortal cookie:
Google was the first search engine to use a cookie that expires in 2038. This was at a time when federal websites were prohibited from using persistent cookies altogether. Now it's years later, and immortal cookies are commonplace among search engines; Google set the standard because no one bothered to challenge them. This cookie places a unique ID number on your hard disk. Anytime you land on a Google page, you get a Google cookie if you don't already have one. If you have one, they read and record your unique ID number.

2. Google records everything they can:
For all searches they record the cookie ID, your Internet IP address, the time and date, your search terms, and your browser configuration. Increasingly, Google is customizing results based on your IP number. This is referred to in the industry as "IP delivery based on geolocation."

3. Google retains all data indefinitely:
Google has no data retention policies. There is evidence that they are able to easily access all the user information they collect and save.

4. Google won't say why they need this data:
Inquiries to Google about their privacy policies are ignored. When the New York Times (2002-11-28) asked Sergey Brin about whether Google ever gets subpoenaed for this information, he had no comment.

5. Google hires spooks:
Matt Cutts, a key Google engineer, used to work for the National Security Agency. Google wants to hire more people with security clearances, so that they can peddle their corporate assets to the spooks in Washington.

6. Google's toolbar is spyware:
With the advanced features enabled, Google's free toolbar for Explorer phones home with every page you surf, and yes, it reads your cookie too. Their privacy policy confesses this, but that's only because Alexa lost a class-action lawsuit when their toolbar did the same thing, and their privacy policy failed to explain this. Worse yet, Google's toolbar updates to new versions quietly, and without asking. This means that if you have the toolbar installed, Google essentially has complete access to your hard disk every time you connect to Google (which is many times a day). Most software vendors, and even Microsoft, ask if you'd like an updated version. But not Google. Any software that updates automatically presents a massive security risk.

7. Google's cache copy is illegal:
Judging from Ninth Circuit precedent on the application of U.S. copyright laws to the Internet, Google's cache copy appears to be illegal. The only way a webmaster can avoid having his site cached on Google is to put a "noarchive" meta tag in the header of every page on his site. Surfers like the cache, but webmasters don't. Many webmasters have deleted questionable material from their sites, only to discover later that the problem pages live merrily on in Google's cache. The cache copy should be "opt-in" for webmasters, not "opt-out."
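For webmasters who want to opt out today, the tag in question is a one-liner. This is a sketch of the standard robots meta syntax; the second variant targets Google's crawler specifically:

```html
<!-- Place inside the <head> of every page you do not want cached -->
<meta name="robots" content="noarchive">

<!-- Or address only Google's crawler -->
<meta name="googlebot" content="noarchive">
```

The page is still indexed and listed in results; "noarchive" only removes the "Cached" link.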

8. Google is not your friend:
By now Google enjoys a 75 percent monopoly for all external referrals to most websites. Webmasters cannot avoid seeking Google's approval these days, assuming they want to increase traffic to their site. If they try to take advantage of some of the known weaknesses in Google's semi-secret algorithms, they may find themselves penalized by Google, and their traffic disappears. There are no detailed, published standards issued by Google, and there is no appeal process for penalized sites. Google is completely unaccountable. Most of the time Google doesn't even answer email from webmasters.

9. Google is a privacy time bomb:
With 200 million searches per day, most from outside the U.S., Google amounts to a privacy disaster waiting to happen. Those newly-commissioned data-mining bureaucrats in Washington can only dream about the sort of slick efficiency that Google has already achieved.

Originally posted at:

Thursday, December 01, 2005

Share Windows XP Printer to Linux Machines

All the machines in my network are already running Linux; Ubuntu 5.10 and OpenSUSE 10.0 are my distros of choice. Only one machine is running Windows XP, because of some custom Visual Basic programs, and this Windows machine also serves as our print server. In this mini-How-To, I'll discuss how to share a printer in Windows XP with Linux machines, particularly Ubuntu 5.10 and OpenSUSE 10.0. We will use the graphical interfaces of Ubuntu (GNOME) and SUSE (KDE) to set up the printer... no command lines... promise... =)

The Windows XP Machine (The Print Server)

  1. Install your printer.
  2. Open your Control Panel, then open your "Add or Remove Programs"
  3. Click on "Add/Remove Windows Components" located on the left side of the dialog box
  4. Put a check on "Other Network File and Print Services", then click "Details" and make sure that "Print Services for Unix" is selected. (You may need your XP CD and a reboot.)
  5. Open "Printers and Faxes", right-click on your printer and choose "Sharing", then give your printer a share name that is short and has no special characters or spaces (e.g. "HP940c", quotation marks not included). This share name will later be used as the print queue on your Linux machines.
  6. The Windows Firewall may block the Linux machines from printing, so turn off your firewall for the meantime... we will turn it back on later.
  7. Make sure that your Windows machine's IP address is static.
The Ubuntu 5.10 Machine (Print Client)
  1. From the panel (default is on top) open System>Administration>Printing
  2. Click on "New Printer"
  3. Select "Network Printer" then use "Unix Printing LPD"
  4. In the "Host" field, type the IP address of your Windows print server (the static address you set earlier).
  5. In the "Queue" field, type the share name of the printer; in our example we used "HP940c" (quotation marks not included).
  6. Click "Forward" then select the manufacturer and model of your printer from the list. Then click "Apply".
  7. Your new printer will now appear in the dialog box. Print a test page to make sure that the paper settings and print mode are correct.
  8. That's all... =)
The SUSE 10.0 Machine (Print Client)
  1. Open the "YaST Control Center" (you need root access)
  2. Select "Hardware" from the choices on the left
  3. Under Hardware, click "Printer"
  4. Under Printer, click "Add". If it asks you about a new queue, just click "No" (this happens if you have an existing installed printer).
  5. Under Printer Type, select "Print Directly to a Network Printer", then click Next.
  6. On the next dialog, choose "Remote LPD Queue", then click Next.
  7. In the "Hostname of Print Server" field, type the IP address of your Windows print server (the static address you set earlier).
  8. In the "Remote Queue Name" field, type the share name of the printer; in our example we used "HP940c" (quotation marks not included). Then click Next.
  9. For "Queue Name and Spooler Settings", enter the queue name (the printer's share name) under "Name for Printing"... you may leave the other fields blank. Then click Next.
  10. Now choose the proper Manufacturer and Model of your printer. click next.
  11. Accept all the changes, then you may try to print a test page..
  12. That's all... =)
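For the record (and breaking the no-command-lines promise for a moment), the same LPD queue can be added from a terminal on any CUPS-based distro. This is only a sketch: "" stands in for your Windows machine's static IP, "HP940c" is the example share name from above, and "-m raw" creates a raw pass-through queue; you could instead supply a driver PPD, as the GUI steps do.

```
# lpadmin -p HP940c -E -v lpd:// -m raw
# lpr -P HP940c /etc/hostname
```

The first command creates and enables the queue; the second sends a small file as a quick test print.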

Back to the Firewall

At this point, you should have successfully installed your printer and printed a test page. Now we will turn the firewall of your Windows XP machine back on (this is a necessity because Windows has very poor security).

  1. Turn your personal firewall back on in Windows XP.
  2. Now print a test page from your Linux machine... the printing will fail because it is blocked by the firewall, but that's OK... we will fix that.
  3. Locate your firewall log file by opening the Firewall Settings; under the Advanced tab, click the Settings button for Security Logging, and there you will find the location and filename of your firewall log file.
  4. Open the firewall log file in Notepad and look for the port number on which the printing was blocked... this port number is between 500 and 700.
  5. After determining the port number, go back to the Firewall Settings and, under the Exceptions tab, click "Add Port".
  6. Now just enter the port number which was previously blocked by the firewall, select "TCP", and give it a nice name like "Network Printing"... and you're all set.
  7. Try to print a test page again; you should have no more trouble printing.
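If you would rather skip the log-file hunt: LPD printing traditionally uses TCP port 515, so on Windows XP SP2 you can open the exception directly from a command prompt. This is a sketch; the rule name is arbitrary, and you should still check your firewall log if printing remains blocked.

```
C:\> netsh firewall add portopening TCP 515 "Network Printing"
```

This does the same thing as the "Add Port" dialog in the Exceptions tab, just without the clicking.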
Enjoy... =)

Copyright Notice:

Copyright (C) 2005 Gerald Cortez
Verbatim copying and distribution of this article is permitted in any medium, provided this copyright notice is preserved.