Welcome to my home on the Internet. This Website has been repurposed from an SEO service-offering Website into a simple blog-type format where I plan to share some industry and personal insights, when time permits.

My interests are generally: business, search engines, technology, investing / stock trading, and more recently, travel. However, expect a healthy dose of randomness.

If you have an attractive opportunity or otherwise need to reach me, please see the contact page. And don't forget to connect with me on social networks.

Regards,
Darrin J. Ward

A Tool for Finding Co-Citation Links To List of Sites

March 20 2014, 4:02pm in SEO
0 Comments

I suffer from programmer's block, which I define as an affliction whereby I will write my own code/software to solve problems rather than use 3rd-party software.

Anyway, one such program I wrote was a PHP program that takes link data exported from a tool such as Majestic or Open Site Explorer and analyzes it to find the really juicy links, including the "co-citation links", i.e., the pages/domains that are doing most of the co-linking to the top pages. Pages with a lot of co- and cross-citation links usually end up being pretty decent linking opportunities and good link neighborhoods.

The data the program spits out is pretty good (it can also add weights to known domains or TLDs like .gov or .edu).
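To illustrate the core idea (a minimal sketch, not the actual tool; the domains, pages and weights below are made up), a co-citation count boils down to tallying how often each linking domain shows up across the backlink lists of your target pages:

    <?php
    // $backlinks maps each target page to the domains linking to it,
    // e.g. parsed from a Majestic or Open Site Explorer CSV export.
    $backlinks = array(
        'http://example.com/page-a' => array('site1.com', 'site2.edu', 'site3.com'),
        'http://example.com/page-b' => array('site1.com', 'site2.edu'),
        'http://example.com/page-c' => array('site2.edu', 'site4.gov'),
    );

    // Hypothetical TLD weights: favor .gov/.edu links.
    $tldWeights = array('gov' => 3.0, 'edu' => 2.0);

    $scores = array();
    foreach ($backlinks as $page => $domains) {
        foreach (array_unique($domains) as $domain) {
            $tld = substr(strrchr($domain, '.'), 1);
            $w = isset($tldWeights[$tld]) ? $tldWeights[$tld] : 1.0;
            $scores[$domain] = (isset($scores[$domain]) ? $scores[$domain] : 0) + $w;
        }
    }
    arsort($scores); // domains co-linking to the most target pages float to the top
    print_r($scores);
    ?>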

Soooooo... trying to figure out what to do with this software. I may provide the source code online for free, make a commercial solution where you can upload the raw data and get the final results out, or something else. Let me know what you think I should do. Contact me via email, Facebook, Twitter, etc., and let me know if you would like to have this tool available to you - or if you think it's derp.

Apple is Losing Their Luster - My Journey from Windows to Apple and Back Again

January 3 2014, 9:44pm in Technology
0 Comments

I started using Apple products full-time in 2003. My first product was a G3 MacBook laptop which I bought at the Apple store in Glendale Galleria mall in California. When I got home and took it out of the box, it didn't turn on. It just didn't work!

I remember thinking to myself "well, Apple is very different from Windows so perhaps I'm supposed to do something else besides just press the power button." But no! It was broken. I brought it back and they promptly exchanged the laptop with no questions. The replacement worked just fine.

For nearly 10 years I worked exclusively from OS X. It was a dream for developing because it married an elegant UI (User Interface) with the powerful BSD UNIX operating system under the hood. If you aren't aware, the majority of Web servers use some flavor of an *NIX OS, so developing code on an *NIX variant such as BSD makes deployment to a production environment a lot easier. Although these days this isn't as much of an issue, as I now carry a USB thumb-drive with a Virtual Machine replica of the production server so that I can run and test code on the VM from almost any machine when developing.

I swore that I would never use Windows again. But in 2012, after a year or so of back-and-forth between OS X and Windows, I finally went 100% back to Windows. The reason for the switch back is that, as I focused more on business, it became increasingly evident that there was absolutely no Mac alternative to the Microsoft Office applications on Windows. Microsoft's Entourage and Outlook on Mac are terrible, Word and Excel have dire performance, and the Apple suite of office software is so dumbed down that it is basically unusable for any serious business or analysis work.

I can recall using Boot Camp on my Mac to load Windows, to test the waters, so to speak. I loaded a few of Adobe's products onto Windows (Dreamweaver and Photoshop) and was blown away by how much faster they ran under Windows than under Mac. I tried some other programs that did the same or similar things to what I had been doing under OS X for so long, and almost everything was shockingly fast under Windows. That sealed the deal for me on moving back to Windows.

I have also owned iPhones, iPads and various other Apple products. But no longer. The only Apple product I have is an old aluminum MacBook from late 2008. It's in my closet acting as a file server. Once it dies, I will very likely be Apple-free. Somehow, Apple products have lost their magic for me. Some of that happened after the death of Steve Jobs, but in my opinion the seeds were planted well before then. For example, the Snow Leopard release was the first time with OS X where I felt I no longer loved the product.

Which brings me to my point: Apple have lost their luster, and it is showing! And I don't think I am the only one that thinks this way.

Look at this Google Trends chart of "Android" vs Apple. Android is clearly becoming more popular and Apple clearly peaked in 2012:

Bitcoin Mining - General Thoughts: It's Not Worth It

December 28 2013, 9:43pm in Random Stuff
0 Comments

Bitcoin. Bitcoin. Bitcoin. It's been everywhere lately. I first learned about Bitcoin (BTC) only about 6 months ago. So, very recently as compared to the age of Bitcoin, which has been around for a few years.

I did read up some on Bitcoin when I first heard about it. It didn't really appeal to me much then. It still doesn't. However, I do enjoy delving into things and learning about them, so I recently decided to do a little Bitcoin mining. Yeah, don't bother!

My understanding of Bitcoin mining is thus:

  1. There is an absolute hard-coded limit of 21,000,000 BTC (Bitcoins) that can ever be mined.
  2. Bitcoin "mining" occurs when Bitcoin transactions are "verified". All Bitcoin transactions are public record and can be analyzed in the "block chain". As transactions are verified, new Bitcoins are minted as a reward.

I have perhaps oversimplified; however, the above are the 2 key points. So back to my Bitcoin mining for a moment...

I thought that perhaps I might have enough computing power to mine at least a few bitcoins because I have about 7 different computers lying around doing nothing. Oh, how mistaken I was. My most powerful computer -- which I use for programming and day-trading -- has an Intel i7-3770K (a 3.5 GHz quad-core, 8-thread processor), 2 x NVIDIA GeForce GTX 650 graphics cards (powering 3-4 displays, depending on what I'm doing), and 16 GB of RAM. So, I thought perhaps this desktop might be powerful enough to at least make some dent in Bitcoin mining.

But, as it turns out, each of my GPU cards is only producing 30 MHash/s (according to the GUIMiner app), which is basically insignificant. In the 2 hours or so that my GPUs have been mining, I have generated 0.00000237 BTC. At the current market value of $759.92 per Bitcoin, according to Mt. Gox, I have mined $0.0018. That is less than one-fifth of a US cent. The electricity alone for powering the computer has surely cost me more!
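For anyone checking my math, here's the arithmetic, using the figures quoted above:

    <?php
    // Two hours of mining on two ~30 MHash/s cards, per GUIMiner
    $btcMined = 0.00000237;   // BTC generated in ~2 hours
    $btcPrice = 759.92;       // USD per BTC (Mt. Gox quote)
    printf("Mined: $%.4f\n", $btcMined * $btcPrice);  // Mined: $0.0018
    ?>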

But that's not what concerns me. What does concern me are the fundamentals of the Bitcoin currency. The entire integrity of the Bitcoin currency relies on the miners' ability to "verify" transactions, and the "rewards" for doing such are the newly minted/mined Bitcoins. If transactions are not verified, or if more than 50% of the entire computing power of the Bitcoin network is controlled by those with malicious intent, then the transactions could be fraudulent.

As the limit of 21,000,000 BTC approaches, it will become exponentially more expensive and difficult to mine new Bitcoins, which will remove much of the incentive to mine and thus much of the computing power from the system. As this computing power is removed from the ecosystem, the susceptibility to Bitcoin fraud increases.
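For the curious, the mechanism that enforces the 21,000,000 BTC cap is the block-reward halving schedule: the reward started at 50 BTC per block and halves every 210,000 blocks, so total issuance is a geometric series that converges to roughly 21 million. A quick sketch:

    <?php
    $reward = 50.0;   // initial block reward in BTC
    $total  = 0.0;
    while ($reward >= 0.00000001) {   // 1 satoshi, the smallest unit
        $total  += $reward * 210000;  // blocks mined at this reward level
        $reward /= 2;                 // the halving
    }
    printf("Total BTC ever minted: %.0f\n", $total);  // ~21,000,000
    ?>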

Of course, there are other factors that could bring the market system back into equilibrium, such as higher Bitcoin prices, reduced prices for computing power (Moore's Law), Bitcoin fractionation, etc. But something tells me that at some point these will no longer be enough because they also incentivize the malicious folk in the system.

Me? Even though I am happy to see alternatives to the ever-depreciating US Dollar and other fiat currencies, I am opting to stay away from Bitcoin entirely.

The Life of the Vanishing $1 - Taxes

December 26 2013, 5:06pm in Random Stuff
0 Comments

Taxation is a funny thing. In truth, all of the money in circulation will always find its way back to the government. It just happens to be temporarily in your pocket and/or bank account. But there has been this thought in my mind for quite a number of years that disturbs me.

Let's pretend that there is a very simple economic system where there is only $1 in existence and there is an income tax rate of 10%. Follow that $1 through its lifecycle. But instead of thinking about just the $1 as a piece of paper, you have to think about the value of the dollar.

OK, so imagine the following:

  • Person A pays $1 to Person B for some product or service.
  • 10% (10 cents) is paid as income tax; 90 cents remain.
  • Person B pays 90 cents to Person C for some product or service.
  • 10% (9 cents) of the transaction is paid as income tax; 81 cents remain.
  • Etc, etc. until there is almost nothing left of the $1.

    So, the government has collected essentially all of the $1 that was in existence to begin with. Now I ask you... what paid for all of the products and services? And how will future transactions be conducted considering all of the $1 is now gone?!
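To make the arithmetic concrete, here's a quick simulation of the thought experiment (a sketch, assuming each person spends the full after-tax amount):

    <?php
    $money = 1.00;            // the only dollar in existence
    $taxCollected = 0.0;
    $transactions = 0;
    while ($money > 0.01) {           // stop when less than a cent remains
        $tax = $money * 0.10;         // 10% income tax on each transaction
        $taxCollected += $tax;
        $money -= $tax;               // the next person receives the rest
        $transactions++;
    }
    printf("%d transactions, tax collected: $%.2f\n", $transactions, $taxCollected);
    // After ~44 transactions the government holds ~$0.99 of the original $1.
    ?>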

    Of course the government recirculates that $1 by spending their tax revenues and then it goes through the whole process again. So basically, the government gets stuff for free off the backs of the general public. And they get to do it over and over again with the same $1 through recirculation. There are words for this: repugnant, scandalous, leech.

Mobile-Phone Woes - AT&T and T-Mobile

    October 30 2013, 12:08pm in Random Stuff
    0 Comments

    On AT&T & T-Mobile: I love mobile phones. I strongly dislike mobile phone carriers.

Firstly, I should say that I was a T-Mobile customer and avid supporter up until mid-2012. I had been a T-Mobile customer for about 10 years at that point; however, the service at my home and in the general area degraded to the point that the phone essentially became unusable. After weeks of terrible call quality and dropped calls, I was forced to switch to AT&T.

    AT&T have fantastic coverage and call-quality. I was once again able to conduct business on my phone.

Skip forward a year, and I am starting to feel that the AT&T pricing is exorbitant. I can't get the bill under $140/month. Knowing that T-Mobile recently changed their tiers and pricing structure, I decided to bite the bullet and go back to trying T-Mobile. My thesis: that enough time had passed and surely T-Mobile would have stepped up their network quality; and that although I would have to pay AT&T an Early Termination Fee (ETF) of $225, I would be able to make this up over the course of a few months by way of the lower prices at T-Mobile.
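To put rough numbers on that thesis (the T-Mobile bill below is an assumed figure for illustration, not an actual quote):

    <?php
    $att     = 140;  // my AT&T bill, per month
    $tmobile = 90;   // assumed T-Mobile bill under the new tiers
    $etf     = 225;  // AT&T Early Termination Fee
    echo ceil($etf / ($att - $tmobile)), " months to break even\n";  // 5
    ?>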

    Terrible T-Mobile Service

    Well, I can imagine someone standing beside me at this point, saying "Thou hath shot thyself in thy foot".

    And how right they would be!

The T-Mobile service is atrocious, at least in my area (Boca Raton, Florida). At my house in particular, phone calls are plagued with crackles and popping sounds. The audio cuts out a lot, and the end result is that I generally end up hearing more sound effects than real conversation, and I often need to ask people to repeat themselves.

Most annoying is that sometimes my call won't even go through, or a call will drop mid-conversation and I then get to hear this beautiful feedback of me talking. It's literally everything I say echoed back with perfect quality a half-second to a second after I say it. I was mid-conversation the other day when suddenly, instead of hearing the other caller, I was hearing myself. So bizarre.

In the wider general area the call quality is somewhat better (though I still get the echo problem). The data service, however, is still very poor. Rarely do I get 4G LTE, and when I do, it often promptly drops back to EDGE - not 4G non-LTE or 3G, but EDGE. Like... the technology that was used for data 10 years ago. It's funny... sometimes I will pick up the phone, see "LTE" and be happy; then I try to browse the Web and it immediately drops to EDGE. I feel like the phone is lying to me. Bait and switch!

    With AT&T I never dropped back to EDGE. 3G, yes sometimes, but never EDGE. Data was always pretty solid. Not always the fastest, but certainly more consistently available.

    And I want to clarify that I was experiencing these issues with T-Mobile on 3 different devices, so it definitely was not a device issue.

    Move Back to AT&T

Still being within my 60-day grace period for moving back to AT&T, I decided to call them up to see what they could do for me to get back on their wagon. Well, not much... There aren't any discounts or breaks they can give. But after listening to their rep and asking him questions for about 10 minutes, I couldn't help but think how convoluted the whole pricing structure is:

    • 4GB Plan with Unlimited Talk/Text: $70/month
    • Each Phone on Plan: $40/month

Confused, I asked, "Wait, so basically it's like a big bucket of data and talk/text, but it can't be touched unless you pay the entry fee of $40 each?", to which he laughed and said "yes".

While I can understand this model from a business perspective, it just seems tacky and unintuitive. How about a set fee for the services plus one phone (there always has to be at least one phone to access it anyway, right?), and then an additional fee for subsequent devices. But no... because that would mean you would end up seeing a price of $110 instead of $70 + $40. And psychologically, there is a difference!

Hopefully someday we will have a reliable mobile carrier that has good coverage, isn't exorbitantly priced and doesn't make you feel like they are screwing you. In the meantime, I need to decide if I'm going back to AT&T.

    Dow Jones Correlations

    July 24 2013, 12:27am in Random Stuff
    0 Comments

I was bored this weekend, so I decided to write a software program to correlate data over extended periods of time. What I wanted to do was find the historical periods most strongly correlated with the most recent stock market activity, over a number of defined periods, e.g. 1, 2, 3, 4, 5 and 7 years. When I say "years" with regard to the stock market, I really mean calendar years of 365 days, which of course works out to only about 250 observations per year given weekends, holidays, etc. So keep that in mind and look at the dating on each chart for true observation dating.

The charts below show the DJIA (Dow Jones Industrial Average) matched to its most highly correlated periods in the past. These are not adjusted in any way; they are simply the historical periods with the highest correlations to the most recent Dow closings, which are then extended out by varying "extension periods" (the part where the orange line extends beyond the blue line).
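For the curious, the heart of such a program is just a Pearson correlation computed over a sliding window. This is a minimal sketch, not my actual program; it assumes $series is a chronological array of daily closes:

    <?php
    // Pearson correlation between two equal-length arrays.
    function pearson(array $x, array $y) {
        $n = count($x);
        $mx = array_sum($x) / $n;
        $my = array_sum($y) / $n;
        $num = $dx = $dy = 0.0;
        for ($i = 0; $i < $n; $i++) {
            $a = $x[$i] - $mx;
            $b = $y[$i] - $my;
            $num += $a * $b;
            $dx  += $a * $a;
            $dy  += $b * $b;
        }
        return $num / sqrt($dx * $dy);
    }

    // Slide a window of the same length as the recent period across history
    // and keep the offset with the highest correlation.
    function bestMatch(array $series, $window) {
        $recent = array_slice($series, -$window);   // most recent observations
        $best   = array('offset' => null, 'r' => -2);
        for ($o = 0; $o + $window < count($series) - $window; $o++) {
            $r = pearson(array_slice($series, $o, $window), $recent);
            if ($r > $best['r']) $best = array('offset' => $o, 'r' => $r);
        }
        return $best; // the historical window most correlated with today
    }
    ?>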

    I also ran this for US GDP, which I've shared below also. I also ran this for a number of other series and data points, but I shall keep those to myself for now :)

I shall leave the interpretation of these charts up to you. I have formed my conclusions and shall take positions in the market if and when the market appears to reveal them as being true.

    A couple of final notes:
    1. In all charts, the blue line is the most recent market activity, orange is the historical data.
    2. In all charts, the numbers on the X axis (bottom) are the number of observations.
    3. Historical data doesn't mean you can predict the future - but you know that from all the disclaimers you've read!... Right?

Dow Jones 1 Year Correlation

Dow Jones 2 Year Correlation

Dow Jones 3 Year Correlation

Dow Jones 4 Year Correlation

Dow Jones 5 Year Correlation

Dow Jones 7 Year Correlation

GDP 5 Year Correlation

    Government Treasuries: Market Leading Indicators or Definers?

    September 2 2011, 3:37pm in Random Stuff
    0 Comments

Treasury investors are frequently described as being smarter than equity investors because treasuries are generally good leading indicators, i.e. the price or yield on treasuries can sometimes be used to predict future economic activity, or at least the trend thereof.

    When the economic outlook appears to be bleak or risky, some investors "flee" to treasuries, which are essentially guaranteed loans to the government. This is called a "flight to safety", because the risk of losing money (in nominal terms, at least) is lower in treasuries than in equities or other investments.

However, one must be reminded of the "chicken and the egg" problem. As there is a flight of capital to "safe" treasuries, less investment and venture capital is available to functioning businesses and those wanting to start new businesses. More businesses will therefore fail, and many new businesses will never get started.

So, at least some of the weakened economy must be due to this flight of capital.

If you ask me, the US government should not be allowed to issue treasuries... With over $14.5 trillion of outstanding debt, the government has borrowed enough (on our behalf) and has absolutely no business borrowing any more!

    Eliminate treasuries and free up some capital for those that will use it more wisely!

    Reciprocal Links - Are They Good Or Bad?

    June 27 2011, 11:24am in SEO
    3 Comments

Ah... reciprocal links. This is definitely one of the more polarizing topics in the SEO world. Some SEOs will tell you that they are mega black-hat, punishable by instant death in search engines. Others will tell you that they are absolutely fine and encourage them. As with most things, reality lies somewhere between these two points of view.

    What is a "Reciprocal Link"?

In its simplest form, a reciprocal link is a link that points to a Website which in turn links back. So, if Site A links to Site B and Site B links back to Site A, the links are called "reciprocal links".

    In contrast, if Site A links to Site B and Site B does not link back, then the link is called a "one-way link".

    So, What's Up With Reciprocal Links?

    Here's the truth: Only do reciprocal links when the two sites are of high quality and when the link is of value to the Website visitors. If you follow this principle, you shouldn't have any problems.

    For example, if you own a Website that sells bicycle equipment, it might make sense for you to link to a bicycle tire manufacturer's Website. Conversely, it might make sense for them to link back to you, if you sell their tires for example. Something like this is absolutely fine and passes the smell test.

    However, if your bicycle Website links to a casino or prescription drug Website and they link back, something about this smells fishy and Google would probably look at that very closely. If you have multiple links like this, you are almost certainly in trouble.

    But... Don't Forget About Link Farms

Site-to-site reciprocal links may be fine if the site quality is high and the sites are in some way related; however, one thing you definitely should avoid is getting involved in link farms or complex linking schemes. For example, if you own 10 sites and you link them all together in a daisy-chain fashion, or if you have all of them reciprocating links with each other (even if they are related), then Google may think that you are trying to manipulate rankings, and again you could find yourself in deep trouble.

    Summary

    Remember: Only link to high quality sites that are related to yours, especially if the link is going to be reciprocated. And don't get involved with daisy-chain linking or cross-linking sites.

    Google's Strategic Realignment - Closing Google Health & DoubleClick Ad Planner Marketplace

    June 25 2011, 2:07pm in Google
    0 Comments

In recent years, Google have diversified their portfolio of products and services in many ways. Just think of all of the different labels that fall under the Google brand.

That's a LOT of stuff. And there's a lot of other stuff they have going on also. So it's no surprise to learn that Google is closing two of their services: Google Health and the DoubleClick Ad Planner Marketplace.

The question is... Are Google going through a strategic realignment of the business? They have strayed very far from their core competencies with all of these add-on products and services. Some are very noble and interesting, but many are complete failures in market-uptake terms. Focusing on too many things makes it very easy to become scatter-brained and very difficult to achieve success in any one area.

Now that Larry Page is the CEO after taking the helm from Eric Schmidt, it will be interesting to see if and how Google's direction changes. Of course, Google have a phenomenal brain-trust. I can't wait to see how they use it.

    Identifying When Your Site Ranks, but You Don't Get the Click

    November 10 2010, 11:57am in Google
    3 Comments

Yesterday I posted a video showing a "prefetch" hit coming from the Google SERPs. The bad news is that this turns out to be nothing new; I thought it may have been related to the recent page previews feature - it wasn't.

    The good news is that there's an opportunity here, which I don't think has been discussed before.

    To summarize: When you use the integrated search box in Firefox (top right) to search Google, Google will sometimes use a rel=prefetch attribute on the first result link. This causes Firefox to automatically download the HTML source code of the first result page. Along with the request to download the page, the referrer string is sent. This is great, because we have a hit to our server without the visitor ever clicking on our result.

    So, what is the opportunity and how can we take advantage of it?

    Simple. Because we have a hit directly from a Google SERP (along with the referrer string), we can identify searches where our site is ranking. If we don't get the click, then we know we have a problem.

    So, let's go through this quickly so we can all get back to work.

    In the Apache HTTP server, tell the server that you want to track only prefetch hits to a separate log file:

    # Set an environment variable when the "X-moz: prefetch" header is present
    SetEnvIf X-moz prefetch Prefetch_Request
    # Log the request to a separate file only when that variable is set
    CustomLog /path/to/prefetch_log combined env=Prefetch_Request
    

Then restart the server. This will log any requests that come to the server with an X-moz: prefetch header to the /path/to/prefetch_log log file.

    You can look through the data in this log file to determine where you're showing up in SERPs. If you compare this data to your normal log file, you will be able to identify where you don't get the clicks from the Google SERP.
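Here's a minimal sketch of that comparison in PHP. It assumes combined-format logs and Google referrers that still carry a q= parameter; the extractQuery helper and the log paths are illustrative, not part of any standard library:

    <?php
    // Pull the Google query (q=) out of a combined-format log line.
    // The referrer is the second quoted field in a combined log entry.
    function extractQuery($line) {
        if (!preg_match('/"[^"]*" \d+ \S+ "([^"]*)"/', $line, $m)) return null;
        $qs = parse_url($m[1], PHP_URL_QUERY);
        if (!$qs) return null;
        parse_str($qs, $params);
        return isset($params['q']) ? strtolower(trim($params['q'])) : null;
    }

    function queriesFromLog($path) {
        $queries = array();
        foreach (file($path) as $line) {
            $q = extractQuery($line);
            if ($q !== null) $queries[$q] = true;
        }
        return $queries;
    }

    $prefetched = queriesFromLog('/path/to/prefetch_log'); // searches where we ranked first
    $clicked    = queriesFromLog('/path/to/access_log');   // searches that sent a real visit

    // Queries where we were prefetched (i.e., ranked) but never got the click.
    foreach (array_diff_key($prefetched, $clicked) as $q => $ignored) {
        echo $q, "\n";
    }
    ?>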

What you do with this information is up to you. I know what I would do - optimize the title and meta description/snippet to maximize the click-through rate.

    VIDEO: SEO - Google Log Files Phantom Hits - Nov 9 2010

    November 9 2010, 4:04pm in Google
    0 Comments

I found an odd issue with Google and Firefox today. I noticed that on some occasions, the first result on the SERP was GET'd by Firefox, despite the fact that I didn't visit the site. The cache was cleared, and it only happens when I use the Firefox search bar.

    Video:

(In listening to it myself again, I see that I messed up my lefts/rights when talking about the windows... will be sure to pay more attention to that in the future :) )

Postscript: After some more digging, it appears that it is actually Google that is causing the prefetch. They do it by using the rel=prefetch attribute on links they want prefetched, and Firefox then goes and gets them.

That means that hits from Google could be inflated. For those with much higher traffic, the margin of error could be magnified.

    In Cache: An Awesome Idea For Google and Other Search Engines

    November 9 2010, 1:35pm in Random Stuff
    3 Comments

As I was looking at the Google cache of a page, I noticed that the layout was a bit weird. The issue was that the cache date was Nov. 1 and the site had undergone updates on the 3rd or so. The updated external CSS files weren't playing well with the old cached page.

So I hit refresh and noticed that the cache date was now Nov. 4, and the page looked fine, as the page from Nov. 4 was designed for the updated external CSS files. So I hit refresh a few more times and noticed I was able to randomly toggle between two different versions: the cached page from Nov. 1 and the page from Nov. 4. So, Google obviously stores its cache in various different places. This gave me an idea.

It has been said before that Google has kept copies of all of the different indices it has ever created. If this means what I think it means, then they should have all of the different cached copies of every URL that Google has ever crawled. OK, so maybe you know where I'm going with this, but keep reading anyway...

    I'd like to introduce you to... Google "Versions" (or possibly Google "Timeport"). I'm going to use Google as the example search engine in this case. I'm sorry Bing - This could equally apply to you, but I spend most of my day worrying about Google.

    Google Versions Logo

    Imagine this:

    1. A user submits a search query.
    2. The standard SERP is produced.

    BUT... each result has an extra link called "Versions" beside the standard "Cached" and "Similar" links.

    Google Versions Search Engine Results Page (SERP)

    When you click on this "Versions" link, you are presented with a list of the dates for which Google has a cached version of the page. You click on a date and get the cached version of the page on that date.

    Google Versions Cache Dates

    Yes, this is basically the concept of archive.org aka the "Wayback Machine", except that archive.org does a relatively bad job of crawling pages often enough for it to be useful. I say "relatively" because they obviously don't have the resources of a company such as Google or Microsoft. So perhaps archive.org does a fantastic job given their resources, but they're terrible when compared to either of the aforementioned companies.

There are a couple of problems with the idea of Google Versions. First, Google right now caches only the HTML. All of the inline/embedded elements in the cached code, such as CSS, JavaScript, images, etc., are still referenced from the original URI of the page. So, if you were to view an old cached version of a page and some object that is referenced from within that page has since been removed from the server - or even just modified on the server - then the page will likely be broken to some degree or another.

To work around this problem, Google would have to synchronously cache the page itself and all of the objects referenced therein. To my knowledge, the Google cache system simply does not work this way at this time, and it would probably require a rewrite. We know that Google crawl images and also CSS and JavaScript files, but I don't know the extent to which any of these are cached, or whether they are cached synchronously with their parent pages.

    But, archive.org has synchronously cached pages and their associated objects for years, so presumably Google could do it also if they were so inclined.

Another problem is the potential copyright issues, but I don't see this really being a hurdle - especially in the US. The robots.txt file is the de facto standard for exclusion from search engines. A separate User-Agent string could be used for Google Versions, e.g. "VersionsBot". The meta robots noarchive tag should also prevent a page from being indexed in the Versions archive.

    It's an interesting idea that I would like to see Google or Bing introduce. It's certainly in line with Google's mission statement to "organize the world's information and make it universally accessible and useful."

    What are your thoughts?

    A Quick Refresher on SEO and 301/302 HTTP Redirects

    November 5 2010, 12:45pm in SEO
    2 Comments

    By Darrin J. Ward

    I'll preface this refresher on 301 and 302 HTTP redirects by saying that we always strive to plan the layout of Websites so that we will never need to move or rename pages. You should try to do the same!

    However, sometimes it's unavoidable and pages need to be moved or removed entirely. When that happens, it's very important that the right strategy be used to "redirect" the page from the old URL to the new URL. There are two viable options for doing page redirects: a 301 redirect or a 302 redirect. The 301/302 number refers to the "status code", which is sent by the Web server to robots/crawlers/browsers, informing them of what action is being taken.

    A 301 redirect code indicates that the move is PERMANENT. This type of redirect should be used when you know that the page WILL NOT move back to its original location.

    A 302 redirect code indicates that the move is TEMPORARY. This type of redirect should be used when you know that the page WILL eventually move back to its original location, or somewhere else.

    In the vast majority of cases, it's the 301 redirect that you will want to use. The 301 redirect passes most of the PageRank and "link juice" from the old page address to the new page address, which is exactly what you want to help maintain rankings and PageRank. I will point out that there is going to be some loss due to the 301 redirect (our internal research suggests anywhere from 10-25% is lost), which is why it's best to avoid redirects altogether, where possible.

    302 redirects do not maintain the PageRank and link juice in the same way.

    Implementing 301 and 302 Redirects

    The implementation of these redirects is usually done at either the server level or in the programming code. Here are some samples.

    301 Redirect in PHP

    I wrote the following simple function to allow me to perform 301 redirects:

    function Redirect301($GoTo) {
        header("HTTP/1.1 301 Moved Permanently"); // send the 301 status line
        header("Location: $GoTo");                // point to the new URL
        exit; // stop execution so nothing else is sent after the redirect
    }

This function can be called in your PHP code like this:

    Redirect301("http://www.example.com/new-url/");

    302 Redirect in PHP

PHP doesn't actually have a built-in redirect() function; instead, the header() function with a Location header performs a 302 redirect by default. You can do a 302 redirect like this:

    redirect("http://www.example.com/new-url/");

    301 Redirect in Apache using .htaccess or httpd.conf

    Using the HTTP server itself is a popular way to perform redirects. Because I use Apache most of the time, I'll limit the discussion to the .htaccess and httpd.conf methods for Apache, and leave IIS and other servers alone for now. The easiest way to perform a 301 redirect is to use the Redirect directive, which can be used like this in .htaccess or httpd.conf:

    Redirect permanent /old-page.htm /new-page.htm

    302 Redirect in Apache using .htaccess or httpd.conf

302 redirects also use the Redirect directive, but without the permanent keyword, i.e.:

    Redirect /old-page.htm /new-page.htm

    Testing Redirect Response Codes

You should always check that the server is sending the correct redirect response code. Don't just assume that your redirects are working correctly. There are plenty of tools for testing HTTP redirects (and server headers in general).
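You can also do a quick sanity check from PHP itself using the built-in get_headers() function (the URLs here are illustrative):

    <?php
    // Fetch the headers and confirm the status line says 301 or 302 as intended.
    $headers = get_headers('http://www.example.com/old-page.htm');
    echo $headers[0], "\n";   // e.g. "HTTP/1.1 301 Moved Permanently"
    foreach ($headers as $h) {
        if (stripos($h, 'Location:') === 0) echo $h, "\n"; // where it redirects to
    }
    ?>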

    Website Loading Times and Google's Apache mod_pagespeed

    November 4 2010, 8:00pm in Google
    1 Comments

    By Darrin J. Ward

    Hi, my name is Darrin Ward and I'm addicted to speed. Speedy Websites, that is.

    I've always been addicted to making the Internet go as fast as possible. I distinctly remember playing around with the physical positioning of my first 28.8k modem to see if moving it away from the electrical / magnetic interference of the computer and monitor would speed up the Internet, or if tinkering with all kinds of other software and hardware settings sped it up any.

In fact, I love speed so much that sometimes it could actually be considered a flaw. If you've ever seen a Website that I personally have designed, then you'll know exactly what I mean. Although the Darrin Ward design team does fantastic designs, all of my personal designs tend to be stripped down to almost text-only sites, because I want things to load as fast as possible, and graphics and rich media only slow things down.

    It turns out that we've found a great balance in the team; My team comes up with visually stunning designs, and then I go to work on making the site load stunningly fast, with minimal sacrifice to the original design. It's a win-win.

Anyway. I also happen to practically live inside of Apache httpd.conf and .htaccess files (the configuration files for the Apache Web Server). So when I recently heard that Google released an Apache module called mod_pagespeed, I was ecstatic.

The Apache httpd module is for version 2.2 and higher, and it takes care of some of the items that Google's Page Speed Firefox plug-in addresses.

    I haven't had time to experiment with mod_pagespeed yet, but it will be interesting to see how it holds up under high load. Because of our success with SEO and Internet Marketing, some of the sites we manage get a LOT of traffic (top site peaks at about 2,000 hits per second!). The other issue is that it's not yet listed as compatible with FreeBSD, which is my server OS of choice (I love the HTTP accept filter), but I'm sure it can be made to work.

    The long and the short of it is this... Here at this SEO company, we are 100% committed to making Websites fast because we know it's important for visitors, and because it helps Google rankings now that page loading speed is part of the Google algorithm. We're happy to see that Google is providing Webmasters with tools to help make the Web faster (and save bandwidth).

    Google Again Stops Passing Keywords in Referring String

    July 14 2010, 11:58am in Google
    0 Comments

    Every now and then we are reminded how much we rely on Google.

Over the past couple of years, Google has done some experimentation with AJAX search engine results pages. In doing so, such experiments have broken the keyword-tracking functionality of analytics tools because a full set of referrer data was not being sent (anything after the # in a URL is not part of the referrer string). Once Google were made aware of the issue, they rolled back and things went back to normal, where analytics packages could continue tracking referring keywords.

    The visual change of the AJAX implementation was unnoticeable to an end-user, but the impact for Webmasters was immense. Earlier this week, it was noticed that Google was again not passing referrer strings. Matt Cutts of Google commented to let us know that the changes were an error on Google's part due to repurposing of old code. However, ever since the first implementation of the AJAX results pages, my Firefox installation on my Mac continues to show me AJAX results pages, not the regular ones. So, I have no doubt that there is still some Google traffic out there that is not sending referrers/keywords correctly.

    It's just another reminder that if Google did want to make this change and not listen to us, they could really cause a big nightmare for all of us SEOs out there!

    Google Penalty for No rel=nofollow on Affiliate Links

    September 24 2009, 10:11am in SEO
    6 Comments

    Barry Schwartz over at the Search Engine Roundtable reminds us today that you should use rel=nofollow on your affiliate links, or else you may receive a Google penalty.

The inherent illogic of stuff like this makes my blood boil sometimes... Why would/does Google prevent content from ranking just because links to affiliates or other sites do not use rel=nofollow? Either the content on the page is useful and it deserves to rank, or it doesn't. I don't see why links lacking rel=nofollow alone should be a determining factor in that decision. Using rel=nofollow is a technicality.

If Google determines that the links on a page are against their paid-linking policy, then they should just discount any "link juice" that might get passed on from them. That's something they could do transparently in the background, without having to force Webmasters to consider this ridiculous rel=nofollow tag, and without having to deprive searchers of valuable content (assuming Google otherwise determined it to be valuable except for the non-rel=nofollow affiliate links.)

    Alas, the Google insidiousness continues, and we continue to begrudgingly comply forthwith so that we may get some rankings love! Although the whole thing does remind me of the pied piper sometimes :)

    Why Doesn't Google Have a Dictionary? Still Link to Answers.com

    September 21 2009, 10:34am in Google
    1 Comments

    I sometimes use Google as a dictionary replacement, as I suspect a lot of people do. I search for the word on Google and then click on the "definition" link beside the word in the horizontal blue information bar. Google links to answers.com, which gives the definition:

    Google's Answers.com Definition Link

    What I don't understand is why Google hasn't licensed the content from the Oxford dictionary or some other dictionary and made their own dictionary function. Probably a better idea would be to license the content from multiple dictionaries to make sure they have all of the right definition variants, including those that are regionally specific.

    Granted, Google does have the "define:keyword" operator that attempts to define words by scraping content from pages across the Web. But, anyone that has experimented with this function to any degree will tell you that it can be horridly inaccurate. I've often seen it pull definitions from adjacent words on pages, yielding a completely irrelevant definition.

    It should be noted that, according to compete.com, Google is responsible for 61.19% of answers.com's traffic:

    Answers.com Referrals

    It's not clear how much of this comes from the definition links and how much comes from regular organic listings. Either way, that's a pretty significant share of the traffic.

    Hey, Google... If I set up a dictionary site, will you link to me instead?!

    Google Stepping Up to Counter Bing's Growth?

    September 18 2009, 11:01am in Search News
    0 Comments

    First, sorry for the lapses between blog posts. The unfortunate reality is that the blog does take something of a backseat versus servicing clients, business development and doing all of the interesting things that are going on with the De Ward Group right now.

    Is it just me, or has Google really stepped up their efforts over the last few weeks and months? They've made many changes to their services, from minor UI tweaks, experimenting with different ad formats and organic listing formats, right up to introducing entirely new products such as FastFlip in the Google Labs.

Granted, Google has always been experimental, but one has to wonder if perhaps they are pushing things a bit harder now to counter the fact that Microsoft's search solution - now known as Bing - has crossed 10% market share for the first time in as far back as anyone cares to remember. These Nielsen ratings only came out the other day, but it was obvious that Bing had been gaining some traction. And knowing that Microsoft may very well soon accrue a ton of traffic from Yahoo - assuming the deal gets regulatory approval - can only be increasing the pressure on Google.

These are very interesting times for us here in the world of search engines. I very nearly fully exited the SEM industry back in 2003 when I sold the SEOChat.com company. Even though I did make a conscious decision to become a less public figure, I'm very glad that I chose to stay within the industry. We've got one hell of a roller coaster ride ahead of us in the coming 2 years, and I would hate to miss it.

    Bing still have a long way to go. But as much as I love Google, I have to hope that Bing can step up to the challenge and give Google a run for their money, because competition is good for you, me and everybody.

    One-Way Folder Syncing: Mac to Blackberry Folder Sync

    August 27 2009, 5:55pm in Random Stuff
    0 Comments

I use a BlackBerry 8820. I've got an iPhone, used to have a Sony Ericsson Xperia X1 (Windows Mobile) and I have tried a plethora of other phones (including other BlackBerrys), but the BlackBerry 8820 is the one for me.

However, one thing that used to irritate me about the phone was that it wasn't very easy to sync my iTunes music and podcasts and some business documents from my Mac to the SD card in my BlackBerry. So I wrote a simple shell script that takes care of those things for me. The script I show here should also work with other BlackBerrys.

    The script uses rsync to overwrite folders on my BlackBerry with folders on my Mac (like my iTunes folder). I saved the following code into a file named bb-sync.sh in my ~/ folder:

    #!/bin/sh
    # One-way sync: each local folder/file overwrites its counterpart on the BlackBerry SD card
    rsync -u -v -I -r --delete "/Users/DWard/Music/iTunes/iTunes Music/Podcasts/" "/Volumes/BB/iTunes/Podcasts/" &&
    rsync -u -v -I -r --delete "/Users/DWard/Desktop/WalkMusic/" "/Volumes/BB/iTunes/Music/" &&
    rsync -u -v -I -r --delete "/Users/DWard/Documents/Passwords.kdb" "/Volumes/BB/Documents/Passwords.kdb"

    The formatting of the above code may look weird owing to linebreaks, so you can also Download bb-sync.sh. (REMOVED)

The script is 3 separate rsync commands because I am syncing 3 locations. On each line, the first file or folder referenced is local on my Mac and overwrites the second stated file/folder, which is on my BlackBerry SD card (they all start with "/Volumes/BB/").

My BlackBerry SD card mounts as a volume named "BB". Yours will probably mount as something else, but you can check by using "cd /Volumes/" in Terminal when your device is connected to see what name it uses when it mounts. You may need to plug it in and out to see the differences between the mounted/unmounted states. The volume name will probably also show up on the OS X Desktop as a drive when your BlackBerry SD card mounts. Replace BB with the name of your BlackBerry SD card volume, and change the directories that you want to sync.

When my BlackBerry SD card mounts, I sync by opening up Terminal and typing "sh bb-sync.sh", and it prints out a report of the files it's deleting and the new files it's uploading.

Two last things: 1) this technique will work for any mounted volume; it's not specific to BlackBerry; and 2) you can fiddle with the rsync flags and options to get a two-way sync, or some other functionality. But I'm not going to bother with that. See the man page for rsync if you want to do something other than what I have described here.

    Now if you'll excuse me... I am going to listen to some recently synced podcasts on my BlackBerry while I go for my evening walk. :)

    Google Changes Homepage: Preferences Now Search Settings

    August 27 2009, 11:13am in Google
    11 Comments

I change the results-per-page setting quite a lot, and I just noticed that the "Preferences" link on the Google homepage that I normally use to change it is missing. Instead, it is now located in a "Settings" drop-down menu at the top, named "Search Settings".

    Before, the "Preferences" link was to the right of the search box, I believe under "Language Tools". In the words of Stewie Griffin... "I don't like change".

    Take a peek:

    Google Preferences Link Moved to Search Settings

Edit: The "Settings" link at the top will be a drop-down including both "Search Settings" and "Account Settings" if one is logged into an account. If one is not logged into an account, the "Settings" link will change to read "Search Settings", and it will go directly to the settings/preferences page.

    Google Showing Sitelinks for AdWords Sponsored Links

    August 25 2009, 6:10pm in Google
    5 Comments

    I'm not sure if this is a new thing or not, but I just searched on Google for "staples.com" and I noticed that the Staples paid listing at the top was sporting sitelinks... something that I don't recall seeing before. Upon refresh, they were gone.

    Take a look:

    Google Sponsored Link With Sitelinks

    Google Street-View, StoreFront Barcodes & Extended Store Details

    July 27 2009, 6:01pm in Fun Stuff
    1 Comments

The BBC has a cool article and video today on the potential future of barcodes, called Bokodes. Bokodes are basically very small but versatile version-2.0 barcodes that can hold lots more information than traditional barcodes.

What's very interesting, however, is the mention of integration with Google Street View, where small bokodes posted on storefronts could be read by the Google Street View cameras as they go by. The bokodes could contain information such as menus, hours of operation, etc. This is pretty amazing stuff and it really is a glimpse into the future. This is really taking digitization to the next level and I wholly encourage you to check it out!

    BBC Article: Barcode replacement shown off.

    West Coast Airfares Rising Faster than East Coast Airfares?

    July 27 2009, 1:32pm in Random Stuff
    0 Comments

I absolutely loathe flying. I'm not scared of flying (in fact I find takeoff and landing to be quite exciting); rather, I just find the whole experience of public air travel to be utterly deplorable and, frankly, disgusting! Airports are congested, people on planes have zero personal hygiene, and the shakedown at airport security is an invasion of my personal space that I'd rather not endure, which is why I only fly when it's absolutely essential.

However, there are a couple of upcoming projects for which I may have to travel, so I was recently looking at some airfares, which is why I was very interested to see that the Bing Travel blog has an interesting recent post about airfares rising faster on the west coast vs. the east coast. Cross-country fares have also risen by a whopping 23% over a 4-week period.

    Anyway, I just thought it was interesting enough to share. Personally, I think I'm just going to line up as many conference calls as I can over a 2 day period and drive where I need to go rather than fly. That way I can still get work done and I don't have to fly. Unfortunately, I was hoping to make an international trip and I may just have to concede and fly, because I can't drive and I can't afford to charter a large yacht (though I would if I could before flying).

    Bing / Microsoft Ramp Up Usage of msnbot/2.0b

    July 20 2009, 12:03pm in Search News
    0 Comments

    Rick DeJarnette posted on the Bing blog on Friday that we should expect to see an increasing number of visits from msnbot/2.0b, a second (or third?) generation of the crawler used to power Bing.

This really isn't all that helpful to us Webmasters. In all reality, if they didn't tell us that they were testing a new crawler and didn't change the UA string (1.1 to 2.0b), we probably wouldn't have ever noticed.

Why do they still call it msnbot, though? Shouldn't it be BingBot? I wonder if Microsoft has an identity crisis.

    Facebook Consumes Most of Americans' Online Time

    July 16 2009, 12:16pm in
    0 Comments

PC World underscores Nielsen's report this week that Americans spend more of their time on Facebook than on any other Website. Quite an achievement for Facebook, but the really interesting part of the Facebook phenomenon is that they are penetrating the age-55+ market more successfully than other social media sites.

Facebook is a great site and I used to be an avid status updater, but for me it became boring pretty quickly. There has been a lot of recent talk about Facebook's direction and whether or not they can successfully become profitable (including a recent comment by Facebook board member Marc Andreessen stating that the site would be posting billions of dollars in revenue in the coming years).

It will be really interesting to see how things pan out, but I can't help but think that all of this is just hype. Rather few social sites have generated anything near their lofty expectations.

How Important is an ODP/DMOZ Link for SEO?

    July 6 2009, 2:33pm in SEO
    5 Comments

    If you've been in the Internet Marketing industry for any length of time, then you will have heard of the "ODP" or "DMOZ", the Open Directory Project that resides at www.dmoz.org. The ODP is a large general Web directory edited by volunteers. And for years it was considered almost the holy grail for inbound link developers. Some still consider it to be so.

    A member at WebmasterWorld asks "Is DMOZ still relevant in 2009?". The responses are interesting.

As part of our SEO campaigns, we do perform directory submissions to a select number of top-tier general directories and a small number of niche directories (the number depends on the niche). The ODP is still in the top 3 of our most desired general-directory link acquisition targets. But it's certainly not a holy grail of any sort.

The ODP certainly has its problems. It's very slow to get anything listed in the ODP due to the lack of editors/volunteers as compared to the volume of submissions they receive. Internet users seem to be tending away from directory-type Websites and converging on social/search-type sites. And the ODP hasn't done anything even remotely innovative in years (in fact, I don't know if they've ever done anything innovative.)

But the ODP still gets used in countless places across the Web, so a listing/link in the ODP inherently means links from many other places. The ODP link itself probably carries more weight than all of the subsequent links combined, but they're all still a positive.

    Yep - for me submitting to the ODP is still relevant in 2009. Not as much as it used to be, certainly. But it's still relevant. I do however recommend that you read my insights on submitting to directories for SEO.

    Putting multiple Lat/Long Points on a Google Map

    June 26 2009, 5:52pm in Fun Stuff
    14 Comments

    Latitude Longitude Coordinates / Points Mapping Tool

    I was having a tough time finding a tool to map lat / long coordinates for a project. I needed to just copy and paste a bunch of coordinates and have points show up on a map, but I couldn't find a tool to do that, so I put a simple lat/long point mapping tool together.

Simply paste your geo-coded coordinates into the text box and hit submit. The page will reload and your points will be mapped. However, you must be sure to provide the latitude and longitude coordinates in the right format (this was just for internal use, so I didn't bother with formatting or error checking). The right format is ONE lat,long combination per line: separate the lat and long with a comma, and use no spaces.
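For example, input with three (made-up) coordinate pairs would look like this:

    34.0522,-118.2437
    40.7128,-74.0060
    26.3683,-80.1289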

    Latitude Longitude Coordinates / Points Mapping Tool.

    Changing from 'Remember Me on this Computer' to 'Stay signed in'

    June 24 2009, 9:12pm in Google
    1 Comments

    Here's a very small but interesting change. Google has changed the text label beside the checkbox on the Google account login form that keeps the user signed into their Google account. It's changed from "Remember me on this computer" to "Stay signed in".

I wonder if the previous label was confusing people. Good to see they're experimenting with little usability things.

    Here's how the login form looks now:

    New Google Account Login Form Checkbox

    And here's what it used to look like:

    Old Google Account Login Form Checkbox

    Start your persistent cookies, get set. GO!

    Reports of New Google PageRank (PR) Update Already (June 2009)

    June 24 2009, 4:08am in Google
    0 Comments

A thread over at WebmasterWorld has some reports of a PageRank (PR) update going on, with many people seeing new PR values just one month after Google did their last PR update.

It will be interesting to see how this plays out during the course of the day... Is this an incremental update or a full PR update? Are Google returning to their monthly PR update cycle from many years ago? It's not impossible... as more new content is generated, people want to see the PR values for those pages. Leaving PR updates on cycles of 6 months leaves huge gaps in the number of pages for which Google has no PR values. That can reflect poorly on Google.

    Updated SEOmoz SEO Best Practices / Policies

    June 23 2009, 2:23pm in SEO
    0 Comments

    SEOmoz has published some updated SEO best practices guidelines. The guidelines are apparently based on "correlation data", which means that they looked at rankings and analyzed the different components on the ranking pages.

    The list of SEO best practice items gives recommendations for:

    • Title Tag Format
    • The Usefulness of H1 Tags
    • The Usefulness of Nofollow
    • The Usefulness of the Canonical Tag
    • The Use of Alt text with Images
    • The Use of the Meta Keywords tag
    • The Use of Parameter Driven URLs
    • The Usefulness of Footer Links
    • The Use of Javascript and Flash on Websites
    • The Use of 301 Redirects
    • Blocking pages from Search Engines
• Google SearchWiki's Effect on Rankings
• The Effect of Negative Links from "Bad Link Neighborhoods"
    • The Importance of Traffic on Rankings

    This is great stuff, but as with everything in the "SEO" world, it needs to be taken with a pinch of salt. Each element that gets analyzed essentially introduces another unknown variable into a simultaneous equation.

One of the most interesting items is that H1 tags have been reduced to having nearly no importance in search engines. What I'm wondering is whether or not SEOmoz also looked at the CSS styling of the H1s to determine if H1s styled with a smaller font carry less weight, or if the reduced importance of the H1 applies across the board. We know that Google look at CSS and JavaScript.