There have long been arguments about whether page code needs to be 'correct' to ensure
a good ranking with the search engines.
Some say that getting page code validated to W3C standards is a total waste of
time, and while they are technically correct, they entirely miss the point. The search
engines do not require code to be 'correct'. There's no part of Google's algo
that deducts marks for using bold instead of strong, etc. That's been said umpteen times.
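For example (an illustrative snippet, not anything from Google's documentation): both of these render as bold text in a browser, but only the second tells a machine that the phrase matters:

    <!-- presentational: just says "make this bold" -->
    <p>Our widgets are <b>half price</b> this week.</p>

    <!-- semantic: says "this phrase is important" -->
    <p>Our widgets are <strong>half price</strong> this week.</p>

No known ranking penalty hangs on that choice, which is exactly the point the validation purists make.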
But we do not know all the factors the SEs use, or how they are prioritised,
and poor code may deny the SEs ranking information they need, send them confusing
signals, waste their time and so on. To a spider, a sloppily-coded site may be like
wading through mud. It may lose concentration - or give up and go home.
So it is entirely
possible that a poorly-coded site will not do so well.
It isn't possible
to make a blanket statement, as we just don't know exactly what combination of
problems can damage a site, but 'sick site syndrome' can involve code
bloat, poor navigation, over-optimisation, a poor ratio of unique content to
total page code, and many other code-related issues.
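To make 'bloat' concrete, here's a hypothetical before-and-after (made-up markup, not from any real site):

    <!-- bloated: legacy table-and-font wrapping around one line of content -->
    <table border="0" cellpadding="0" cellspacing="0" width="100%">
      <tr>
        <td align="center">
          <font face="Arial" size="2"><b>Welcome to the Acme widget shop.</b></font>
        </td>
      </tr>
    </table>

    <!-- lean: the same content, and a far better unique-to-total ratio -->
    <h1>Welcome to the Acme widget shop.</h1>

A spider gets the same single line of text either way; in the first case it has to wade through ten lines of wrapper to find it.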
The effects can include duplicate-content issues, less
depth to the spidering, supplemental listings, poor rankings and so on.
It may be simple things. For example: a hand-coded page usually has a meta description,
if only because it doesn't occur to the writer not to include one. A CMS page may have
no meta description at all, or the same one as 10,000 other pages. Result? A supplemental listing, or worse.
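For instance (hypothetical pages - 'Acme' and the descriptions are made up):

    <!-- hand-coded page: a unique, page-specific description -->
    <head>
      <title>Blue Widgets - Acme Widgets</title>
      <meta name="description"
            content="Blue widgets in all sizes, with prices and delivery options.">
    </head>

    <!-- badly set-up CMS: the same boilerplate on every one of 10,000 pages -->
    <head>
      <title>Acme Widgets</title>
      <meta name="description" content="Welcome to our website.">
    </head>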
I'm generalising again; it's not an area where you can ever be specific without
looking at the site. It's likely not the CMS that's wrong, but the way the CMS is set up
- or the way its output isn't checked for errors!
So while perfect code is not required,
sloppy code is asking for trouble. And machines often generate sloppy code.