URL



URLs are another important criterion of on-page optimization. They uniquely locate a page on a website and can contain valuable information about the content they represent. Search engines evaluate the text in a URL; this can be seen, for example, in the fact that search terms contained in the URL are shown in bold in the SERPs. The keywords for which the page is supposed to rank should therefore also appear in the URL. In practice it has become common to use wording similar to the title, but stop words such as "and", "of", "the", etc. should be avoided. The general rule here is: as short as possible and as long as necessary.
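As a rough illustration of this rule, the following Python sketch (a hypothetical helper, not part of any particular CMS) derives a keyword-focused URL path from a page title by dropping common stop words and joining the remaining words with hyphens:

import re

# Assumed, deliberately incomplete stop word list
STOP_WORDS = {"and", "of", "the", "a", "an", "in", "for"}

def title_to_url_path(title):
    # Lowercase the title, keep only letters and digits, drop stop words
    words = re.findall(r"[a-z0-9]+", title.lower())
    keywords = [w for w in words if w not in STOP_WORDS]
    return "/article/" + "-".join(keywords)

print(title_to_url_path("The Basics of Search Engine Optimization"))
# -> /article/basics-search-engine-optimization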

Separators in URLs


In contrast to the title, however, not all characters can be used in URLs. Only the characters defined in RFC 3986 (Uniform Resource Identifier (URI): Generic Syntax) are allowed. If several words in a URL need to be separated, Google recommends using the minus sign/hyphen (-), but the following characters are also recognized as separators: ', (, ), *, +, [comma], /, :, =, @.
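The effect of this restricted character set can be illustrated with Python's standard library: characters outside the allowed set are percent-encoded, which makes the URL harder to read. This is only a small demonstration of the encoding rules, not an SEO recommendation in itself:

from urllib.parse import quote

# Unreserved characters (letters, digits, "-", "_", ".", "~") pass through unchanged,
# everything else is percent-encoded
print(quote("seo-tutorial"))         # -> seo-tutorial
print(quote("seo tutorial & tips"))  # -> seo%20tutorial%20%26%20tips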

Minus/hyphen or underscore - which is better for SEO URLs?


The characters most commonly used as separators in URLs are the minus sign/hyphen (-) and the underscore (_). The latter should be avoided, because Google does not recognize it as a separator. The reason is that underscores are used in variable names in programming languages, and Google wants to allow explicit searches for them. Interestingly, existing projects that already use underscores in their URLs do not necessarily need to be changed, since the impact of this detail is generally quite small. Matt Cutts confirmed this in a video from 16 August 2011.
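The difference can be illustrated with a simple tokenization sketch: in regular expressions, as in most programming languages, the underscore counts as a word character, so an underscored phrase stays one token, while a hyphen splits it into two searchable words. This is only an analogy to the behavior described above, not Google's actual implementation:

import re

# \w matches letters, digits and the underscore, so "_" does not split words
print(re.findall(r"\w+", "green-dress"))  # -> ['green', 'dress']
print(re.findall(r"\w+", "green_dress"))  # -> ['green_dress']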


URL parameters


In some content management systems, the contents of web pages are stored in a database and the URLs for these pages are generated dynamically. The ID of the respective database entry is then often passed as a parameter in the URL, which leads, for example, to the following URLs:

http://www.pqr.com/article.php?id=123
http://www.pqr.com/forum/thread.php?tid=5
http://www.pqr.com/index.php?category=5&subcategory=3

The last example in particular shows very clearly that a search engine cannot extract any useful information from such URLs. To solve this problem without giving up the convenience of dynamically generated content, there are basically two approaches.

Definition of Slugs


The first is to define a so-called slug: a character string that uniquely identifies a data set. This string can then be passed in the URL instead of the numerical ID. However, this method has two drawbacks: first, database lookups based on strings are slower than those based on numerical keys, and second, changing a slug afterwards makes the page unavailable under its formerly known URL.
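A minimal sketch of the slug approach might look as follows; the data and function names are hypothetical, and the dictionary merely stands in for a database table with a unique slug column. The comments point out the two drawbacks mentioned above:

# Slug -> record mapping, standing in for a database table
articles = {
    "search-engine-optimization": {"id": 123, "title": "Search Engine Optimization"},
    "link-building":              {"id": 124, "title": "Link Building"},
}

def resolve(slug):
    # String-based lookup: in a real database this is typically slower than a
    # lookup by numerical primary key, even with an index on the slug column.
    return articles.get(slug)

print(resolve("search-engine-optimization"))  # -> record with id 123
# If the slug is later renamed (e.g. to "seo-basics"), the formerly known URL
# /article/search-engine-optimization no longer resolves:
print(resolve("seo-basics"))  # -> None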

Use of mod_rewrite


The other approach is based on the Apache module mod_rewrite. This module evaluates URLs with the help of regular expressions: a URL is parsed for certain parameters, which can then be passed on to a web page. This makes it possible, for example, to accommodate redundant, purely descriptive characters in a URL.

Assuming that http://www.pqr.com/article.php?id=123 contains an article on search engine optimization, it would be more useful if the URL read http://www.pqr.com/article/seo.html (or similar). This form cannot be achieved completely, since at least the ID of the database entry must be included. The resulting URL could, however, read http://www.pqr.com/article/seo,123.html, for example. It makes sense to place the ID at the end, because the weighting of the keywords, just as in the title, decreases from front to back.

To make this example work, a .htaccess file must be created that enables mod_rewrite and defines a corresponding rule. The following code snippet shows this for the example above.

# Enable module
RewriteEngine On
# Define rule: rewrite article/<text>,<id>.html to article.php?id=<id>
RewriteRule ^article/.*,(.*)\.html$ article.php?id=$1 [QSA]

The only problem with this solution is that the page is now accessible under multiple URLs, because every character after the forward slash and before the comma is arbitrary. For search engines, however, a different URL means a different web page, so a duplicate content problem arises here.
