Introduction

If you have spent any significant amount of time online, you have likely come across the term Black Hat at one time or another. This term is usually associated with many negative comments. This book is here to address those comments and provide some insight into the real life of a Black Hat SEO professional. To give you some background, my name is Brian. I've been involved in internet marketing for close to 10 years now, the last 7 of which have been dedicated to Black Hat SEO. As we will discuss shortly, you can't be a great Black Hat without first becoming a great White Hat marketer. With the formalities out of the way, let's get into the meat of things, shall we?

What is Black Hat SEO?

The million dollar question that everyone has an opinion on. What exactly is Black Hat SEO? The answer here depends largely on who you ask. Ask most White Hats and they immediately quote the Google Webmaster Guidelines like a bunch of lemmings. Have you ever really stopped to think about it though? Google publishes those guidelines because they know as well as you and I that they have no way of detecting or preventing what they preach so loudly. They rely on droves of webmasters to blindly repeat everything they say because they are an internet powerhouse and they have everyone brainwashed into believing anything they tell them. This is actually a good thing though. It means that the vast majority of internet marketers and SEO professionals are completely blind to the vast array of tools at their disposal that not only increase traffic to their sites, but also make us all millions in revenue every year.

The second argument you are likely to hear is the age old "the search engines will ban your sites if you use Black Hat techniques". Sure, this is true if you have no understanding of the basic principles or practices. If you jump in with no knowledge you are going to fail. I'll give you the secret though. Ready? Don't use Black Hat techniques on your White Hat domains. Not directly at least. You aren't going to build doorway or cloaked pages on your money site; that would be idiotic. Instead you buy several throwaway domains, build your doorways on those and cloak/redirect the traffic to your money sites. You lose a doorway domain, who cares? Build 10 to replace it. It isn't rocket science, just common sense. A search engine can't possibly penalize you for outside influences that are beyond your control. They can't penalize you for incoming links, nor can they penalize you for sending traffic to your domain from other doorway pages outside of that domain. If they could, I would simply point doorway pages and spam links at my competitors to knock them out of the SERPS. See? Common sense.

So again, what is Black Hat SEO? In my opinion, Black Hat SEO and White Hat SEO are almost no different. White Hat webmasters spend time carefully finding link partners to increase rankings for their keywords; Black Hats do the same thing, but we write automated scripts to do it while we sleep. White Hat SEOs spend months perfecting the on page SEO of their sites for maximum rankings; Black Hat SEOs use content generators to spit out thousands of generated pages to see which version works best. Are you starting to see a pattern here? You should. Black Hat SEO and White Hat SEO are one and the same, with one key difference. Black Hats are lazy. We like things automated. Have you ever heard the phrase "Work smarter, not harder"? We live by those words.
Why spend weeks or months building pages only to have Google slap them down with some obscure penalty? If you have spent any time on webmaster forums you have heard that story time and time again. A webmaster plays by the rules, does nothing outwardly wrong or evil, yet their site is completely gone from the SERPS (Search Engine Results Pages) one morning for no apparent reason. Months of work gone and nothing to show for it. It's frustrating, we've all been there. I got tired of it as I am sure you are. That's when it came to me. Who elected the search engines the "internet police"? I certainly didn't, so why play by their rules? In the following pages I'm going to show you why the search engines' rules make no sense, and further I'm going to discuss how you can use that information to your advantage.

Search Engine 101

As we discussed earlier, every good Black Hat must be a solid White Hat. This section is going to get technical as we discuss how search engines work and delve into ways to exploit those inner workings. So, let's start with the fundamentals, shall we? Let's get started.

Search engines match queries against an index that they create. The index consists of the words in each document, plus pointers to their locations within the documents. This is called an inverted file. A search engine or IR (Information Retrieval) system comprises four essential modules:

∗A document processor
∗A query processor
∗A search and matching function
∗A ranking capability

While users focus on "search," the search and matching function is only one of the four modules. Each of these four modules may cause the expected or unexpected results that consumers get when they use a search engine.

Document Processor

The document processor prepares, processes, and inputs the documents, pages, or sites that users search against. The document processor performs some or all of the following steps:

∗Normalizes the document stream to a predefined format.
∗Breaks the document stream into desired retrievable units.
∗Isolates and meta tags sub document pieces.
∗Identifies potential indexable elements in documents.
∗Deletes stop words.
∗Stems terms.
∗Extracts index entries.
∗Computes weights.
∗Creates and updates the main inverted file against which the search engine searches in order to match queries to documents.
Step 4: Identify elements to index. Identifying potential indexable elements in documents dramatically affects the nature and quality of the document representation that the engine will search against. In designing the system, we must define the word "term." Is it the alpha-numeric characters between blank spaces or punctuation? If so, what about non-compositional phrases (phrases in which the separate words do not convey the meaning of the phrase, like "skunk works" or "hot dog"), multi-word proper names, or inter-word symbols such as hyphens or apostrophes that can denote the difference between "small business men" versus small-business men? Each search engine depends on a set of rules that its document processor must execute to determine what action is to be taken by the "tokenizer," i.e. the software used to define a term suitable for indexing.

Step 5: Deleting stop words. This step helps save system resources by eliminating from further processing, as well as potential matching, those terms that have little value in finding useful documents in response to a customer's query. This step used to matter much more than it does now when memory has become so much cheaper and systems so much faster, but since stop words may comprise up to 40 percent of text words in a document, it still has some significance. A stop word list typically consists of those word classes known to convey little substantive meaning, such as articles (a, the), conjunctions (and, but), interjections (oh, but), prepositions (in, over), pronouns (he, it), and forms of the "to be" verb (is, are). To delete stop words, an algorithm compares index term candidates in the documents against a stop word list and eliminates certain terms from inclusion in the index for searching.

Step 6: Term Stemming. Stemming removes word suffixes, perhaps recursively in layer after layer of processing. The process has two goals. In terms of efficiency, stemming reduces the number of unique words in the index, which in turn reduces the storage space required for the index and speeds up the search process. In terms of effectiveness, stemming improves recall by reducing all forms of the word to a base or stemmed form. For example, if a user asks for analyze, they may also want documents which contain analysis, analyzing, analyzer, analyzes, and analyzed. Therefore, the document processor stems document terms to analy- so that documents which include various forms of analy- will have equal likelihood of being retrieved; this would not occur if the engine only indexed variant forms separately and required the user to enter all. Of course, stemming does have a downside. It may negatively affect precision in that all forms of a stem will match, when, in fact, a successful query for the user would have come from matching only the word form actually used in the query. Systems may implement either a strong stemming algorithm or a weak stemming algorithm. A strong stemming algorithm will strip off both inflectional suffixes (-s, -es, -ed) and derivational suffixes (-able, -aciousness, -ability), while a weak stemming algorithm will strip off only the inflectional suffixes (-s, -es, -ed).

Step 7: Extract index entries. Having completed steps 1 through 6, the document processor extracts the remaining entries from the original document. For example, the following paragraph shows the full text sent to a search engine for processing:

Milosevic's comments, carried by the official news agency Tanjug, cast doubt over the governments at the talks, which the international community has called to try to prevent an all-out war in the Serbian province. "President Milosevic said it was well known that Serbia and Yugoslavia were firmly committed to resolving problems in Kosovo, which is an integral part of Serbia, peacefully in Serbia with the participation of the representatives of all ethnic communities," Tanjug said. Milosevic was speaking during a meeting with British Foreign Secretary Robin Cook, who delivered an ultimatum to attend negotiations in a week's time on an autonomy proposal for Kosovo with ethnic Albanian leaders from the province. Cook earlier told a conference that Milosevic had agreed to study the proposal.

Steps 1 to 6 reduce this text for searching to the following:

Milosevic comm carri offic new agen Tanjug cast doubt govern talk interna commun call try prevent all-out war Serb province President Milosevic said well known Serbia Yugoslavia firm commit resolv problem Kosovo integr part Serbia peace Serbia particip representa ethnic commun Tanjug said Milosevic speak meeti British Foreign Secretary Robin Cook deliver ultimat attend negoti week time autonomy propos Kosovo ethnic Alban lead province Cook earl told conference Milosevic agree study propos.

The output of step 7 is then inserted and stored in an inverted file that lists the index entries and an indication of their position and frequency of occurrence. The specific nature of the index entries, however, will vary based on the decision in Step 4 concerning what constitutes an "indexable term." More sophisticated document processors will have phrase recognizers, as well as Named Entity recognizers and Categorizers, to insure index entries such as Milosevic are tagged as a Person and entries such as Yugoslavia and Serbia as Countries.

Step 8: Term weight assignment. Weights are assigned to terms in the index file. The simplest of search engines just assign a binary weight: 1 for presence and 0 for absence. The more sophisticated the search engine, the more complex the weighting scheme. Measuring the frequency of occurrence of a term in the document creates more sophisticated weighting, with length-normalization of frequencies still more sophisticated. Extensive experience in information retrieval research over many years has clearly demonstrated that the optimal weighting comes from use of "tf/idf." This algorithm measures the frequency of occurrence of each term within a document. Then it compares that frequency against the frequency of occurrence in the entire database. Not all terms are good "discriminators," that is, all terms do not single out one document from another very well. A simple example would be the word "the." This word appears in too many documents to help distinguish one from another. A less obvious example would be the word "antibiotic." In a sports database, when we compare each document to the database as a whole, the term "antibiotic" would probably be a good discriminator among documents, and therefore would be assigned a high weight. Conversely, in a database devoted to health or medicine, "antibiotic" would probably be a poor discriminator, since it occurs very often. The TF/IDF weighting scheme assigns higher weights to those terms that really distinguish one document from the others.
Query Processor

Query processing has seven possible steps, though a system can cut these steps short and proceed to match the query to the inverted file at any of a number of places during the processing. Document processing shares many steps with query processing. More steps and more documents make the process more expensive for processing in terms of computational resources and responsiveness. However, the longer the wait for results, the higher the quality of results. Thus, search system designers must choose what is most important to their users: time or quality. Publicly available search engines usually choose time over very high quality, having too many documents to search against. The steps in query processing are as follows (with the option to stop processing and start matching indicated as "Matcher"):

∗Tokenize query terms.
∗Recognize query terms vs. special operators.
————————> Matcher
∗Delete stop words.
∗Stem words.
∗Create query representation.
————————> Matcher
∗Expand query terms.
∗Compute weights.
————————> Matcher

Step 1: Tokenizing. As soon as a user inputs a query, the search engine, whether a keyword-based system or a full natural language processing (NLP) system, must tokenize the query stream, i.e. break it down into understandable segments. Usually a token is defined as an alpha-numeric string that occurs between white space and/or punctuation.

Step 2: Parsing. Since users may employ special operators in their query, including Boolean, adjacency, or proximity operators, the system needs to parse the query first into query terms and operators. These operators may occur in the form of reserved punctuation (e.g. quotation marks) or reserved terms in specialized format (e.g. AND, OR). In the case of an NLP system, the query processor will recognize the operators implicitly in the language used no matter how the operators might be expressed (e.g. prepositions, conjunctions, ordering). At this point, a search engine may take the list of query terms and search them against the inverted file. In fact, this is the point at which the majority of publicly available search engines perform the search.

Steps 3 and 4: Stop list and stemming. Some search engines will go further and stop-list and stem the query, similar to the processes described above in the Document Processor section. The stop list might also contain words from commonly occurring querying phrases, such as "I'd like information about." However, since most publicly available search engines encourage very short queries, as evidenced in the size of query window provided, the engines may drop these two steps.

Step 5: Creating the query. How each particular search engine creates a query representation depends on how the system does its matching. If a statistically based matcher is used, then the query must match the statistical representations of the documents in the system. Good statistical queries should contain many synonyms and other terms in order to create a full representation. If a Boolean matcher is utilized, then the system must create logical sets of the terms connected by AND, OR, or NOT. An NLP system will recognize single terms, phrases, and Named Entities. If it uses any Boolean logic, it will also recognize the logical operators from Step 2 and create a representation containing logical sets of the terms to be AND'd, OR'd, or NOT'd. At this point, a search engine may take the query representation and perform the search against the inverted file.

Step 6: Query expansion. Since users of search engines usually include only a single statement of their information needs in a query, it becomes highly probable that the information they need may be expressed using synonyms, rather than the exact query terms, in the documents which the search engine searches against. Therefore, more sophisticated systems may expand the query into all possible synonymous terms and perhaps even broader and narrower terms. This process approaches what search intermediaries did for end users in the earlier days of commercial search systems. Back then, intermediaries might have used the same controlled vocabulary or thesaurus used by the indexers who assigned subject descriptors to documents. Today, resources such as WordNet are generally available, or specialized expansion facilities may take the initial query and enlarge it by adding associated vocabulary.

Step 7: Query term weighting (assuming more than one query term). The final step in query processing involves computing weights for the terms in the query. Sometimes the user controls this step by indicating either how much to weight each term or simply which term or concept in the query matters most and must appear in each retrieved document to ensure relevance. Leaving the weighting up to the user is not common, because research has shown that users are not particularly good at determining the relative importance of terms in their queries. They can't make this determination for several reasons. First, they don't know what else exists in the database, and document terms are weighted by being compared to the database as a whole. Second, most users seek information about an unfamiliar subject, so they may not know the correct terminology. Few search engines implement system-based query weighting, but some do an implicit weighting by treating the first term(s) in a query as having higher significance. The engines use this information to provide a list of documents/pages to the user. After this final step, the expanded, weighted query is searched against the inverted file of documents.
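As a rough illustration of steps 1 through 4, here is a small sketch that tokenizes a query, separates reserved Boolean operators from search terms, and then stop-lists and lightly stems what is left. The operator set, stop list and function name are invented for the example; real engines obviously differ.

```python
import re

OPERATORS = {"AND", "OR", "NOT"}          # reserved terms in specialized format
STOP_WORDS = {"a", "the", "i'd", "like", "information", "about"}  # includes common query-phrase words

def parse_query(query):
    """Split a raw query into (terms, operators), then stop-list and lightly stem the terms."""
    tokens = re.findall(r'"[^"]+"|[A-Za-z0-9\']+', query)   # step 1: tokenize (quoted phrases kept whole)
    terms, operators = [], []
    for token in tokens:
        if token in OPERATORS:                               # step 2: recognize special operators
            operators.append(token)
        else:
            terms.append(token.strip('"').lower())
    terms = [t for t in terms if t not in STOP_WORDS]        # step 3: delete stop words
    terms = [t if " " in t                                   # step 4: weak stem (skip phrases)
             else (t[:-1] if t.endswith("s") and len(t) > 3 else t)
             for t in terms]
    return terms, operators

print(parse_query('I\'d like information about "small business" loans AND grants'))
# (['small business', 'loan', 'grant'], ['AND'])
```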
Search and Matching Function

How systems carry out their search and matching functions differs according to which theoretical model of information retrieval underlies the system's design philosophy. Since making the distinctions between these models goes far beyond the goals of this article, we will only make some broad generalizations in the following description of the search and matching function.

Searching the inverted file for documents meeting the query requirements, referred to simply as "matching," is typically a standard binary search, no matter whether the search ends after the first two, five, or all seven steps of query processing. While the computational processing required for simple, unweighted, non-Boolean query matching is far simpler than when the model is an NLP-based query within a weighted, Boolean model, it also follows that the simpler the document representation, the query representation, and the matching algorithm, the less relevant the results, except for very simple queries, such as one-word, non-ambiguous queries seeking the most generally known information.

Having determined which subset of documents or pages matches the query requirements to some degree, a similarity score is computed between the query and each document/page based on the scoring algorithm used by the system. Scoring algorithms rankings are based on the presence/absence of query term(s), term frequency, tf/idf, Boolean logic fulfillment, or query term weights. Some search engines use scoring algorithms not based on document contents, but rather, on relations among documents or past retrieval history of documents/pages.

After computing the similarity of each document in the subset of documents, the system presents an ordered list to the user. The sophistication of the ordering of the documents again depends on the model the system uses, as well as the richness of the document and query weighting mechanisms. For example, search engines that only require the presence of any alpha-numeric string from the query occurring anywhere, in any order, in a document would produce a very different ranking than one by a search engine that performed linguistically correct phrasing for both document and query representation and that utilized the proven tf/idf weighting scheme. However the search engine determines rank, the ranked results list goes to the user, who can then simply click and follow the system's internal pointers to the selected document/page. More sophisticated systems will go even further at this stage and allow the user to provide some relevance feedback or to modify their query based on the results they have seen. If either of these are available, the system will then adjust its query representation to reflect this value-added feedback and re-run the search with the improved query to produce either a new set of documents or a simple reranking of documents from the initial search.
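To tie the weighting and matching pieces together, here is a compact sketch of tf/idf scoring with cosine similarity, the kind of similarity score between the query and each document described above. The toy corpus, tokenizer and log-based idf formula are illustrative choices; real engines use far more elaborate variants.

```python
import math
import re
from collections import Counter

docs = {
    "doc1": "antibiotic resistance in sports medicine",
    "doc2": "the sports team won the game",
    "doc3": "new antibiotic approved for medicine",
}

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

# idf: rarer terms discriminate better and get higher weights
doc_tokens = {d: tokenize(t) for d, t in docs.items()}
n_docs = len(docs)
df = Counter(term for tokens in doc_tokens.values() for term in set(tokens))
idf = {term: math.log(n_docs / count) for term, count in df.items()}

def tfidf_vector(tokens):
    tf = Counter(tokens)
    return {term: freq * idf.get(term, 0.0) for term, freq in tf.items()}

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rank(query):
    q_vec = tfidf_vector(tokenize(query))
    scores = {d: cosine(q_vec, tfidf_vector(tokens)) for d, tokens in doc_tokens.items()}
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

print(rank("antibiotic medicine"))   # doc3 and doc1 should outrank doc2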
What Document Features Make a Good Match to a Query

We have discussed how search engines work, but what features of a query make for good matches? Let's look at the key features and consider some pros and cons of their utility in helping to retrieve a good representation of documents/pages.

Term frequency: How frequently a query term appears in a document is one of the most obvious ways of determining a document's relevance to a query. While most often true, several situations can undermine this premise. First, many words have multiple meanings; they are polysemous. Think of words like "pool" or "fire." Many of the non-relevant documents presented to users result from matching the right word, but with the wrong meaning. Also, in a collection of documents in a particular domain, such as education, common query terms such as "education" or "teaching" are so common and occur so frequently that an engine's ability to distinguish the relevant from the non-relevant in a collection declines sharply. Search engines that don't use a tf/idf weighting algorithm do not appropriately down-weight the overly frequent terms, nor are higher weights assigned to appropriate distinguishing (and less frequently-occurring) terms, e.g. "early-childhood."

Location of terms: Many search engines give preference to words found in the title or lead paragraph or in the meta data of a document. Some studies show that the location in which a term occurs in a document or on a page indicates its significance to the document. Terms occurring in the title of a document or page that match a query term are therefore frequently weighted more heavily than terms occurring in the body of the document. Similarly, query terms occurring in section headings or the first paragraph of a document may be more likely to be relevant.
Proximity of query terms: When the terms in a query occur near to each other within a document, it is more likely that the document is relevant to the query than if the terms occur at greater distance. While some search engines do not recognize phrases per se in queries, some search engines clearly rank documents in results higher if the query terms occur adjacent to one another or in closer proximity, as compared to documents in which the terms occur at a distance.

Date of Publication: Some search engines assume that the more recent the information is, the more likely that it will be useful or relevant to the user. The engines therefore present results beginning with the most recent to the less current.

Length: While length per se does not necessarily predict relevance, it is a factor when used to compute the relative merit of similar pages. So, in a choice between two documents both containing the same query terms, the document that contains a proportionately higher occurrence of the term relative to the length of the document is assumed more likely to be relevant.

Proper nouns sometimes have higher weights, since so many searches are performed on people, places, or things. While this may be useful, if the search engine assumes that you are searching for a name instead of the same word as a normal everyday term, then the search results may be peculiarly skewed. Imagine getting information on "Madonna," the rock star, when you were looking for pictures of Madonnas for an art history class.

Popularity: Google and several other search engines add popularity to link analysis to help determine the relevance or value of pages, favoring those referred to by many other pages, or that have a high number of "in-links". Popularity utilizes data on the frequency with which a page is chosen by all users as a means of predicting relevance. While popularity is a good indicator at times, it assumes that the underlying information need remains the same.

Summary

Now that we have covered how a search engine works, we can discuss methods to take advantage of them. As you saw in the above pages, search engines are simple text parsers. They take a series of words and try to reduce them to their core meaning. They can't understand text, nor do they have the capability of discerning between grammatically correct text and complete gibberish. This of course will change over time as search engines evolve and the cost of hardware falls, but we black hats will evolve as well, always aiming to stay at least one step ahead.

Let's start with content. We'll discuss the basics of generating content as well as some software used to do so, but first, we need to understand duplicate content. A widely passed around myth on webmaster forums is that duplicate content is viewed by search engines as a percentage. As long as you stay below the threshold, you pass by penalty free. It's a nice thought, it's just too bad that it is completely wrong.
Duplicate Content

I've read seemingly hundreds of forum posts discussing duplicate content, none of which gave the full picture, leaving me with more questions than answers. I decided to spend some time doing research to find out exactly what goes on behind the scenes. Here is what I have discovered.

Most people are under the assumption that duplicate content is looked at on the page level when in fact it is far more complex than that. Simply saying that "by changing 25 percent of the text on a page it is no longer duplicate content" is not a true or accurate statement. Let's examine why that is.

To gain some understanding we need to take a look at the k-shingle algorithm that may or may not be in use by the major search engines (my money is that it is in use). The shingling algorithm essentially finds word groups within a body of text in order to determine the uniqueness of the text. I've seen the following used as an example so let's use it here as well. Let's suppose that you have a page that contains the following text:

The swift brown fox jumped over the lazy dog.

Before we get to this point the search engine has already stripped all tags and HTML from the page, leaving just this plain text behind for us to take a look at. The first thing they do is strip out all stop words like and, to, of, the. They also strip out all fill words, leaving us only with action words which are considered the core of the content. Once this is done the following "shingles" are created from the above text (I'm going to include the stop words for simplicity):

The swift brown fox
swift brown fox jumped
brown fox jumped over
fox jumped over the
jumped over the lazy
over the lazy dog

These are essentially like unique fingerprints that identify this block of text. The search engine can now compare this "fingerprint" to other pages in an attempt to find duplicate content. As duplicates are found a "duplicate content" score is assigned to the page. If too many "fingerprints" match other documents the score becomes high enough that the search engines flag the page as duplicate content, thus sending it to supplemental hell or, worse, deleting it from their index completely.

Now let's look at a second body of text:

My old lady swears that she saw the lazy dog jump over the swift brown fox.

The above gives us the following shingles:

my old lady swears
old lady swears that
lady swears that she
swears that she saw
that she saw the
she saw the lazy
saw the lazy dog
the lazy dog jump
lazy dog jump over
dog jump over the
jump over the swift
over the swift brown
the swift brown fox

Comparing these two sets of shingles we can see that only one matches ("the swift brown fox"). Thus it is unlikely that these two documents are duplicates of one another. No one but Google knows what the percentage match must be for these two documents to be considered duplicates, but some thorough testing would sure narrow it down.

So what can we take away from the above examples? First and foremost we quickly begin to realize that duplicate content is far more difficult than saying "document A and document B are 50 percent similar". Second we can see that people adding "stop words" and "filler words" to avoid duplicate content are largely wasting their time. You can't simply add generic stop words here and there and expect to fool anyone. It's the "action" words that should be the focus. Changing action words without altering the meaning of a body of text may very well be enough to get past these algorithms. Then again there may be other mechanisms at work that we can't yet see, rendering that impossible as well. I suggest experimenting and finding what works for you in your situation.

The last paragraph here is the real important part when generating content. Remember, we're dealing with a computer algorithm here, not some supernatural power. There is no magic involved in SEO, just raw data and numbers. Everything you do should be from the standpoint of a scientist. Think through every decision using logic and reasoning. Always split test and perform controlled experiments.
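Here is a small sketch of the shingling comparison described above, using 4-word shingles and a simple overlap ratio standing in for whatever threshold the engines actually use. The function name and the Jaccard-style measure are my own illustrative choices.

```python
def shingles(text, k=4):
    """Return the set of k-word shingles for a body of text."""
    words = text.lower().replace(".", "").split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

a = shingles("The swift brown fox jumped over the lazy dog.")
b = shingles("My old lady swears that she saw the lazy dog jump over the swift brown fox.")

overlap = a & b
similarity = len(overlap) / len(a | b)   # Jaccard-style ratio; the real threshold is unknown

print(overlap)                            # {'the swift brown fox'}
print(f"{similarity:.0%} of all shingles match")
```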
What Makes A Good Content Generator?

Now we understand how a search engine parses documents on the web, and we also understand the intricacies of duplicate content and what it takes to avoid it. Now it is time to check out some basic content generation techniques.
One of the more commonly used text spinners is known as Markov. It's also one of the most out of date. Markov isn't actually intended for content generation; it's actually something called a Markov Chain, which was developed by mathematician Andrey Markov. The algorithm takes each word in a body of content and changes the order based on the algorithm (there's a bare-bones sketch of the idea at the end of this section). This produces largely unique text, but it's also typically VERY unreadable. The quality of the output really depends on the quality of the input. The other issue with Markov is the fact that it will likely never pass a human review for readability. Some people may be able to get around this by replacing words in the content with synonyms. If you don't shuffle the Markov chains enough you also run into duplicate content issues because of the nature of shingling, as discussed earlier. Some popular software that uses Markov chains includes RSSGM and YAGC, both of which are pretty old and outdated at this point. They are worth taking a look at just to understand the fundamentals, but this isn't 1999. I personally stopped using Markov back in 2006 or 2007 after developing my own proprietary content engine.

So, we've talked about the old methods of doing things. So what works today? Now and in the future, LSI is becoming more and more important. LSI stands for Latent Semantic Indexing. It sounds complicated, but it really isn't. LSI is basically just a process by which a search engine can infer the meaning of a page based on the content of that page. For example, let's say they index a page and find words like atomic bomb, Manhattan Project, and Theory of Relativity. The idea is that the search engine can process those words, find relational data and determine that the page is about Albert Einstein. So, ranking for a keyword phrase is no longer as simple as having content that talks about and repeats the target keyword phrase over and over like the good old days. Remember, you can't fool the search engines by simply repeating a keyword over and over in the body of your pages (I wish it were still that easy). Now we need to make sure we have other key phrases that the search engine thinks are related to the main key phrase.

So if Markov is easy to detect and LSI is starting to become more important, which software works, and which doesn't?

Software

Fantomaster Shadowmaker: This is probably one of the oldest and most commonly known high end cloaking packages being sold, developed out of Germany. I know, I'm being harsh, but I was really let down by this software. For $3,000.00 you basically get a clunky outdated interface for slowly building HTML pages. That in itself isn't very black hat. The software is SLOW. It takes days just to setup a few decent pages. Remember, we're lazy! The content engine simply splices unrelated sentences together from random sources while tossing in your keyword randomly, and it doesn't do anything to address LSI. Unless things change drastically I would avoid this one.

SEC (Search Engine Cloaker): Another well known paid script. This one is of good quality and with work does provide results. If you understand SEO and have the time to dedicate to creating the content, the pages built last a long time. I do have two complaints. The content engine is mostly manual, making you build sentences which are then mixed together for your content. The other gripe is the ip cloaking. Their ip list is terribly out of date, only containing a couple thousand ip's as of this writing.
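To show why Markov output reads the way it does, here is a bare-bones word-level Markov chain generator. The seed text, the order-1 chain and the function names are purely illustrative; tools like RSSGM presumably do something more elaborate.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it in the source text."""
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, length=20):
    """Walk the chain, picking a random successor at every step."""
    word = random.choice(list(chain))
    output = [word]
    for _ in range(length - 1):
        successors = chain.get(word)
        if not successors:                 # dead end: jump to a random word
            word = random.choice(list(chain))
        else:
            word = random.choice(successors)
        output.append(word)
    return " ".join(output)

seed_text = (
    "the swift brown fox jumped over the lazy dog while "
    "the lazy dog dreamed about jumping over the swift brown fox"
)
print(generate(build_chain(seed_text)))
# e.g. "lazy dog dreamed about jumping over the swift brown fox jumped over the lazy dog while the lazy ..."
```

The output is unique-ish word salad that shares few shingles with the source, which is exactly why it reads so poorly to a human.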
By spoofing a user agent, anyone can make my script think that I am a Google search engine bot. So, what else can we do if user agents are so easy to spoof?

IP Cloaking

Every visitor to your web site must first establish a connection with an ip address. These ip addresses resolve to dns servers which in turn identify the origin of that visitor. Every search engine crawler must identify itself with a unique signature viewable by reverse dns lookup. This means we have a sure fire method for identifying and cloaking based on ip address. This also means that we don't rely on the user agent at all, so there is no way to circumvent ip based cloaking (although some caution must be taken, as we will discuss). Once we have that information, we can then show different pages to different users based on the ip they visit our page with. For example, I can show a search engine bot a keyword targeted page full of key phrases related to what I want to rank for. When a human visits that same page I can show an ad, or an affiliate product so I can make some money. See the power and potential here?

So how can we detect ip cloaking? Every major search engine maintains a cache of the pages it indexes. This cache is going to contain the page as the search engine bot saw it at indexing time. This means your competition can view your cloaked page by clicking on the cache in the SERPS, thus rendering your cloaking completely useless. That's ok, it's easy to get around that. The use of the meta tag noarchive in your pages forces the search engines to show no cached copy of your page in the search results, so you avoid snooping webmasters. The only other method of detection involves ip spoofing. Basically you configure a computer to act as if it is using one of Google's ip's when it visits a page, but that is a very difficult and time consuming thing to pull off. This would allow you to connect as though you were a search engine bot, but the problem here is that the data for the page would be sent to the ip you are spoofing, which isn't on your computer, so you are still out of luck. The lesson here? If you are serious about this, use ip cloaking. It is very difficult to detect and by far the most solid option. The most difficult part of ip cloaking is compiling a list of known search engine ip's. Luckily software like Blog Cloaker and SSEC already does this for us.
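Reverse dns lookup is the piece that makes ip-based identification reliable, and it is the same check Google publicly recommends for verifying its own crawler: resolve the visitor's ip to a hostname, check the domain, then resolve that hostname forward again to confirm it maps back to the same ip. A minimal sketch, with a domain list that is illustrative and far from exhaustive:

```python
import socket

CRAWLER_DOMAINS = (".googlebot.com", ".google.com", ".search.msn.com")  # illustrative, not exhaustive

def is_search_engine_ip(ip):
    """Reverse-resolve the ip, check the domain, then forward-confirm the hostname."""
    try:
        hostname = socket.gethostbyaddr(ip)[0]              # reverse dns lookup
    except socket.herror:
        return False
    if not hostname.endswith(CRAWLER_DOMAINS):
        return False
    try:
        forward_ips = socket.gethostbyname_ex(hostname)[2]  # forward confirmation
    except socket.gaierror:
        return False
    return ip in forward_ips

print(is_search_engine_ip("66.249.66.1"))   # an address from Google's published crawler range
```

The forward-confirmation step matters because anyone can fake a user agent, but they cannot make an arbitrary ip reverse- and forward-resolve inside a crawler's domain.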
Link Building

As we discussed earlier, Black Hats are basically White Hats, only lazy! As we build pages, we also need links to get those pages to rank. Let's discuss some common and not so common methods for doing so.

Blog ping: This one is quite old, but still widely used. Blog indexing services setup a protocol in which a web site can send a ping whenever new pages are added to a blog. They can then send over a bot that grabs the page content for indexing and searching, or simply to add as a link in their blog directory. Black Hats exploit this by writing scripts that send out massive numbers of pings to various services in order to entice bots to crawl their pages. This method certainly drives the bots, but in the last couple years it has lost most of its power as far as getting pages to rank.

Trackback: Another method of communication used by blogs. Trackbacks are basically a method in which one blog can tell another blog that it has posted something related to or in response to an existing blog post. As a black hat, we see that as an opportunity to inject links to thousands of our own pages by automating the process and sending out trackbacks to as many blogs as we can. The effectiveness of this approach has diminished over time. Most blogs these days have software in place that greatly limits or even eliminates trackback spam.

Link Networks: Also known as link farms, these have been popular for years. Most are very simplistic in nature: page A links to page B, page B links to page C, then back to A. It doesn't take much processing to figure out that there are only a few people involved with all of the links. These are pretty easy to detect because of the limited range of ip's involved. Take a look at Link Exchange for example. They have over 300 servers all over the world with thousands of ip's, so it would be almost impossible to detect. Some people still use them and are still successful, but the key here is to have a very diverse pool of links.

Forums and Guest books: The internet contains millions of forums and guest books all ripe for the picking. Software packages like Xrumer made this a VERY popular way to gather back links. So much so that most forums have methods in place to detect and reject these types of links. While most forums are heavily moderated (at least the active ones), that still leaves you with thousands in which you can drop links where no one will likely notice or even care. You can get links dropped on active forums as well, but it takes some more creativity. Putting up a post related to the topic on the forum and dropping your link in the BB code for a smiley, for example.

EDU links: A couple years ago Black Hats noticed an odd trend. Universities and government agencies with very high ranking web sites often times have very old message boards they have long forgotten about, but that still have public access. We're talking about abandoned forums, old guest books, etc. We took advantage of that by posting millions of links to our pages on these abandoned sites. This gave a HUGE boost to rankings and made some very lucky Viagra spammers millions of dollars. A search engine would have to discount links completely in order to filter these links out, but it's still a viable tool.

Money Making Strategies

We now have a solid understanding of cloaking, how a search engine works, content generation, software to avoid, software that is pure gold and even link building strategies. So how do you pull all of it together to make some money?
Affiliate Marketing: We all know what an affiliate program is. There are literally tens of thousands of affiliate programs with millions of products to sell. The most difficult part of affiliate marketing is getting well qualified targeted traffic. That again is where good software and cloaking comes into play.

Some networks and affiliates allow Direct Linking. Direct Linking is where you setup your cloaked pages with all of your product keywords, then redirect straight to the merchant or affiliate's sales page. This often results in the highest conversion rates. But as I said, some affiliates don't allow Direct Linking, and that's where Landing Pages come in. Either building your own (which we are far too lazy to do), or by using something like Landing Page Builder, which automates everything for us and takes care of the difficult tasks we all hate. You load up your money keyword list, setup a template with your ads or offers, then send all of your doorway/cloaked traffic to the index page. The Landing Page Builder shows the best possible page with ads based on what the incoming user searched for. Couldn't be easier. Landing pages give us a place to send and clean our traffic; they also prequalify the buyer and make sure the quality of the traffic sent to the affiliate is as high as possible. After all, we want to make money, but we also want to keep a strong relationship with the affiliate so we can keep getting paid for the traffic you send them.

Conclusion

As we can see, again, Black Hat Marketing isn't all that different from White Hat marketing. We automate the difficult and time consuming tasks so we can focus on the important tasks at hand. I would like to thank you for taking the time to read this. I plan to update often, so be sure to register and post on the forums if you have any comments or questions. In the mean time, you can follow me on my personal blog over at http://www.blackhat360.com.