Removing VBSEO for vbulletin – Reverting back to vbulletin URLs
Post 302868409 by Neo on Sunday 27th of October 2013
I advise you to first find a way to completely remove vBSEO and, at the same time, rewrite all your old vBSEO URLs back to the original vB URLs...
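
For example, a single rule along these lines will 301 a vBSEO-style thread URL back to the stock showthread.php URL. This is only a sketch: it assumes your vBSEO thread URLs follow the common /forum-title/threadid-thread-title.html pattern, and you would adjust the regex to whatever scheme vBSEO was actually generating on your site.

Code:
  RewriteEngine On
  # Example only: vBSEO-style thread URL back to the stock vBulletin URL
  #   /some-forum/123456-some-thread-title.html  ->  /showthread.php?t=123456
  # (pattern written for a .htaccess context, where the leading slash is stripped)
  RewriteRule ^[^/]+/([0-9]+)-[^/]+\.html$ /showthread.php?t=$1 [R=301,L]

Forum, archive and member URLs each need their own rules, which is how the rule count climbs quickly.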

When I recently did this, I wrote over 100 mod_rewrite rules. All of this work, removing vBSEO, checking the log files, and cleaning up 404s by writing rewrite rules that 301 back to the original vB URLs, took me around 32 to 40 hours.

The reason it takes so long is that you need to closely monitor your access.log for 404 errors and, at the same time, keep an eye on webmaster tools for crawl errors.
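
One quick way to pull the offending URLs out of the log is something like the line below. It assumes Apache's common/combined log format and a log path of /var/log/apache2/access.log, both of which you would adjust for your own server.

Code:
  # Most frequently requested paths that are currently returning 404
  awk '$9 == 404 {print $7}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head -50

Every path on that list is a candidate for another 301 rule, and the crawl-error report in webmaster tools should shrink as you work through it.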

If you don't know how to do this, and are not comfortable with writing mod_rewrite rules, I suggest you hire a professional to do it for you.

This should be done in a controlled, step-by-step way... and after you get your original vB URLs working without 404 errors, and all is well, you can consider rewriting your URLs to more "friendly" URLs... that is step two. Step one is to completely remove vBSEO and revert to the standard vB URLs without taking an SEO hit.
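
When you eventually get to step two, those friendly URLs are normally handled internally rather than redirected, so the rules flip direction. Roughly like this, again only a sketch with a made-up /threads/... scheme rather than anything vB ships with:

Code:
  # Hypothetical friendly URL served internally -- no 301 this time
  #   /threads/123456-some-thread-title  ->  /showthread.php?t=123456
  RewriteRule ^threads/([0-9]+)(-[^/]+)?/?$ /showthread.php?t=$1 [L,QSA]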
 
