Forum Discussion

Mark_Curole
Nimbostratus
Sep 05, 2008

Rule to Help Migrating Content from Old web servers to new servers

My company is migrating our main Intranet page from an old system to a content management system. The old system is large and has deep branches whose content is managed by many different groups using many different tools. The new system will use a content management system similar to SharePoint. We want to retain the same host name for the site since it is well known, and we have no idea what users have bookmarked within it.

Using iRules to handle load balancing requests between pools based on paths is something we have done many times before, but this application brought about some specific issues that required some special coding. The main problem is that while the developers could tell me when path /abc would be moved to the new content system, they were unsure how to deal with bookmarks to "deep" links inside /abc that remained on the old system. These deep links would produce 404s from the content management system, and they wanted those requests to be directed to the old content.

So I have pieced together several different forum posts to develop an iRule to handle just that. The rule is written for the early stages of the migration, when most of the content will still be on the old servers. That said, one of the first pieces of content to move over will be the landing page at the root of the site. So I send requests for content in the root, and requests that match a data group, to the NewPool. All other requests go to the OldPool. If a request to the NewPool gets a 404, it is resent to the OldPool using the HTTP::retry command. One item of particular difficulty was dealing with POST data.

As of now my rule is working as I like, but it is fairly complex, and I am not very proficient with Tcl, so I am looking for any advice on making the rule as efficient and robust as possible.

 
 when RULE_INIT { 
     set ::DEBUG 0 
 
     if { $::DEBUG } { log "Rule initialized" } 
 } 
 
 when CLIENT_ACCEPTED { 
     # i keeps count of the number of times we reissue a request, to prevent infinite loops. 
     set i 0 
     # RetryRequest flags that this request is a retry and should go to the default pool. 
     set RetryRequest 0 
 } 
 
 when HTTP_REQUEST { 
 
     if { $::DEBUG } { 
         log "reqpath -> '[string tolower [HTTP::path]]', RetryRequest $RetryRequest" 
     } 
 
     # Save the original request off in case we need to retry it. 
     set request [HTTP::request] 
 
     if { $::DEBUG } { 
         log "request -> [HTTP::request]" 
     } 
 
     # RetryFlag indicates whether the request is eligible to be retried. 
     if { $RetryRequest } { 
         pool OldPool 
         set RetryFlag 0 
     } else { 
         if { [URI::path [HTTP::uri] depth] == 0 || 
              [matchclass [string tolower [HTTP::path]] starts_with $::NewPool_Paths] } { 
             pool NewPool 
             set RetryFlag 1 
             if { [HTTP::header exists "Content-Length"] } { 
                 set content_length [HTTP::header "Content-Length"] 
                 if { $content_length == 0 } { 
                     set content_length 4294967295 
                 } 
                 if { $::DEBUG } { 
                     log "There is request data, we need to collect it. Content-Length -> $content_length" 
                 } 
 
                 HTTP::collect $content_length 
             } 
         } else { 
             pool OldPool 
             set RetryFlag 0 
         } 
     } 
 
     set RetryRequest 0 
 } 
 
 when HTTP_REQUEST_DATA { 
 
     # Append the collected POST payload to the saved request so a retry includes it. 
     append request [HTTP::payload [HTTP::payload length]] 
     if { $::DEBUG } { 
         log "full request -> $request" 
     } 
 
 } 
 
 when HTTP_RESPONSE { 
 
     if { $::DEBUG } { 
         log "Response status [HTTP::status], request count $i, Pool [LB::server pool]" 
     } 
 
     # Only retry 404 responses to requests with the retry flag set; i prevents infinite loops. 
     if { [HTTP::status] == 404 && $RetryFlag && $i < 2 } { 
         set RetryRequest 1 
         incr i 
         HTTP::retry $request 
     } else { 
         set i 0 
     } 
 } 
 
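For reference, the NewPool_Paths data group the rule matches against could be defined roughly like this in a v9 bigip.conf (the path values here are placeholders; actual entries are site-specific and would track the migration schedule):

```tcl
class NewPool_Paths {
   "/abc"
   "/newsection"
}
```

Any request whose lowercased path starts with one of these strings is sent to NewPool; everything else falls through to OldPool.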

2 Replies

  • Does it work to append the payload to the request and then retry it? I don't think HTTP::retry will allow you to resend a request with POST data (as nmenant noted in a previous post).

    I think it would be more efficient to redirect clients to the new location instead of having LTM resend the request itself. Of course, if the original request was a POST with data, the POST data would be lost. Though I believe that's already the case with HTTP::retry.

    Aaron

  • I have a simple .NET application that I used to test this on 9.3.1, and the POST seems to work just fine - before adding this logic the application would stall with a 400 error because of the missing POST data.
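For anyone weighing Aaron's redirect suggestion: a redirect-based variant might look roughly like the sketch below. The cookie name OldContent is an assumption, not something from the original rule, and note the trade-offs: POST bodies are lost on the redirect, and the cookie steers the client's subsequent requests to OldPool until it expires.

```tcl
when HTTP_REQUEST {
    # Save the URI for use in the response event (HTTP::uri is not
    # available there). Requests carrying the marker cookie go straight
    # to the old content.
    set uri [HTTP::uri]
    if { [HTTP::cookie exists "OldContent"] } {
        pool OldPool
        set RetryFlag 0
    } elseif { [matchclass [string tolower [HTTP::path]] starts_with $::NewPool_Paths] } {
        pool NewPool
        set RetryFlag 1
    } else {
        pool OldPool
        set RetryFlag 0
    }
}

when HTTP_RESPONSE {
    # On a 404 from NewPool, redirect the client back to the same URI
    # with a marker cookie so the follow-up request hits OldPool.
    if { [HTTP::status] == 404 && $RetryFlag } {
        HTTP::respond 302 Location $uri "Set-Cookie" "OldContent=1; path=/"
    }
}
```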