Category Archives: Tips & Tricks

Tips for setting up Squid in Reverse Proxy (Web Accelerator ‘accel’) mode

I have multiple web servers behind one IP address and use port forwarding to allow external access to each. The main web server is externally accessible on port 80, but the others require specifying a port manually in the URL. This is annoying to say the least, and potentially problematic if a visitor is restricted to port 80 only.

The solution is to use Squid in reverse proxy, or web/HTTP accelerator, mode. This turns Squid on its head: instead of caching outbound requests for internal clients, it serves requests from the outside and fetches content from internal servers. In effect, it listens on external port 80 and connects to the correct internal web server depending on the Host header in the HTTP request.

Depending on your requirements there are a few issues to take into account. In my particular case, I did not want Squid to perform any caching – it is purely for routing external requests. The other major requirement is that logging of external IP addresses should still work in each web server’s access log. This is easily achieved with a bit of additional configuration.

Your complete squid.conf should be made up of the following:

http_port 8080 vhost

I port forward external port 80 to internal port 8080, as there is already a web server running on port 80 on the same machine (it is generally not a good idea to have Squid and Apache on the same machine, but I’m not caching to disk and my site isn’t that popular). An additional ‘default_site=<host>’ option can be added if you want to handle requests that do not specify a particular Host.
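The port forward itself happens outside Squid. On a Linux gateway it might look like the following (a sketch only — the interface name eth0 is an assumption, and whether you want REDIRECT or a full DNAT depends on where Squid runs relative to the gateway):

```
# Redirect external port 80 to Squid listening on 8080 on the same machine.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080
```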

cache_peer <Web server IP> parent 80 0 no-query originserver no-digest login=PASS name=<Any name for later reference>

‘no-query’ turns off ICP queries; ‘originserver’ tells Squid this is a web server, not another proxy; ‘no-digest’ turns off Squid’s annoying requests for ’squid-internal-periodic’ on each web server (which would never be fulfilled); ‘login=PASS’ allows basic authentication to be passed through to the web servers (it tells Squid to trust them, and not strip the Authorization or WWW-Authenticate headers from requests); and ‘name’ is just a label we use later (I use ‘<hostname>_<server type>’).

Then for each Host/domain you wish to forward:

acl <acl name> dstdomain <Host/domain>
http_access allow <acl name>
cache_peer_access <cache_peer name> allow <acl name>
[Remaining ACLs, if any.]
cache_peer_access <cache_peer name> deny all
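As a concrete illustration, with hypothetical values throughout (a backend at 192.168.0.10 serving example.com):

```
cache_peer 192.168.0.10 parent 80 0 no-query originserver no-digest login=PASS name=example_apache

acl example_site dstdomain www.example.com example.com
http_access allow example_site
cache_peer_access example_apache allow example_site
cache_peer_access example_apache deny all
```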

A nice addition for testing is:

acl LocalWWW dstdomain <internal test hostname>
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
http_access allow LocalWWW localnet
[Place amongst cache_peer_access lines for default web server, before 'deny all' line.]
cache_peer_access <cache_peer name of default server> allow LocalWWW

This allows you to test Squid on your internal network by requesting a page from an internal hostname (e.g. if you run an internal DNS server). This is obviously inaccessible from the outside world.

To tighten things up, also add:

acl localhost src 127.0.0.1/32
acl manager proto cache_object
acl Safe_ports port 80 # http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT
http_access deny all

Make sure the last line is the last of all the ‘http_access’ lines!
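Putting the access rules together, the overall ordering of the ‘http_access’ lines would look something like this (a sketch only — ‘example_site’ stands for one of your per-domain ACLs, and ‘LocalWWW’ is the optional internal testing ACL):

```
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT
http_access allow example_site
http_access allow LocalWWW localnet
http_access deny all
```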

This is where the tricks begin:

X-Forwarded-For is the de facto HTTP request header for telling upstream servers that the request has been forwarded by a proxy. This is the key to logging external IP addresses at the web server: instead of logging the IP address of the client connection (which will always be that of Squid’s machine), it logs the value of this header. In Apache this is easily done with:

LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined_forward_for

There is one problem: if the request has already passed through a proxy that added an X-Forwarded-For header before reaching this Squid server, Squid will append the next address to the header (as per convention), resulting in a comma-separated list of IP addresses. If the above ‘LogFormat’ is used as-is, such log entries will be invalid (causing log analysers to reject those lines). This is not what we want. To work around this, add the following to squid.conf:

header_replace X-Forwarded-For

This instructs Squid to replace an incoming request’s X-Forwarded-For header with nothing (i.e. remove it). Luckily, this happens before Squid adds its own entry to X-Forwarded-For, which means there will always be exactly one IP address in the header when the request is forwarded on.
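For comparison, if you did want to keep the full multi-hop list instead of stripping it, the original client address is the left-most entry of X-Forwarded-For. A small Python sketch (my own illustration, not part of the Squid setup):

```python
def client_ip(xff: str) -> str:
    """Return the first address in an X-Forwarded-For list.

    Each proxy appends the address it received the request from,
    so the left-most entry is the original client.
    """
    return xff.split(",")[0].strip()

print(client_ip("203.0.113.7, 10.0.0.2"))  # original client: 203.0.113.7
```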


Next, you probably don’t want the Via, X-Cache and X-Squid-* headers to be sent back to clients, as you would like Squid’s presence to be (mostly) invisible to the outside world. Add the following to instruct Squid not to include these headers when replying to external requests (they are still added for internal network requests, which can help with debugging):

via off
reply_header_access X-Cache-Lookup deny !localnet
reply_header_access X-Squid-Error deny !localnet
reply_header_access X-Cache deny !localnet

To avoid caching anything (just act as a request router):

cache deny all

To conserve memory:

memory_pools off

If you are already running Squid on the same machine in normal mode (i.e. serving internal clients), specify a different SNMP port (assuming you’re using SNMP on the original instance):

acl snmppublic snmp_community public
snmp_access allow snmppublic localnet
snmp_access deny all
snmp_port 3402 # +1 from default

If you use ICP and have an existing instance, set a different port:

icp_port 3131 # +1 from default

To speed up restarts while testing:

shutdown_lifetime 0 seconds

Don’t forget your log:

access_log /var/log/squid3/access.log squid

And finally:

cache_mgr <spam-safe email address, or something made-up if you don't wish to be contacted>
cachemgr_passwd <password>

That’s it for Squid. Now to Apache:

In addition to the custom ‘LogFormat’ specified above, you can add this to a new file in the ‘conf.d’ directory:

SetEnvIf Remote_Addr        "^127\.0\.0\.1$"    local_network
SetEnvIf Remote_Addr        "^192\.168\."       local_network
SetEnvIf X-Forwarded-For    "^$"                normal_request
SetEnvIf X-Forwarded-For    ".+"                via_accel

Then in each virtual host (‘VirtualHost’) add:

RewriteEngine    On
RewriteCond    %{ENV:local_network}    =1
RewriteCond    %{ENV:via_accel}        !=1
RewriteRule    .*                      -       [env=local_request:1]

These directives set the ‘local_request’ variable whenever a direct request is made to the web server from the internal network. This is useful for restricting directory/page access to internal clients while excluding requests forwarded through Squid (which would otherwise appear to come from the local network if only IP-address restrictions were used). An example:

<Location /blah>
Order Allow,Deny
Allow from env=local_request
</Location>

Now the grand finale: logging. Just use the following:

CustomLog /var/log/apache2/access.log combined_forward_for env=via_accel
CustomLog /var/log/apache2/access.log combined env=normal_request

The same file can be used over multiple ‘CustomLog’ lines, and the appropriate line is chosen depending on whether the request is direct or forwarded through Squid. Either way, the origin IP address is logged.

If you use IIS, then this ISAPI DLL is for you. It will detect the X-Forwarded-For header and change the request address server variable to the X-Forwarded-For value (I do not know how it handles the IP address list problem). Remember to have it execute first in the filter list.

A good place to visit when getting started with Squid configuration is their wiki. SNMP information can be found here.

This is a good page on understanding the refresh_pattern directive.

CamStudio, Adobe Premiere, VirtualDub, Lossless CSCD and LZOCodec

As far as decent free Windows screen-recording software goes, CamStudio is it. When installing, make sure you also download and install the CamStudio Lossless Codec (CSCD)! However, a couple of problems become apparent when using captured video in other video-processing apps (e.g. Media Player Classic (MPC), Adobe Premiere and VirtualDub). The following list assumes video is compressed with CSCD, unless otherwise noted.

  1. Under Vista, the ‘Record audio from speakers’ option does not work.
  2. When opening a captured AVI with MPC, no video is shown (but audio is heard if it was recorded).
  3. When importing a captured AVI into Adobe Premiere, all frames of video are blank and only the cursor may, or may not, be seen. Also the following message is found in the event window: “File importer detected an inconsistency in the file structure of <captured.avi>. Reading and writing this file’s metadata (XMP) has been disabled.”
  4. If using the LZOCodec instead of CSCD, VirtualDub will crash if you play the captured AVI or attempt to transcode it (funnily enough, stepping through frames will not trigger the crash).
  5. If audio is captured, frames of audio are heard to be missing when playing back the AVI in any program.
  6. If/once the correct video is seen in Premiere, playback is jittery, with blended frames and incorrect colours. The underlying video data seems to be there, but Premiere is not interpreting it correctly, so it appears corrupt.

I recommend the following modifications to resolve the above problems, respectively:

  1. Stick with ‘Record audio from microphone’, but change the ‘Audio Capture Device’ (in ‘Audio Options for Microphone’) to whatever line represents your soundcard’s output mix (e.g. ‘Stereo Mix’). Then adjust the output levels of your soundcard and applications accordingly using the Windows Mixer.
  2. MPC appears to be quite finicky regarding an AVI’s RIFF tree. For some reason, CamStudio inserts additional data after the AVI Legacy Index at the end of the file (I’m not using the term ‘junk’ because JUNK tags are actually valid RIFF tags!). You can see the garbage for yourself by using VirtualDub’s in-built hex editor and opening the RIFF chunk tree – there’ll be an ‘invalid chunk’ after the top-level AVI chunk tree. Although VirtualDub, Windows Media Player and Adobe Premiere can read the files, MPC decides to hide the video. To create a garbage-less AVI, open the AVI in VirtualDub and perform a straight audio and video stream-copy to a new AVI. The new version will be identical to the old, minus the garbage at the end. MPC will successfully play back the video in the new version. This also allows Premiere to read the AVI without reporting the ‘inconsistent file structure’ message mentioned in point 3 above.
  3. As was discovered here, Adobe Premiere misinterprets the alpha channel in the captured AVI. To rectify this, right-click on the clip in the bin, select ‘Interpret Footage…’ and enable the checkbox that instructs Premiere to ignore the alpha channel. Upon hitting OK, the proper frame will be shown.
  4. As suggested here, the LZOCodec is an alternative to CSCD. As I discovered, it does have a lot of nice options, including a console debug output and a new archival compression option. However, the fact that it causes VirtualDub to crash (curiously not MPC, the Windows thumbnail extractor, nor Adobe Premiere) is a serious downer. I have found no workaround – the obvious solution would be to get the source and debug. Until the problem is isolated, I would recommend staying with CSCD if you like VirtualDub.
  5. Losing audio data is a major pain. I found that changing the ‘Compressed Format’ to plain old PCM, interleaving the video and audio every 1 frame (instead of every 100 or 500 milliseconds), and checking ‘Use MCI Recording’ offered the best result (the interleaving settings did not make a difference with MCI enabled). During limited testing, I couldn’t discern any dropouts with the new settings.
  6. Adobe Premiere, as it turns out, has trouble dealing with certain codecs (in this case CSCD, but interestingly not LZOCodec) that store delta frames between keyframes. Delta frames are great because they avoid duplicating redundant data by reusing information from previous frames, and therefore decrease the size of the captured AVI file. However, when your video editor doesn’t like the way your codec decompresses them, you don’t have much choice except to disable delta frames altogether. This is easily done in CamStudio by changing the ‘Set Key Frames Every’ setting in ‘Video Options’ to 1 (i.e. create a keyframe on every frame). You’ll need to uncheck the ‘Auto Adjust’ option at the bottom of this dialog and specify a capture rate. The default is 200 frames per second, however I use 25 FPS (i.e. capture a frame every 40 milliseconds). This will increase the overall file size, but satisfy Premiere, which will play back the video correctly. Premiere also says the captured AVIs appear to have dropped frames (in a clip’s properties), but this can safely be ignored. Perhaps it is the existence of delta frames and empty frames in a file that confuses Premiere, or perhaps it is solely due to a bug in CSCD. Either way, not using deltas works.
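The trailing garbage from point 2 can be spotted programmatically: a RIFF file declares its total chunk size in the header, so any bytes beyond that boundary are exactly the junk CamStudio appends. A minimal Python sketch (my own illustration, not a CamStudio tool):

```python
import struct

def riff_trailing_bytes(data: bytes) -> int:
    """Return how many bytes follow the top-level RIFF chunk.

    Bytes 4-7 hold the chunk size (little-endian), which covers
    everything after the 8-byte header; chunks are word-aligned.
    """
    if len(data) < 8 or data[:4] != b"RIFF":
        raise ValueError("not a RIFF file")
    size = struct.unpack("<I", data[4:8])[0]
    end = 8 + size + (size & 1)
    return max(0, len(data) - end)

# A minimal hand-built RIFF (form type 'AVI ' only, no streams):
clean = b"RIFF" + struct.pack("<I", 4) + b"AVI "
print(riff_trailing_bytes(clean))            # 0
print(riff_trailing_bytes(clean + b"XXXX"))  # 4
```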

Using the Visual Studio 2005 IDE with the Visual Studio 2008 VC compiler & VC++ libraries

Having recently upgraded to Visual Studio 2008, my lackluster experience with developing and debugging native code in the ‘improved’ IDE has left me looking (and I am still looking) for hotfixes that might patch various painful problems. In particular, the IDE suffers major performance problems when debugging large C++ projects that (believe it or not) have breakpoints set. With a little experimentation I discovered that having any breakpoints at all causes a massive performance hit when single-stepping through one’s code, whether or not they are disabled. This is not a good thing for a debugger and its GUI, and a real shame, since Visual Studio 2005 was not plagued by any such issues.

This leads me to the following idea: why not keep using the Visual Studio 2005 IDE for development and debugging, but make it use the VC 2008 compiler and newer libraries (i.e. MFC 9 and the VC++ 2008 redistributables)? Visual Studio 2005 will then build your projects with the newer compiler and link against the latest versions of the runtime/MFC/etc. Since the 2005 debugger works with either version (i.e. 2005- or 2008-compiled code), one can safely revert to Visual Studio 2005 for such simple activities as single-stepping (phew!). I must point out that one good thing Microsoft did for Visual Studio 2008 was dramatically improve their performance analysis tools – the newer profiling tools/GUI would be the only reason to use the 2008 IDE.

The first thing to do is to identify the full path of Visual Studio 2008’s ‘VC’ directory, and where version ‘6.0A’ of the Platform SDK resides. By default they are (respectively):

C:\Program Files\Microsoft Visual Studio 9.0\VC
C:\Program Files\Microsoft SDKs\Windows\v6.0A

Next you need to create a property sheet that contains the necessary information for a VS 2005 C/C++ project to use the 2008 headers and libraries.
Create a new property sheet file manually (e.g. “VS9.vsprops”) and insert the following, adjusting the paths where necessary:

<?xml version="1.0" encoding="Windows-1252"?>
<VisualStudioPropertySheet
 ProjectType="Visual C++"
 Version="8.00"
 Name="VS9"
 >
 <Tool
 Name="VCCLCompilerTool"
 AdditionalIncludeDirectories="&quot;C:\Program Files\Microsoft Visual Studio 9.0\VC\include&quot;;&quot;C:\Program Files\Microsoft Visual Studio 9.0\VC\atlmfc\include&quot;;&quot;C:\Program Files\Microsoft SDKs\Windows\v6.0A\Include&quot;"
 />
 <Tool
 Name="VCLinkerTool"
 AdditionalLibraryDirectories="&quot;C:\Program Files\Microsoft Visual Studio 9.0\VC\lib&quot;;&quot;C:\Program Files\Microsoft Visual Studio 9.0\VC\atlmfc\lib&quot;;&quot;C:\Program Files\Microsoft SDKs\Windows\v6.0A\Lib&quot;;&quot;C:\Program Files\Microsoft Visual Studio 9.0&quot;"
 />
 <Tool
 Name="VCResourceCompilerTool"
 AdditionalIncludeDirectories="&quot;C:\Program Files\Microsoft Visual Studio 9.0\VC\include&quot;;&quot;C:\Program Files\Microsoft Visual Studio 9.0\VC\atlmfc\include&quot;;&quot;C:\Program Files\Microsoft SDKs\Windows\v6.0A\Include&quot;"
 IgnoreStandardIncludePath="true"
 />
</VisualStudioPropertySheet>

Then in Visual Studio 2005, open the Property Manager (via the View menu), and add this new file. Make sure in the Project Settings, under General, the property “Inherited Project Property Sheets” includes the new file.

The last trick is to add the following paths to the “Executable Files” list in VS 2005 under Tools -> Options -> Projects and Solutions -> VC++ Directories:

C:\Program Files\Microsoft SDKs\Windows\v6.0A\bin
C:\Program Files\Microsoft Visual Studio 9.0\VC\bin
C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE

The first is for RC.exe, the second is the compiler and the third is for the linker to find DLLs for manipulating debug information (e.g. mspdb80.dll).
NOTE: They don’t have to be added at the very top of the list (e.g. the NASM directory might be at the top), but they must come before the VS 2005 versions of the same directories. That is, they should come before the following entries, since they are supposed to override them (that said, you shouldn’t have to remove any entries if the order is correct):

$(VCInstallDir)bin
$(VSInstallDir)Common7\Tools\bin
$(VSInstallDir)Common7\ide

Also, note: once you put these new paths in, ALL projects compiled under VS 2005 will use the VC 2008 compiler and tools. Unfortunately there is no clean way to override the compiler/tool paths using the VSProps method above. I tried creating a user-defined macro in the VSProps named ‘VSInstallDir’ to override the in-built one, but it had no effect. Although I haven’t tested it, there shouldn’t be a problem if you use the new compiler on VS 2005 projects without the inherited VSProps – you’ll just have a newer compiler linking to MFC 8 and the VC 2005 runtime, which shouldn’t (hopefully) cause any dramas!

iPhone Exchange Push Email (ActiveSync) only operates when connected via 3G, not WiFi, when mail server’s IP address is not externally-routable

I host my own mail server (Courier with SpamAssassin) and check my mail via IMAP on my iPhone. Unfortunately Apple never implemented the IMAP IDLE command to enable Push with an IMAP account (there is a jailbroken app that will do this!), so the alternative (if one is desperate for Push) is to pipe one’s email to a service that is supported (Yahoo, and now apparently Gmail), or set up Microsoft Exchange. Me, being me, I decided to try something new, and set about setting up Exchange. The first step, of course, was installing Windows Server (I decided to do a fresh install of 2008 Datacenter Edition) and setting up a domain controller/forest/domain, and then Exchange, which I actually installed on another machine running Windows Server 2003 R2. The only issue is that Microsoft has not released a supported version of Exchange Server 2007 for 32-bit x86 machines! The only option is their 64-bit build, and as I don’t have any 64-bit server machines (my servers are a tad old) I had to make do with Exchange 2003 with Service Pack 2 (SP2 includes the additional functionality to enable Push, specifically ActiveSync).

After it all appeared to be installed and happy, my iPhone would not beep, no matter how many emails I sent myself. As it turns out, the entire time (of course) it was connected to my WiFi network. As soon as I disabled WiFi on the device, and it connected to my server via the 3G network, I noticed the “IP-AUTD Initialized” message in the server’s event log. Hurrah!

The reason? Have a look at point #2 in: iPhone 2.0 software: Troubleshooting iPhone or iPod touch Exchange ActiveSync “Push” issues (thank you very much to Tonicwater, see below). Perhaps the iPhone could check for a network environment change (i.e. logging on to a WiFi network) and flush the DNS cache – at least for mail/corporate-related activities, such as the addresses of one’s registered Exchange servers. They would then always be current, and the ‘push’ process could be re-initiated with the same server at its newfound address.
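To diagnose this kind of split-horizon DNS problem, compare what the Exchange hostname resolves to from each network. A quick Python check (the hostname here is a placeholder for your own server):

```python
import socket

def resolve(host: str) -> str:
    """Resolve a hostname to an IPv4 address as seen from this network."""
    return socket.gethostbyname(host)

# On WiFi this may return an internal RFC1918 address; on 3G, the public one.
print(resolve("localhost"))  # sanity check; typically 127.0.0.1
```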

The final step was to forward mail from my Linux mail server to Exchange. This was accomplished by adding the following code to the end of Courier’s maildroprc file:

if ($LOGNAME eq "(my Courier email address)" || $USER eq "balint")
{
  cc "!(my Exchange email address)"
}

I also added an Exchange recipient policy so that it would only keep the last 30 days of messages, which doubles as a backup of sorts.

Here is a useful page to help with diagnosing problems with Push in Exchange: http://msexchangeteam.com/archive/2006/04/03/424028.aspx

Error when instrumenting files for profiling using Visual Studio Performance Tools

A while ago I started using the Performance Tools that came bundled with Visual Studio 2005 Team Suite to analyse my code. There are two analysis methods: sampling and instrumentation. Although instrumentation has a higher overhead, it can reveal in greater detail exactly what your program is doing. In order for instrumentation to proceed, one must link with the /PROFILE switch and then use VSInstr to insert counters into one’s code. To cut down the cruft collected during a profiling session, one can use the VSInstr switches to dictate exactly which functions should be instrumented and/or use the VSPerf API to command the profiling engine to start/stop/suspend/resume/mark data collection during a run.
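The workflow, roughly, looks like the following (a sketch from memory — check the VSInstr/VSPerfCmd documentation for the exact switches on your version, and note MyApp.exe/MyRun.vsp are placeholder names):

```
rem 1. The target must be linked with the /PROFILE linker switch.
rem 2. Instrument the binary, with collection initially stopped:
vsinstr MyApp.exe /STARTONLY
rem 3. Start the trace-mode collector, run the app, then shut down:
vsperfcmd /start:trace /output:MyRun.vsp
MyApp.exe
vsperfcmd /shutdown
```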

One fatal flaw of the Performance Explorer in the 2005 IDE was that one couldn’t start a program with data collection in the stopped state. It would always start collecting data from the very beginning of execution. Visual Studio 2008 includes a command to “Launch with Profiling Paused”, which is a great way to only start profiling when you need to (e.g. via StartProfile or VSInstr /STARTONLY).

Alas, there is still a bug in the Performance Explorer (or, rather, in its build system). If one selects, through the Binary properties of a performance session, to ‘relocate instrumented binaries’, and then proceeds to set ‘additional instrumentation options’ in the Advanced section, the IDE will complain when attempting to start a new profiling session and print this error message in the Output window: “Object reference not set to an instance of an object.”. This is highly annoying because it is helpful to put the instrumented binaries elsewhere (not where the original build outputs reside) AND add specific command line options, such as /VERBOSE and /CONTROL (in conjunction with the others).

I don’t know how to work around this bug; I had hoped it was fixed in 2008, but apparently not. The easiest thing is to not relocate the instrumented binaries. Perhaps one can add another build configuration that includes the /PROFILE linker option and uses its own folder as the build output destination where the binaries can be instrumented in-place.

Here are some helpful resources about using the Performance Tools:
http://blogs.msdn.com/angryrichard/archive/2005/01/16/354194.aspx
http://blogs.msdn.com/scarroll/archive/2005/04/13/407981.aspx
http://blogs.msdn.com/ianhu/archive/2005/06/09/427327.aspx

Drupal request_uri returns broken absolute URLs when served from an Apache VirtualHost

After upgrading my Drupal site to the latest point release (5.19), before transitioning to version 6, I discovered that the action parameter of all rendered forms was being prefixed with a slash ‘/’. This in itself is reasonable, since one would expect the REQUEST_URI server variable to contain a relative URL. However, as I host several websites, Drupal runs inside its own VHost with URL rewriting to enable short URLs (without the ‘q=’ querystring). If I access Drupal directly via the web server’s address, which maps to the default VHost (this also happens to be Drupal), REQUEST_URI is relative. If I access it via the address externally visible on the web (http://spench.net/) then REQUEST_URI is absolute (i.e. prefixed with the protocol and domain).

The Drupal ‘request_uri’ function in includes/bootstrap.inc always adds a slash to the beginning of the REQUEST_URI server variable, which breaks any absolute URLs.

The following modification (the strpos check) resolves this issue:

/**
* Since $_SERVER['REQUEST_URI'] is only available on Apache, we
* generate an equivalent using other environment variables.
*/
function request_uri() {
  if (isset($_SERVER['REQUEST_URI'])) {
    $uri = $_SERVER['REQUEST_URI'];
  }
  else {
    if (isset($_SERVER['argv'])) {
      $uri = $_SERVER['SCRIPT_NAME'] .'?'. $_SERVER['argv'][0];
    }
    else {
      $uri = $_SERVER['SCRIPT_NAME'] .'?'. $_SERVER['QUERY_STRING'];
    }
  }
  // '===' is important here: strpos() returns FALSE (which is loosely == 0) when no '/' is found.
  if (strpos($uri, '/') === 0)
  {
    // Prevent multiple slashes to avoid cross site requests via the FAPI.
    $uri = '/'. ltrim($uri, '/');
  }
  return $uri;
}

Microsoft Visual Studio 2005 crashes when typing into a code window while debugging

When debugging my VC++ projects, I routinely make use of ‘Edit & Continue’ by modifying code in the Visual Studio environment while stepping through the compiled code. Recently I found that typing into a code window would cause Visual Studio to become unresponsive for a long period. Checking Task Manager, I found that WerFault had kicked in and was generating a minidump for DevEnv.exe – Visual Studio was clearly not happy. The unusual thing was that Visual Studio would not immediately terminate after the minidump finished. Instead, it would continue running. However, if I then closed the debuggee (my own application), attempted to continue execution with modified source code (which would activate Edit & Continue), or manually stopped the debugger, Visual Studio would crash.

After a very thorough search (looking at Visual Studio’s stack trace using symbols from the Microsoft Symbol Store, seeing exceptions thrown from HeapFree and IDebugEncLineMap, uninstalling various plugins such as AnkhSVN, and duplicating the tests on another machine), I could not identify the cause. Thinking about any configuration changes I had made in recent months, I tried removing all the additional Include & Library paths I had ever added to the VC++ directory options. These can be found (on Vista) in:

C:\Users\User Name\AppData\Local\Microsoft\Visual Studio\8.0\VCComponents.dat

Voila! No more crashes! As it turned out, I had added so many extra Include/Library directories (each with a long path) that some portion of the debugger would crash when reproducing the steps described above. Perhaps this is a buffer overflow? I had a look through the patch listing on Microsoft Connect, but found nothing matching this problem.

Now, I wanted to have my cake and eat it too: I resolved this issue, while keeping all of my configured Include/Library directories, by creating a directory junction (an NTFS reparse point) further up the directory chain that points to the base of my SDK directory (under which all of the Include/Library directories reside), in effect shortening the path lengths.

For example, here are two Include directories:
C:\Documents and Settings\User Name\My Documents\Visual Studio 2005\Projects\_SDK\Some Code\include
C:\Documents and Settings\User Name\My Documents\Visual Studio 2005\Projects\_SDK\Other Code\include

Lots of these will crash Visual Studio.
They both share the parent: C:\Documents and Settings\User Name\My Documents\Visual Studio 2005\Projects\_SDK

So create a reparse point at C:\Dev\SDK and make it point to the parent above.
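On Vista, the built-in mklink command can create such a junction (older Windows versions need a third-party tool, like the ones linked at the bottom of this post); the long path here is the hypothetical example from above:

```
mklink /J C:\Dev\SDK "C:\Documents and Settings\User Name\My Documents\Visual Studio 2005\Projects\_SDK"
```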

So now in Visual Studio, you can change the original Include directories to:
C:\Dev\SDK\Some Code\include
C:\Dev\SDK\Other Code\include

In total, they are of a much shorter length and fall under the mysterious buffer limit, thereby avoiding the dreaded crash scenario.

NTFS soft/hard-links can be made using the command-line tools at the bottom of this handy page.

Changes in ActionScript code not reflected in published content

While working on a Flash project in Adobe Creative Suite CS4, I was editing some ActionScript 2.0 (AS) code stored in a separate code file from the main Flash document (FLA). I decided to revert to an older version of the whole project, thereby going back to FLA and AS files with an older timestamp. Publishing the document proceeded as normal, however when I previewed it, the behaviour indicated that the newer AS code was still being compiled, even though I had gone back to the old code. After using Process Monitor and highlighting the code file in question, I discovered that Flash caches compiled classes (as ASO files) in a special directory (under Windows Vista):

C:\Users\User Name\AppData\Local\Adobe\Flash CS4\en\Configuration\Classes\aso\...

I suspect under previous versions of Windows, it would be the same directory structure under:

C:\Documents and Settings\User Name\Application Data or
C:\Documents and Settings\User Name\Local Settings (If you know, please leave a comment!)

I guess that because the reverted AS code file had a timestamp older than the cached ASO file, the cache was not refreshed and the stale compiled code was used.
There are two options: delete the ASO files, or ‘touch’ the necessary AS files with a newer file time.
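Either option can be done from a command prompt (a hedged sketch — the del path assumes the Vista location above, and MyClass.as is a placeholder; the copy trick is the classic Windows way to ‘touch’ a file):

```
rem Option 1: wipe the ASO cache
del /S /Q "%LOCALAPPDATA%\Adobe\Flash CS4\en\Configuration\Classes\aso\*"

rem Option 2: bump the timestamp of a single AS file
copy /b MyClass.as+,,
```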