Thursday, December 29, 2005

Mass Mailing: 2061 Connections - November/December 2005

The November/December 2005 issue of 2061 Connections went out on 29 Dec 2005 at 1:11:59 PM to 3808 recipients.

Scripting: Mass Mailer Updates

I've made a few relatively minor updates to the mass mailing script.
  • I noticed that the sender address entered on the form was not reflected in the actual e-mail sent out. Turns out I forgot to add a parameter/replacement for this value in the e-mail templates. This oversight has been corrected.
  • Added a check for sender and subject so that default information could be added if necessary.
  • Updated the EOL fix in the fnc_qpEncode() function so that it is a bit more streamlined.
  • Updated the BOL period replacement to place the encoded period on a line by itself. This was necessary to avoid line length overflow.
  • Updated the recipient replacement code to run in two stages. The first stage replaces the recipient information in the body of the message and encodes it using quoted-printable; this helps avoid line-length overflow on long e-mail addresses. (This code is in addition to code from fnc_qpEncode() that places any placeholder variables on a line by themselves.) The second stage replaces any remaining occurrences of the recipient address placeholder, which should be just in the message header.
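For illustration, here's a rough JavaScript approximation of two of the fixes above as applied to quoted-printable encoding: a period at the beginning of a line is encoded (=2E) and soft-wrapped onto its own line, and placeholder variables are isolated on their own lines via soft breaks ("=" at end-of-line) so a long substituted address can't overflow the line limit. The real fnc_qpEncode() is VBScript; this is a sketch, not the actual function.

```javascript
// Encode a single line: '=' and non-printable/non-ASCII bytes become =XX.
function qpEncodeLine(line) {
  let out = "";
  for (const ch of line) {
    const code = ch.charCodeAt(0);
    if (ch === "=" || code < 32 || code > 126) {
      out += "=" + code.toString(16).toUpperCase().padStart(2, "0");
    } else {
      out += ch;
    }
  }
  return out;
}

// Encode a body, isolating a placeholder (e.g. "%email%") on its own line
// and moving any beginning-of-line period onto a line by itself.
function qpEncodeBody(body, placeholder) {
  const out = [];
  for (const rawLine of body.split("\n")) {
    let line = qpEncodeLine(rawLine);
    if (placeholder && line.includes(placeholder)) {
      // soft breaks before and after put the placeholder on its own line
      line = line.split(placeholder).join("=\n" + placeholder + "=\n");
    }
    for (let piece of line.split("\n")) {
      if (piece.startsWith(".")) {
        out.push("=2E=");   // encoded beginning-of-line period, by itself
        piece = piece.slice(1);
      }
      if (piece !== "") out.push(piece);
    }
  }
  return out.join("\n");
}
```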

Tuesday, December 20, 2005

Mass Mailing: Workshops

The Atlas workshop schedule was sent out on Dec. 20, 2005, at 4:02 PM to 2444 e-mail alert subscribers and 50 one-time notification addresses.

Thursday, December 15, 2005

Graphic Design: Assessment Items

Spent the better part of the last week working on images for the assessment project. Luckily, most of the illustrations I had to make were copies of ones that were already on hand, a few with minor modifications. One didn't exist and I had to create it from scratch, which was a significant chore even though I was able to work mostly from pictures I found online.

I certainly don't mind delving into graphic design, page layout, or whatever else needs to be done. Unfortunately, the time frames are usually fairly short, and with my lack of experience in these areas I'm often left feeling like I'm struggling.

No doubt changes to the current illustrations as well as additional ones will be needed in the future, at which point I'll know if the work I've done is acceptable or not.

Friday, December 09, 2005

Scripting: new 404 page

I've rewritten the 404 page to make the script more efficient as well as friendlier to users.

One reason for the update is that I wanted to remove the possibility of multiple redirects, especially if the end result was a 404 anyway. I modified the script so that each time it makes a change to the URL it uses a combination of the file system object's FileExists() method and the Server.MapPath() method to check whether a file actually exists. If a match is found the redirection is made; otherwise the script keeps checking. If no redirection can be made the browser is never redirected and the user receives the error page.
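The control flow can be sketched like this (a JavaScript sketch of the logic; the actual script is VBScript, and fileExists here stands in for the Server.MapPath() + FileExists() combination):

```javascript
// Each rewrite of the requested URL is verified against the file system
// before any redirect is issued, so the user never bounces through several
// redirects only to land on a 404 anyway.
function firstExistingUrl(candidates, fileExists) {
  for (const url of candidates) {
    if (fileExists(url)) {
      return url;     // redirect target found; issue the 3xx here
    }
  }
  return null;        // no match: show the error page, no redirect
}
```

Here candidates would be the successive URL fixes the script tries (case changes, alternate extensions, default documents, entries from 404Redirect.txt).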

I also removed the automated JavaScript redirect. Any automated redirects (such as matches from the 404Redirect.txt file) are done via 3xx status messages. Non-automated redirects are now done via a link on the error page. Most of the 3xx redirects are pages that we've moved or shortcut URLs we've made public, so it's preferable to have the redirection be as transparent as possible. Also, 301 redirects are beneficial to rankings with the major search engines because the new page will inherit the old page's rank.

One drawback to doing 3xx redirects is that page anchors aren't carried over (depending on the browser). Page anchors were one reason why I had initially used JavaScript redirects ... Science NetLinks was using them to link to portions of Benchmarks Online. SNL doesn't appear to provide many links to our content anymore, so I'll just wait and see if we get any complaints. I can still use JavaScript to tackle the page anchor issue on link redirects.
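If I do end up using JavaScript to carry the anchor over on link redirects, the helper could be as simple as this (a hypothetical sketch, not the actual page code):

```javascript
// Re-attach a fragment that a 3xx redirect dropped. Intended for the target
// page of a link-based redirect; hypothetical helper for illustration.
function withFragment(targetUrl, fragment) {
  if (!fragment) return targetUrl;
  // strip any fragment already on the target, then append the original one
  return targetUrl.split("#")[0] + "#" + fragment.replace(/^#/, "");
}
// In a page this might be used as:
//   location.replace(withFragment(newUrl, location.hash));
```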

On top of the change to redirection methods I've also made some other modifications to the code that I hope make it more efficient. One of the bigger changes is that I'm using regular expressions to parse the URL which seems to provide more accuracy in getting the various parts. I also had to add a default document check for instances when the user enters just a directory. I think the script could be made more efficient, but it's looking significantly better.
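A regular expression along these lines will split a URL into its parts; this is an illustrative pattern, not the one actually used in the script:

```javascript
// Illustrative URL-parsing regex: captures the directory, optional file
// name, and optional query string from a request URI. An empty file name
// signals that the default-document check is needed.
const reqRe = /^([^?#]*\/)?([^\/?#]*)?(?:\?([^#]*))?/;

function parseUri(uri) {
  const m = reqRe.exec(uri);
  return {
    dir:   m[1] || "",   // directory portion, ending in "/"
    file:  m[2] || "",   // file name ("" means directory-only request)
    query: m[3] || ""    // query string without the "?"
  };
}
```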

Oh, and I've fully documented the script as well.

JBJ is helping me with the text of the page. We're just about done with editing, etc. Hopefully I'll be able to post the update on Monday.

On a related note, working with JBJ on our web pages really helps with developing concise and understandable text, an important consideration for web-based content. Sometimes the process is a bit frustrating, but I think the combination of his writing skills and my tech knowledge leads to some pretty good "help" material.

I've also linked a contact form that e-mails me when someone wants help finding content. Fully documented as well.

Notes:
  • I used a regex tester while developing the regex for this code ... it helped a lot
  • Some minor changes were made between Win2K and Win2003 in the way a custom 404 script works. One apparent result, based on the URL in the browser's address bar, is that Win2K appears to redirect ASP pages to the 404 page while Win2003 does not.
Update 2005-12-29:
I've finally had a chance to review the content of the 404 page with JBJ and it's been updated. When I uploaded the page and did some testing I ran across a couple of errors that had to be fixed.
  • Using On Error Resume Next at the top of the page can cause serious problems with If...Then statements.
If...Then statements break if a syntax or runtime error is encountered: the "next" line in the code is executed, which is the line inside the If...Then code block, essentially an interpretation of the conditional as True. The only method I know of to get past this problem is to embed another If...Then statement testing for a value in the Err object.

I'm disabling the global On Error Resume Next statement. There are few statements that would outright kill the script on the global level. The main source of error messages so far has been the file check, which I've moved to a function that has error handling. If any further code presents problems I'll do the same.

I'll need to keep an eye on any 404s and do some testing. Since this is the 404 page the user should either see the "Page Unreachable" message or be redirected to the correct page. The user getting a script error is not a viable option for this page.

For more information, see Fabulous Adventures in Coding: Error Handling in VBScript, Part One
  • If a redirection from the 404Redirect.txt file contained a querystring component it would kill the script.
I worked around this by checking for a querystring in the txtURI variable when testing a redirect match. If one is found it is pulled out and added to the txtQueryString variable.
  • Fixed a problem with the default document check
The file test of the default document check was using the wrong value and thus would fail if a directory name was used without the trailing slash.

Also, I made the default document check result in a 3xx redirect. More likely than not this is the desired action.
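The querystring fix described above amounts to splitting the redirect target at the first "?" before the file test. A JavaScript sketch of the logic (the actual script is VBScript; txtURI and txtQueryString are its variable names):

```javascript
// Pull a query string out of a redirect entry's URI before the file-exists
// test; mirrors the txtURI / txtQueryString handling described above.
function splitQuery(txtURI) {
  const i = txtURI.indexOf("?");
  if (i === -1) return { txtURI: txtURI, txtQueryString: "" };
  return {
    txtURI: txtURI.slice(0, i),
    txtQueryString: txtURI.slice(i + 1)
  };
}
```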

Tuesday, November 22, 2005

Scripting: ActiveMenu script update

There was a problem with the ActiveMenu script when an image in the story detail was not square. Up until now the height of the image was used to calculate the height required by the story. Unfortunately this calculation would come out either too high or too low when the image was not square. To more accurately calculate the "area" used by the image, and thus the height used by the story, I modified the script to use a line bisecting the image diagonally as the basis for the calculation (currently 45% of that line's length). This is easily done using the classic hypotenuse calculation.
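The calculation itself is just the Pythagorean theorem; roughly (a JavaScript sketch of the idea, not the script's actual code):

```javascript
// Height reserved for a story, derived from the image's diagonal rather
// than its raw height, so tall-narrow and short-wide images get comparable
// space. The 0.45 factor is the "currently 45%" figure mentioned above.
function storyImageHeight(width, height, factor = 0.45) {
  return Math.sqrt(width * width + height * height) * factor;
}
```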

The code library has been updated with the new script.

Monday, November 21, 2005

Scripting: Win2003 IIS and SSI

Ever since upgrading to the new web server the page survey has not been functional. Upon further inspection it appears that a script accessed via SSI #exec in Win2003 has a number of problems when compared to Win2K:
  • no access to the originating page's query string
  • SCRIPT_NAME returns asp page rather than originating page
The former problem causes a page to not know that the survey has been submitted.

The latter problem causes a page to never display the survey in the first place (surveypage.asp is on the list of pages that are not to have the survey shown).

So far I have been unable to find much useful information regarding SSI in Win2003. SSI doesn't seem to get much attention these days beyond the basic how-to. I've posted a question to one of the Microsoft newsgroups; hopefully I'll get a useful answer.

See my post for more info: SSI #exec and querystring passthrough

Update 2005-11-28:
Of the two options I noted in my post linked above I've decided that the quickest method of addressing this problem is to associate the HTML files with the ASP processor. I'll test on the development server for the next week then implement on the live server.

Update 2005-12-05:
A response to my newsgroup posting confirms that there was a change in the way #exec operates. I've seen no major problems using the ASP processor on the development server so I've gone ahead and implemented it on the production server.

Tuesday, November 01, 2005

IERI: Revision thoughts

Just a quick post on some thoughts for the architecture for the revision of the utility.

Architecture
One of the purposes of the rewrite is to make the utility more flexible. Currently a user is required to have a fairly modern browser with JavaScript enabled and the RealPlayer plugin to use the utility to its fullest potential. Ideally the utility should have a base level of functionality even in the most crippled of browsers. One way to get to this level of functionality is to use progressive enhancement to add enhanced functionality through JavaScript on top of an already functional HTML core.

The way I envision modeling the utility so that this can be achieved is to go back to a classic three-tier system: presentation, logic, data. By further extracting the functions that handle data processing from flow control and interface presentation, I hope to create functions that can respond to requests from a variety of sources (rather than using conditional statements to determine the output). I imagine the easiest way to do this is to develop the functions as discrete scripts that process requests and return XML over HTTP (similar to web services, I suppose). That way the interaction between the front end and the back end does not depend on the source (browser vs. server script), allowing the back end to be simplified somewhat while allowing the front end to branch out.

I also think that modeling the utility in this way will make updates simpler; the current processing page only gets more difficult to edit as it grows.

Video
I've posted recently on problems with the timecoding, so no need to go over that again. Another issue, though, is that I'd like the video portion of the utility to be less restrictive. The utility should be able to handle formats other than RealVideo and players other than RealPlayer. If the security issues can be addressed and the object model of the players determined then it should be fairly trivial to determine which player to use at runtime.

Security
One of the biggest problems with the current utility is its security. The authentication system is ugly and not too secure. The user rights assignments are too broad. There's no way to sandbox users beyond limiting their rights (e.g., limit the Delaware group admins to administration of their own users).

I'm thinking that to work around this I'll create a permissions table that stores information on who can perform what functions. This should allow a finer level of authorization control. One area that I hadn't originally thought about until just now is the sandboxing of users ... something I'll definitely need to consider.
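A minimal sketch of what such a permissions check might look like; the table layout, field names, and sample rows here are all hypothetical:

```javascript
// Hypothetical permissions table: one row per (user, action, scope) grant.
// The scope column is what enables sandboxing, e.g. limiting a group's
// admins to administration of their own users.
const permissions = [
  { user: "alice", action: "admin_users", scope: "delaware" },
  { user: "bob",   action: "view_video",  scope: "*" }
];

// True if the user holds a grant for the action in the given scope.
function can(user, action, scope) {
  return permissions.some(p =>
    p.user === user &&
    p.action === action &&
    (p.scope === "*" || p.scope === scope));
}
```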

IERI: Forums updated

I needed to update the compatibility information for the utility. While testing is not difficult, accessing the wide range of browser/OS combinations is. I decided that rather than trying to maintain a list myself I'd try something new and post compatibility information in a forum. This way users can post their own experiences, so broader compatibility information can be obtained. I can also post specific compatibility problems related to a browser/OS combination plus any work-arounds that address them.

While this could have been achieved with the older forum scripts, I decided it would be a good time to update the scripts with modifications I've made since the IERI forum was initially released. Most of the updates are fairly minor.
  • Previously a different set of scripts had to be written for each forum. Now, each forum is given an index which the scripts use to determine which database tables to access.
  • The authentication of the forums has been better integrated with the utility's users table.
  • Users now have a setting that will trigger a notification when a new post has been added (currently a universal setting).
I made a few modifications to the scripts as well to accommodate some special options that were requested for the activity level forums. Actually, the request was for the ability to add notes to the analysis. I decided that the forums would sufficiently address this need by adding the ability to record whether a post relates to the activity analysis as a whole or to a specific sighting analysis. These options do require some specialized code that targets a specific forum, but I think the trade-off is worthwhile in this particular case.

Wednesday, October 19, 2005

2061 Connections mailing for September/October

The September/October 2005 issue of 2061 Connections was sent out Wednesday 19 October, 2005, at 2:51 PM EST to 3863 recipients.

Friday, October 14, 2005

Death of a Server

I've had to rearrange my workload to accommodate one of those things that happens at one time or another to everyone who uses a computer ... a crash. The web server suddenly died the other night with no warning. As near as my trusty cohort (DP) and I can tell, something went wrong with the hard drive. The error initially seemed to be with the controller (it kept complaining about the NVRAM), but placing the drive in a new machine has shown the drive itself to be faulty. Unfortunately the drive, while on a RAID controller, was not part of an array.

Luckily the web site has plenty of backups between the staging site, the development site, and the daily tape backup. We were able to load the files on a new server within minutes. Unfortunately, though the new server was eventually going to take over as the web server, it hadn't yet been set up. DP and I spent a few hours getting the bare minimum running. I haven't had much time to create documentation for the site set-up, so I'm having to do everything from memory. I've been able to remember most of the necessary settings, but I'm having to do a bit of trouble-shooting at the same time. I'll need to spend some time writing up documentation in the near future in case something like this happens again.

Wednesday, October 12, 2005

Site hack

Someone from IP 217.218.155.73 attempted to hack our site through some of the forms. Truth be told, I don't think the kiddies were truly attempting to hack the site. Based on evidence in the log files it looks more like they were trying to find code to exploit for the purpose of spamming. Still, without more investigation I can't say whether or not any real hacking attempts were made.

At any rate, the pages that were obviously toyed with were the search redirector (search.asp), the page survey (surveypage.asp), and the one-time notification page (notify.asp).

The search redirector does not really provide much of a vector for exploitation. Its main purpose is to redirect to the AAAS search, so there isn't much available to exploit. Some of the variables can be toyed with to crash the script, but that's about it at this point. I'm not too worried about it because the script won't crash unless the user deliberately tries to cause a crash.

The one-time notification page does provide a little more opportunity for exploitation since it connects to the MS SQL server. Most of the script is fairly benign, but an SQL INSERT is used. To mitigate any possible hack attempts I'm SQL encoding all user-supplied input. On the plus side, even if some malicious SQL code is run via the script, the connection to the database uses a limited-access SQL account.

The page survey proved to have a bit of a security oversight in the coding. Luckily the database used for that is MS Access, leaving fewer options for the kiddies. The page does access the database using a simple SELECT statement, however, which left the page open to possible hacks. I modified the statement so that user-supplied input is now SQL encoded.

While SQL encoding user-supplied input is a good start, I'm not entirely certain that it prevents the database from being hacked. I'll need to do some research to see if there are any known problems with the MS SQLEncode() function. Probably an even better strategy would be to drop direct SQL statement execution and move towards stored procedures or parameterized statements.
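For reference, the classic quote-doubling escape (essentially what SQL encoding does for string literals) looks like this. A JavaScript sketch for illustration only; the table and column names are hypothetical, and it shows exactly why parameterized statements are the safer path, since escaping only protects quoted string literals:

```javascript
// Double single quotes so user input can't terminate a string literal.
// This protects string contexts only, not numeric contexts or identifiers,
// which is one reason parameterized statements / stored procedures are safer.
function sqlEncode(value) {
  return String(value).replace(/'/g, "''");
}

// Hypothetical INSERT builder in the style of the notification script.
function buildInsert(email) {
  return "INSERT INTO Notify (Email) VALUES ('" + sqlEncode(email) + "')";
}
```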

I'm going to check for any further hacking from the IP range associated with this probe.


Update 2005-10-13:
Upon further checks I didn't really find any other hack attempts of note, just more of the same from different IPs. We're likely listed in some script kiddie's target list. On the positive side, the probes pointed out some possible security lapses in the scripts mentioned.

Webtrends irregularity

Webtrends crashed while running reports. Without some significant research I don't think I'll be able to determine the cause, so for now I'll just note the offending entries. The following two lines seem to be the source of the problem.

2005-09-07 19:42:49 202.147.177.70 - W3SVC12 PROJECT2061 198.151.218.130 80 GET /cgi-bin/404.asp 404;http://www.project2061.org/tools/benchol/ch2/ch2.htm 301 0 737 655 16 HTTP/1.0 www.project2061.org Mozilla/4.0+(compatible;+MSIE+5.0;+Windows+98;+{8EDEB5C2-2006-EB20-6F16-6A2EFD0A5822}) - http://www.google.com/search?q=uses+and+application+of+maths+in+every+day+life&hl=en&lr=&ie=UTF-8&start=30&sa=N

2005-09-07 20:14:52 202.147.177.70 - W3SVC12 PROJECT2061 198.151.218.130 80 GET /cgi-bin/404.asp 404;http://www.project2061.org/tools/benchol/ch2/ch2.htm 301 0 595 624 15 HTTP/1.0 www.project2061.org Mozilla/4.0+(compatible;+MSIE+5.0;+Windows+98;+{8EDEB5C2-2006-EB20-6F16-6A2EFD0A5822}) WEBTRENDS_ID=202.147.177.70-1418417440.29733860::8840CACE7014C0BFF54DA08DCA0E1F1B -


Update 2005-10-13:
Found some more lines causing a Webtrends crash. More of the same; I suspect the cause may be the user agent string containing curly brackets {}. I didn't see it before because I was running on just the web content, so not all those lines were being loaded. I'm not going to bother noting the lines; they're from the same IP and thus easy enough to find in the log.

Thursday, October 06, 2005

IERI: Firefox update breaks movie playback

The new version of Firefox (currently 1.5 beta 2) has new security restrictions which prevent plugins from accessing files on the local system if the originating web page is on the Internet. This will be a big problem with IERI because the current setup requires that the movie file be local. I was able to confirm the problem by trying to view and control a movie through the utility then saving the page with the embedded movie locally and trying again.

Note: this is an existing problem with Safari. Considering the current trend in browser security I expect the other browsers will likely follow suit in the near future.

There are a couple of solutions we could consider:
  • Code signing
  • Creating a custom application for the IERI utility
  • Creating an IERI "application" that consists of local html files for the shell
  • hosting the movie files on our servers and streaming them
Code signing
This is probably the easiest solution to implement, but requires one of two things: a lot of money or a lot of time. Money if we were to purchase a certificate from one of the recognized authorities (such as VeriSign). A code signing certificate is expensive, even from the cheapest of providers, and has to be renewed regularly. It is possible to obtain a free certificate from a provider such as CAcert, but that requires a more intense verification process whereby you actually meet with people to confirm your identity.

I know for certain that the problem can be addressed in Firefox by using signed code to modify a user setting when the movie plugin page is accessed:
try {
    // requires signed code: ask for permission to write user preferences
    netscape.security.PrivilegeManager.enablePrivilege("UniversalPreferencesWrite");
    // toggle the local-file security check off, then restore it
    navigator.preference('security.checkloaduri', false);
    navigator.preference('security.checkloaduri', true);
} catch (err) {
    document.write("Sorry, you can not enjoy this site because of " + err + ".");
}
However, I'm not sure if the various browsers even support a common method of code signing ... though I'm pretty sure they do not. If this method is going to be useful it would be nice to be able to use it in more than just Firefox. Since Safari has a similar problem already any method should also be able to address the problem in this browser as well.

References confirming access restrictions for local files:
References for code signing:
Hosting & streaming movie files
This method, while probably not any less expensive than code signing, seems to be the best of the options. The main reason I say this is that it will require minimal rewriting of the utility and a minor investment in hardware infrastructure (for movie storage ... namely some large hard drives). If we really want to put some work into it we could also build a media management system on top of the storage system (something like the DAMS system at UMich).

EK seems to like this idea, so perhaps we'll investigate it a bit more.


The other two options presented aren't really viable. While modifying Mozilla to suit the needs of the utility would provide a significant amount of flexibility in interface development, it seems a daunting prospect. And creating some kind of downloadable HTML core would likely cause more problems than it solves (mainly due to cross-site security restrictions).

No matter which solution we choose, this problem also presents complications for the rewrite of the utility. The whole timecoding mechanism will need to be reworked so that this problem can be mitigated (I doubt we can avoid it completely). The rewrite may be the best time to implement one of the above options.

Monday, October 03, 2005

Thursday, September 29, 2005

Bug: Internet Explorer standards mode and element scrollbars

I've run into a problem with my ActiveMenu script. When IE6 is running in standards mode (vs. quirks mode) the width calculation of a box gets a bit screwed up because IE6 doesn't take scrollbars into account while calculating the width of an object. This means that objects with a defined height and overflow: auto or overflow: scroll will end up wider than either a calculated or assigned width. The width discrepancy is a problem because the presence of scrollbars modifies the working width of a box, which affects the layout of the box.

While I was able to find a few mentions of this problem, no real information seems to be available. Certainly no indication of a fix through either styles or scripting. See:
Unfortunately none of the big guys out there seem to have documented this problem. I may post a bug report on QuirksMode.

How this affects ActiveMenu
If no width is specified for the story box then the area next to the floated headline list is calculated and used. Regardless of whether the width is specified or calculated, if scrollbars are added the story box is widened and no longer fits in the area next to the float. As a result the story box gets pushed below the float.

I worked around the problem by modifying the width of the story box when the document.compatMode property indicated that the page was being rendered in standards mode (via PPK) in IE6. One drawback of this approach is that the width is no longer flexible in IE6, though I could maybe add a function to correct it on resize of the parent div. I should maybe also consider disabling this check if the user has set the width of the story box via CSS, under the assumption that they have taken the scrollbar problem into consideration.
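The work-around can be sketched as a pure function (not the script's actual code; the 17px scrollbar width is a typical figure, assumed here, not a measured one):

```javascript
// In IE6 standards mode (document.compatMode === "CSS1Compat") the
// scrollbar is drawn outside the assigned width, so shrink the width we
// assign to keep the rendered box inside the space next to the float.
function storyBoxWidth(availableWidth, compatMode, isIE6, scrollbarWidth = 17) {
  if (isIE6 && compatMode === "CSS1Compat") {
    return availableWidth - scrollbarWidth;
  }
  return availableWidth;   // quirks mode or other browsers: use as-is
}
```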

I haven't tested in other versions of IE.

I'm not sure if this is also a problem with height calculations, though that would likely cause few problems with the document layout.

Friday, September 23, 2005

Mass e-mail list script updates

I modified the database/scripts so that actions performed by end users (subscribing, unsubscribing, updating e-mail address) are recorded in a separate table. The reason behind this update is so that I can keep the e-mail subscription database and the ichaos database synchronized.

The update is a simple INSERT that runs after each list update. The new table (MassEmailManage) has columns for old email address, new email address, action, and list id. I'll manually check it once a week or so to see if any ichaos updates need to be made.

Also, it'll be interesting to see how much activity these pages see.

Thursday, September 22, 2005

Home page redesign

I've pretty much completed the redesign of the home page. While the initial design phase was pretty easy the follow-up of creating a structurally clean, progressively enhanced page took a bit longer than I expected.

The page consists of four sections at present:
  • header
  • "What's New" headlines
  • quick links
  • brief introduction
The header and introduction were fairly straightforward. A float on the logo was the most advanced styling used. The other two sections took much longer since I wanted the page to work structurally even for browsers that couldn't handle the styling or scripting. I placed the headlines in the dt and the related story in the dd. If an image was also used, it would be placed in another dd above the story.

The next step was to style the list for browsers that aren't DOM-compliant (since I use the DOM extensively in the creation of active portions of the menu). What I wanted was fairly simple: the headline having a background color spanning the width of the content area, with the entire area clickable. Below the headline, and indented much like a normal definition list, is the associated story. If an image is present it is floated to the left of the story. I was able to achieve the design I wanted in modern browsers, and most of the base CSS is used in the interactive version as well. In Nav4 the menu looks pretty much like a normal definition list, but I decided to hide the images because Nav4's styling capabilities made it difficult to get the desired effect.

The script runs once the page is loaded. Since this is a first version, the structure expected is pretty strict (I expect I'll be able to clean it up when I work on future versions). The script applies a class to the menu so that the styling used for the interactive elements is added. The script creates a styled unordered list (ul) for the headlines. The headlines are pulled dynamically from the dts, so no extra work beyond filling out the definition list is required when implementing. I used a lot of DOM coding to generate the list, such as createElement, appendChild, and insertBefore.

I have yet to work on the printable view, but I'll probably hammer it out while I'm documenting the code.

Multi-Column List
The design for the quick links was fairly advanced considering the capabilities of current web browsers ... a two-column list. While I could have easily used a table, or two lists and floating, or some other mechanism, the desire to maintain a semantically meaningful structure made these options undesirable.

With this in mind I tried to find a method of spreading a list across two columns that would not require breaking the list into parts. Plus I wanted to keep the amount of work required to edit the list at a minimum. After a bit of searching I found an article about multi-column lists on builder.au that gave me a good start (later ALA published their own article on this topic as well). I ran into a few problems during development ... mostly with IE (see below). The multi-column list feature I ended up with isn't nearly as robust as I had hoped to make it, but it performs the intended function pretty well.

The list script relies on a standard list (ul or ol). The user doesn't need to do any prep-work beyond applying a class to the list, attaching the JavaScript, and attaching the base stylesheet. Of course extra styling can be done by the user, and I had to do this for the home page so that I could make the multi-column list function flexible enough for library code.

Once the page loads the script looks for any list with the required class. Once a list is found a second class is added that provides the base styling needed for multiple columns (though I'm thinking of moving some of this functionality to the script ... we'll see once I start working on the documentation). The script determines the height of each list item and then uses that information to decide where to divide the list (right now the script can only handle a two-column list). The first element of the second column is given a top and left margin so that it sits next to the first column. The following list items are then given a left margin. Finally, the last element of the list is given a height value so that elements following the list don't overlap it.
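The column-split decision boils down to finding the item where the running height passes half the total. A sketch of that step (not the script's actual code, which works on live DOM measurements):

```javascript
// Given the measured height of each list item, return the index of the
// first item of the second column: the point where the running height
// first reaches half of the total. Items from that index on get the left
// margin that forms column two.
function columnSplitIndex(itemHeights) {
  const total = itemHeights.reduce((a, b) => a + b, 0);
  let run = 0;
  for (let i = 0; i < itemHeights.length; i++) {
    if (run >= total / 2) return i;
    run += itemHeights[i];
  }
  return itemHeights.length;   // degenerate: everything stays in column one
}
```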

There are a couple of issues and enhancements I'd definitely like to make. Currently I have the list item markers disabled due to various browser problems. The list doesn't handle width resizing at all (can I use percentage for the positioning?). I had to use a recursive function to watch the height of the list to handle height resizing (there may not be any other way around this, but maybe percentages or ems can help here as well).

Again, I have yet to work on the printable view, but I'll probably work on it while I'm documenting the code.

References
IE quirks
Unrelated to this work I came across an article that provided some information that helped during development. On the MSDN site there's an article about the hasLayout property. The property seems fairly innocuous, but reading up on it is important for understanding some of the styling weirdness that can be encountered in IE. Even better, there's a link to another article that gives a lot of good info, including how styling a list with layout can get ugly.


Now that I'm done with the home page I'll be spending a few days documenting the scripts for the code library.

Thursday, September 08, 2005

JS Library: Show/Hide Content

I've finally, finally, completed some code documentation. This is a little JS I wrote to hide content and then show it when a link is clicked. It's nothing special, but I've done my best to make it as easy to use as possible.

Show/Hide Content Files

Wednesday, August 31, 2005

404 Script

I updated the 404 script so that the entries in the 404Redirect.txt file can now include an entry to indicate a permanent redirect (301) rather than the temporary redirect used by ASP when you issue a Response.Redirect (302). The update will go public tonight, tomorrow I'll spend some time getting rid of some of the legacy directories on the site.
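The 404Redirect.txt format isn't reproduced here, so this sketch assumes a simple whitespace-separated line with an optional trailing "301" flag; the actual file format may differ:

```javascript
// Hypothetical parse of a 404Redirect.txt entry: "oldPath newPath [301]".
// Entries without the flag fall back to 302, ASP's temporary redirect.
function parseRedirectLine(line) {
  const parts = line.trim().split(/\s+/);
  return {
    from: parts[0],
    to: parts[1],
    status: parts[2] === "301" ? 301 : 302
  };
}
```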

Home page redesign

I've completed the design work for the new home page. Most of the modifications requested after the first version were minor and so I was able to spend a little time fleshing out the structure that will be used. I'd like to structure the content as simply as possible, using JS and CSS to implement the advanced functionality. I've already got some ideas about what areas should be dynamically generated. The only part that's giving me a little trouble is the two-column link list ("Quick Links"), so I'll need to spend a bit more time on that.

We're meeting tomorrow to go over the design and make our final decisions. Hopefully I can have the new page up in a couple of weeks.

Monday, August 29, 2005

Testing Adobe Illustrator's SVG Output

FM asked me to take a look at creating SVG documents from Adobe Illustrator so that we could determine what needs to be done to make the documents structurally coherent. Here's a brief summary of the information I've gleaned from minor testing (using Illustrator 11.0.0).
  • The SVG width and height are defined by the elements contained in the Illustrator document, not the page layout, using a coordinate system similar to the SVG coordinate system.
  • Every element that is explicitly named is given an id attribute matching that name. Spaces are converted to underscores. Non-alphanumeric characters are converted to hex-encoded values of the form "_x5F_" (an underscore). Duplicate element names are modified by adding "_#_" where # is a number starting at zero.
  • Each layer and group is surrounded by a g tag.
  • The elements are created bottom-to-top if looking at the Layers palette.
  • Using the rectangle tool will produce a rect element in the SVG. Fill and stroke on a rectangle are defined on the rect element.
  • Using the rounded rectangle tool or stylizing a rectangle with rounded corners will produce a path element in the SVG. Fill and stroke on a path are defined on the path element.
  • Text is rendered as a text element transformed from 0,0 with multiple positioned tspan elements. The tspans appear to be created based on kerning and spacing rather than line wraps.
  • Text created using the Type Tool is rendered as an individual object.
  • Text created using the Area Type Tool is rendered as a group which contains the object defining the area and the text itself. If the area text is named in Illustrator the SVG group, object, and text are all given the same id.
  • Using the Line Segment tool creates a line element in the SVG. Applying a filter style to create an arrowhead creates a group with the original line and a path defining the arrowhead; this renders to SVG as expected. Applying an effect style to create an arrowhead redraws the line as a path, but when rendered to SVG the line becomes a group containing another group, which holds the original line and a path defining the arrowhead.
Note: In Illustrator filters and effects affect the object on which they're used differently. Generally filters are non-editable, effects are editable.
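Based on the naming rules above, the id conversion can be approximated like this (a sketch from limited testing; the exact hex casing is a guess, and the duplicate "_#_" suffix handling is omitted):

```javascript
// Approximate the id Illustrator generates for a named object:
// spaces become underscores, other non-alphanumeric characters become
// "_xHH_" (hex-encoded, e.g. "_x5F_" for an underscore).
function illustratorId(name) {
  var out = "";
  for (var i = 0; i < name.length; i++) {
    var ch = name.charAt(i);
    if (ch === " ") {
      out += "_";
    } else if (/[A-Za-z0-9]/.test(ch)) {
      out += ch;
    } else {
      out += "_x" + ch.charCodeAt(0).toString(16).toUpperCase() + "_";
    }
  }
  return out;
}
```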

Friday, August 26, 2005

Workshop E-mail Reminder

The workshop reminder was sent out Friday, 26 August 2005 at 2:34:16 PM to 2395 recipients.

Wednesday, August 24, 2005

Scripting: show/hide content script update

I've made some major changes to the show/hide content script. I've been trying to write or rewrite much of the client-side code we use with the ideals of progressive enhancement in mind. The show/hide code pretty much went the opposite direction when I created it, but now I think it's more aligned with the goals of PE. Read on for a quick summary of the changes.

Simpler implementation

I was able to change the implementation process so that only three steps are required:
  • Create the show/hide content inside an element or surround the desired content with a div or span. Place an anchor inside the element and give it a class of shContent.
  • Attach the script file.
  • Attach the default stylesheet. Make any additional style declarations as desired.
A complex system of divs and classes is no longer needed to indicate what text is part of the show/hide routine. The script now looks for the anchor and then performs all actions on the container tag. Not only does this simplify implementation, but it also fixes some of the problems I noted in a previous post regarding structure vs. compatibility.
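The gist of the simplified approach, sketched with duck-typed elements so the logic stands on its own (the function name is hypothetical; in the page these are real DOM nodes, with the anchor found via its shContent class and the container being its parent element):

```javascript
// Sketch of the simplified show/hide wiring: hide the container on
// load, then toggle it each time the action link is clicked. Elements
// are duck-typed here so the logic is testable outside a browser.
function initShowHide(link, container) {
  container.style.display = "none"; // hidden until requested
  link.onclick = function () {
    container.style.display =
      container.style.display === "none" ? "" : "none";
    return false; // cancel the default anchor jump
  };
}
```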

Updated anchor linking

Links coming into a page can now use the show/hide content anchor rather than having to create an extra anchor on the show/hide action link. The script will then determine where the action link is, activate it, then scroll to that position.

The exception is browsers that don't run the code (such as Nav4). These browsers land at the show/hide content rather than the action link. This isn't the preferred behavior, but I think it's acceptable. I'll continue to investigate whether I can get them to land on the action link.

I also added a "scroll to" action to the script for non-show/hide anchors accessed when first entering a page. This addition was necessary because Firefox jumps to an anchor prior to running the script, causing the page to shift after the final scroll position has been set. This is a problem since the page can shift significantly when more than a few show/hide sections are present on the page.

Bug fixes

Links from other pages with an anchor matching a show/hide content anchor on the current page were broken by the current update. This bug was caused by the removal of the shLink class, which was previously used to identify the action link. The script now checks the page each link points to in order to determine whether the link is supposed to affect the show/hide content display.

CCMS web site logging

A few weeks ago MK asked if we could generate some usage statistics for the CCMS web site. At the time I was unable to find any historical usage logs. After a lot of research and examination of configuration files I had things set up so that the logs were rotated. I checked in on the log files today and discovered that only a week's worth of logs was available. Since there were some logs available, but not the amount I had expected, I figured something was removing old log files. I did some more investigation, looking mainly at the crontab. As it turns out, the daily cron job was deleting any log file older than seven days. I commented out the offending line and will check tomorrow to make sure everything is working as expected.

Wednesday, August 17, 2005

IE CSS bug: margins

I was working on the CCMS web site stylesheet to update it with some things I've learned over the past six months, and to make the pages more friendly to Macromedia Contribute, when I ran across an IE bug. If you have an absolutely positioned object on a page followed by a block-level object, the latter object's top margin will not be calculated properly. Actually, it won't be calculated at all ... it's set to zero. Explicitly defining the margin in the stylesheet for the latter object does not solve the problem.

I found a reference to this bug on the Channel9 Wiki page Internet Explorer Support for CSS (look for the bullet "Incorrect Margin Calculations"). No work-arounds are provided.

See the demo code
Tested: IE 6 on Windows XP SP2 (6.0.2900.2180.xpsp_sp2_gdr.050301-1519)

The only work-around I've found so far is to place positioned objects at the bottom of the page. I'll need to analyze the content to determine if this is an appropriate solution, though, since I've been trying to keep the CCMS site more aligned to the web ideal for design (semantic rather than visual). It's possible to get the desired spacing using padding instead of a margin on the affected object, but if we were ever to make modifications to the design that involved background colors or images we'd run into problems.


Update 2005-08-18:
I decided to use a float on the menu to get around the problems I was encountering. This led to another problem, but this time with Firefox. The navigational menu was being positioned further down on the screen than I expected. Turns out I was running into my old friend the margin-collapse rules. Since the first element on the page was set to display: none and the second element set to float: left, it appears the third element was actually interpreted as the first in the flow of the document.

According to the W3C spec:
Two or more adjoining vertical margins of block boxes in the normal flow collapse. The resulting margin width is the maximum of the adjoining margin widths.
The "body" box and the "content" box come under this rule. Since the content box has a 150px top margin, it was collapsing with the body to have a shared 150px margin. Thus, when the top margin was determined for the navigational menu it was based on the assumption of a 150px margin on the body.

See the demo code
Tested: Firefox 1.0.6 on Windows XP SP2
Browser String: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.10) Gecko/20050716 Firefox/1.0.6

To work around the problem I gave the body a 1px padding.


Update 2005-08-24:
I've completed the updates to the CCMS stylesheets. I had to scratch some visual updates for Nav4 when I realized they made the site's pages unprintable. In the end, Nav4 continues to use the same bare-bones style as in the previous iteration.

Additionally, I had to scrap any efforts to make the CSS more Contribute-friendly. After extensive testing I've come to the conclusion that Contribute's CSS support in editing mode is almost non-existent. I've recommended an update to Contribute 3, which appears to be much improved in this respect.

Tuesday, August 09, 2005

2061 Connections mailing

The July/August 2005 issue of 2061 Connections was sent out Tuesday, 9 August 2005, at 2:21 PM EST to 3878 regular subscribers plus an additional 50 KSI participants.

Monday, August 08, 2005

Form reset

FM was curious as to why a form would sometimes reset when you left a page and then used the "back" button to return to it. I've noticed this before and assumed it had something to do with cache control headers. I was right. I did a test using both server-generated headers and browser-simulated headers (meta http-equiv). The results:

Server-generated headers:
Two of the response headers had no effect on the form data: Expires and Pragma. The Cache-Control header, however, did have the effect being tested for when the value was no-store. IE also shows this effect when the Cache-Control header has a value of no-cache, though no effect was seen in Firefox.

Browser-simulated headers:
The three meta http-equiv tags of interest (expires, pragma no-cache, cache-control no-cache) do not appear to cause the browser to reset a form on returning to a page using the browser history (back/forward). Note: I tried the full complement of options for cache-control with the same result.
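For reference, the three browser-simulated headers look like this in markup (the content values shown are representative, not the exact ones I used):

```html
<!-- The three meta http-equiv tags tested; none of them caused the
     browsers to reset the form on back/forward navigation. -->
<meta http-equiv="expires" content="0">
<meta http-equiv="pragma" content="no-cache">
<meta http-equiv="cache-control" content="no-cache">
```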

Further testing would need to be done to see what the effect is on other browsers (Opera and Safari being of most interest).

During testing I also found that the browsers treat a hard reload ([Shift]+Reload) differently. IE will always completely reload a page and its headers. Firefox, on the other hand, will reload the page but not update the headers until the browser is closed.

Friday, August 05, 2005

Link Checking

I told MK I would check all the links on the PSL Web site so I could provide a list of those that were broken. I didn't have an application at hand to do the work, but, luckily, finding a link checker is pretty easy these days. I ended up using the first one I found and I like it. It's fast, and provides a streamlined interface (enter the starting URL and receive a spreadsheet-style output of the results).

Speaking of link-checking ... I have a Firefox extension that'll do it for an individual page. Fairly handy, but I'd been having problems with it not working on our web site. The extension uses the HEAD request verb to determine whether a page is present or not (speeds up the check by not having to download all the data). For some reason every SSI page (that would be every HTML page) on the site would return a 403 error. Turns out that the SSI ISAPI plug-in for IIS by default doesn't allow the HEAD verb. I added it to the list of allowed verbs.

There's no reason I know of to disallow that verb, and disallowing it can actually hurt the performance of spiders, caches, and browsers. I don't pay much attention to performance on our site since we don't get enough traffic to stress the server, but it'd be interesting to see if the change has any effect.

Thursday, July 28, 2005

Home page redesign

One of the things that the Web Oversight Committee (WOC) decided we needed to work on is a redesign of the home page. The eternal question on our mind has been whether we need to target our audience that uses the site on a regular basis or cater more to those unfamiliar with 2061. With our limited working area it's really an either/or proposition. To complicate this decision, a number of our most vocal regulars seem incapable of using methods that could get them to the content they want quicker (bookmarking the exact pages/sections, searching, using the left-side navigation).

The survey we posted seems to indicate that we get a lot of return users. Or, I should say, the people most likely to participate in the survey were return users. The survey (when considered alongside monthly usage statistics) also seems to indicate that our users visit the site at best once a month. Finally, most respondents seem to indicate that they like our site the way it is.

Considering this data (which is not scientific by any means) I'm of the mind that we should continue to cater to new users on the home page. I think this tactic will help all our users (new & old) find the content they're seeking. The specific design of our top-level pages needs to be improved, but I think the category method we've chosen is a good way to guide users to the information they need. Updating the home page interface while maintaining the current scheme, removing the "New & Popular" page by relocating its content to the relevant category pages, and building out the category pages so that relevant content can actually be found would do a lot more for site usability than just changing the style of the home page.

The category pages present the biggest problem in our current setup. As I've mentioned before, we don't really do a good job of presenting information that is relevant, useful, and important to our research on these pages. My current thinking is that we could use these pages to list a few groupings of information: a general overview, what projects we're working on, and relevant tools and findings. Project pages would provide a good place to link to project descriptions/proposals, findings, and relevant tools. A lot of the article pages on our site relate to specific projects, so I think we could link to these from the project pages as well.

I think that kind of redesign would greatly help our users. After that we could even start looking at a customizable home page again.

The WOC seems to think we can make the site better by "fixing" the home page. The fix we're now looking at is to stop using the categories and change the home page into something resembling the current "New & Popular" page. This, even after I pointed out that we don't generate enough new content and that our regular users only visit once a month at best. I probably didn't help my case by recommending methods of generating "new" content more often. I don't really expect this will do much for our site traffic or usability.

I guess I never made my point strongly enough. I've always been too willing to let people have their own way (even if I think another way would be better for them). Beyond that, though, I think some people are paying too much attention to the proverbial (and clichéd) squeaky wheel. I don't think this is the solution we need.

So much else I could say (on how better content and better use of current content would improve site usability more than more links would, on how site traffic has to be seen in the context of our content and users, on how it's better to address the underlying problems with the site rather than shuffle the deck) but I'm running out of steam and my thoughts are starting to jumble. Besides, the course of action has been chosen.

I've got a mock-up ready and will hopefully get feedback early next week. I should have a working page in the next week or two.

Once that's done I'm going to return to working on the category pages. If those turn out the way I'm envisioning them I should be able to start talking again about catering to all users rather than those noisy ones.

CCMS Web Site

There have been a number of content updates to the CCMS site the past few weeks. Once I make a little more progress on the 2061 site mini redesign I need to get together with communications and discuss how to improve the structural design and navigation of the CCMS site.

Productivity (or lack thereof)

I haven't been getting nearly as much done as I'd like lately. Not sure what's going on but I'm having trouble focusing on my work. Normally I wouldn't mention this on my work journal, but I'd have to say that right now I'm producing about 25% of what I should be able to produce in a normal week. And it's been going on for at least a month now.

I don't have a plan in mind to get back on track, but I'll need to figure out something. Next month I intend to start spending a lot more time working on the next version of IERI (no, really ... I mean it this time!). If I don't get my head back in the game I'm not going to get very far on that project.

I'm hoping my funk is a sleep and exercise problem. I need to work a little harder at getting to bed at a decent hour.

Tuesday, July 12, 2005

Quotation/citation

I've come to realize that I have been mis-using the cite tag (i.e. for the title of a book that isn't actually being cited). The name of the tag itself leads to the obvious usage scenario and yet I've managed to mangle it. The structurally friendly method of marking up a title is, apparently, to use CSS (and I suppose a span tag). I find the span/css method to be somewhat unappealing, but I suppose if the W3C intended it to be that way then I guess I'll live with it.

My research into the cite tag has also led me to the blockquote tag. I've been wondering how the citation for a blockquote should be marked up. Apparently the correct method is to reference a URI in the cite attribute of the tag. I find this somewhat unappealing as well. Most browsers do nothing with the attribute, leaving no visual indicator of the source of a quotation. Obviously the citation is not part of the quote, so the cite should not be contained within the blockquote tag, but the natural relationship between the two is otherwise lost (at least structurally).

This also brings up the nature of the cite attribute. It should contain a URI referencing the source. Some of our sources are web-based, some are not. The URI set has subsets of URL and URN. The URL subset is well-known to me. The URN subset is not. Based on brief research, a URN is a unique identifier maintained by discrete organizations (e.g. the ISBN). The URN appears to be preferable thanks to its universal (i.e. web, print, multimedia) and persistent nature.

I've decided to mark up quotations in a way that is likely incorrect and redundant (at least for the time being). I'll use the q and blockquote tags along with the cite attribute, but I'll also include the cite tag inside the quotation for visual reference. I'd like to use URNs for the citation attribute, but may find it to be a bit difficult to implement at this point since most of our citations are papers. Also, URNs could be problematic since browsers don't currently have URN addressing capabilities.
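Concretely, the markup pattern I've settled on looks like this (the URL is a placeholder):

```html
<!-- Quotation with both the cite attribute (machine-readable source)
     and an inline cite tag inside the quotation for a visible
     reference. The inline cite is admittedly redundant and may be
     replaced by a scripted visual reference later. -->
<blockquote cite="http://www.example.com/source">
  <p>The quoted passage goes here.</p>
  <p><cite>Title of the Source</cite></p>
</blockquote>
```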

Even if I stick with URLs, most of the pages on the P2061 site (and elsewhere for that matter) aren't permanently located. I sometimes wonder if I'll ever hit on the right structure. Since we're not likely to implement a CMS anytime soon I'd like to develop some kind of permalink setup using document IDs and a site scanning script that locates the current location of a document. This would allow permanent references.

At some point after I'm done correcting the document title and quotation/citation markup I'll go back and do further research on the quotation/citation question. In the end I'll probably remove the cite tag and use progressive enhancement (read JS) to implement a visual reference.

Source of information for this post:

Thursday, June 30, 2005

Intranet - File Search

FM wanted me to update the file search so that it was easier to use some of the advanced search capabilities, like limiting the results based on file name, file location, and modification date. It took me a few days to work out all the details ... there doesn't seem to be quite as much useful information on Index Server (IS) as there is for other MS products (though I did find enough to help me work out what I needed to do).

First I modified the form so that new fields were available for the above-mentioned limiters. The fields are in a shContent div so that they aren't always shown. When the user clicks an "advanced options" link the new fields are shown. Second, I added some conditionals to the query builder so that the new fields are processed.

For the modification date fields I included a JavaScript date picker that I scarfed from http://www.howtocreate.co.uk/jslibs. I had to make a few changes to the script to get it to work more like I wanted. I think it looks pretty good, though it could definitely use some updating. Of course, some of the updates I have in mind could break it for Nav4. Not a problem in this particular instance, but perhaps a bit of a hassle for further usage.

There's a good amount of information stored in the IS catalog (beyond the content index) that can be searched using catalog field references. With this in mind I could have provided even more options for search parameters, but I didn't feel like taking the time to investigate what was available to search and what syntax would have to be used. The changes I've made are really just for FM anyway because nobody else (as far as I know) really uses the file search.

Index Server provides a couple of different ways to search the information in the catalog. I used a single search method to make it easier to code and use. The file name and modification date are searched via catalog field references (e.g. @filename index.htm and @write > 2005/06/01). The path is limited using the query utility's scope function rather than searching on the path field.
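A rough sketch of how the query builder combines the new fields (the function and option names are mine for illustration, not the actual code):

```javascript
// Hypothetical sketch of the query assembly: content terms are combined
// with catalog field references for file name and modification date.
// (The path limit is applied via the query scope, not the query string,
// so it isn't shown here.)
function buildQuery(opts) {
  var parts = [opts.terms];
  if (opts.filename) parts.push("@filename " + opts.filename);
  if (opts.modifiedAfter) parts.push("@write > " + opts.modifiedAfter);
  return parts.join(" and ");
}
```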

The field search provides a lot of flexibility when used with wildcards. I've been having some trouble figuring out the various syntax options, but I think I'm starting to get a bit of a handle on things. Still, some of the syntax options throw me off a bit so I would need to do a bit more reading before I felt comfortable using them.

I think the search syntax would be good information to provide were I to develop some kind of help documentation. With that information the user could perform more advanced searches without me having to cook up more options. That's very low priority, though, and I don't expect to ever get something like that done. For the time being I've tried to use what I think are the most generic search parameters so that the user does not have to really know much about the search syntax in order to get useful results. No doubt I'll hear feedback if things don't really work out that way.

Wednesday, June 29, 2005

IERI Utility

KM was having a problem with some of the timecoding she had done disappearing. Luckily there was no need to panic about data loss this time. After our last major data loss I started a daily backup that maintains three months worth of database copies.

The source of the data loss was my own carelessness. When I implemented the sighting-level timecode inside the sightings summary table I copied a lot of the code from the activity-level process. In the activity-level process the code checks the timecoding against all activities in the sequence. If an activity does not have a numeric timecode then it is reset to null. The modified code for the sightings did the same. The problem is that when you're working on a particular activity, the sightings for the other activities are not represented. Since no timecoding was passed to the script for those other sightings, it assumed they were supposed to be set to null. This is obviously not correct. I addressed the problem by limiting the sightings reviewed to those in the current activity.
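The shape of the fix, sketched with made-up field names:

```javascript
// Sketch of the bug and fix: when saving, only sightings belonging to
// the activity being edited are checked and reset. Sightings for other
// activities (which are absent from the form post) are left untouched.
function applyTimecodes(sightings, submitted, activityId) {
  for (var i = 0; i < sightings.length; i++) {
    var s = sightings[i];
    if (s.activityId !== activityId) continue; // the fix: skip other activities
    var value = submitted[s.id];
    // Non-numeric or missing timecodes are reset to null, as in the
    // activity-level process.
    s.timecode = (value !== undefined && !isNaN(parseFloat(value)))
      ? parseFloat(value) : null;
  }
  return sightings;
}
```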

Wednesday, June 22, 2005

IE Border bug, again

Working on the Instructional Components Prototype I've run into yet another IE bug. This seems to be related to a bug I dealt with recently, but I don't know of a work-around. Then again, I didn't do as much research this time. The bug, in this particular instance, appears to occur when a block object with a border contains another block object with a negative margin set via CSS. IE appears to have trouble rendering the containing object correctly. The border appears to break and re-render.

Limited testing done with IE 6.0.2900.2180.xpsp_sp2_gdr.050301-1519

Code to produce the bug (see sample page):
<html>
<head>
<title>IE Border Bug</title>
<style type="text/css">
.container {
border: 5px solid #000000;
padding: 10px;
}
.content {
margin: -5px;
}
</style>
</head>

<body>
<div class="container">
<p class="content">Questions</p>
<p>blah</p>
</div>
</body>
</html>

Friday, June 17, 2005

CCMS Web Site

Major/minor update for the CCMS Web site. Lots of text, but really not much work to do beyond the usual clean-up after bringing the text in from Word.

I did have to make a minor change to the show/hide content script. There wasn't a function to handle situations where a hash is present in the URL. Luckily I had previously worked on the needed functionality for a version of the script used in the Instructional Components application. After copying in the relevant code I only needed to make a few modifications so that it worked correctly. There are so many versions of that thing running around right now that it's starting to get a little difficult to keep track. I should update all versions to a common standard. Perhaps I'll spend some time updating the common scripts used across projects while I'm working on code documentation.

With the content growing I think we'll need to consider modifications to the navigation to make the site easier to use. I don't really have a lot of time to worry about it right now, though, since I need to spend some time updating the Project 2061 Web site navigation.

2061 Connections

The May/June 2005 issue of 2061 Connections was sent out Friday 17 June, 2005, at 2:06:28 PM EST to 3855 recipients.

Monday, June 13, 2005

Server Maintenance

Doing the usual Monday morning thing I saw that IIS SMTP had some messages sitting in the queue that were a bit old. I did a little digging in the log files and saw that as with other instances it had to do with the receiving server on outgoing mail returning a 451 code. This is a pretty common spam-fighting method (one I'd probably use myself if Mercury supported it). It seems to give IIS fits, though. IIS seems to attempt the initial delivery, but retries are never done. The log file doesn't indicate that the QUIT command was sent and I wonder if that has something to do with it. Maybe IIS never realizes the connection is cut and so never retries.

I should report this to MS, but I don't know that it's really worth it. For the time being I've unchecked "Attempt direct delivery before sending to smart host" in order to avoid the problem in the future. I'll need to shut down the service, though, so I can clear out those older messages.

Thursday, June 09, 2005

IERI Utility

I'm in the process of making a few modifications to the utility. Though not terribly difficult I'm doing them quick and dirty. Since the utility is going to require a complete rewrite I don't think it'll hamper future efforts too much.
  1. Updated the text of the indicators
  2. Removed the compare-to-literal rating drop-down from the enacted and its accompanying field for explanatory text
  3. Created a function that allows the user to choose pre-defined idea text
  4. Modified the timecoding so that the user can edit the start and end times manually rather than having to load the movie and click the clock button
  5. Made it possible to timecode the sightings from the summary table on the activity page
While working on (4) I discovered that Firefox has problems interacting with the plug-in through JavaScript. While it can grab the current elapsed time and jump to a specific time, it doesn't seem to be able to start the movie from a stopped state. I'm not going to worry about it too much at this point because when I rewrite the utility I'm going to make it a little more flexible regarding players, as well as do more extensive compatibility testing.

Items (4) and (5) together have required quite a bit more work than expected. Though not terribly difficult from a programming perspective, my coding style has changed a great deal since I initially created the IERI utility a few years ago. I'm much better now at creating code that's flexible and modular. The functionality will be ported to the new system, but I imagine the actual code will be quite a bit cleaner.

I'm not sure when I'll have a chance to work on the utility update, but I hope to begin in the next month or two. A number of other items are still hanging over my head, however, before I can begin. But the real impediment is the constant stream of last-minute work.

Tuesday, June 07, 2005

Instructional Components Prototype

I've been asked to make some fairly minor modifications to the ICP. Mostly I need to modify tab colors and content borders, no big deal.

In the process of testing the changes I've run across an IE CSS rendering bug. If you have a floated element inside an object with a border the border is ... um ... corrupted. It appears ok upon first glance, but if you scroll away and back again the border is broken.

The problem seemed familiar. Had I read about this particular problem before? Not sure, but I'm pretty sure that I've read about similar bugs. A little searching turned up a post on notestips.com (Fixing the scrolling DIV display bug in IE) that discusses the bug. Supposedly IE has problems rendering layered content (which probably includes floats since they are outside the flow of a document). Essentially IE renders layers differently based on whether or not something resides below them (such as a background). To fix the border problem I just added a background color to the containing DIV.

Tuesday, May 24, 2005

SVG Performance

There have been some complaints about the performance of our SVG-based apps (mostly from Mac users). I tried some basic things to see if I could tell where any kind of performance hit was coming from. First I tried an Atlas map that was simpler than the ones we've been using. This definitely made a difference in performance, but things were still not "speedy." Next I tried using some built-in SVG animation on the more complex map to see if it performed any better than scripting-based animation. It was not a pretty sight, with the normally "smooth" animation degrading to a jumpy, stuttery mess.

I've come to the conclusion that the main hindrance to performance is the SVG viewer itself. I don't know if the slowdown is due to the complexity of the maps produced by Illustrator or to the number of objects that have to be rendered. I'm working on producing a "cleaner" document in hopes that things will run a little smoother.

In the end, though, I think the calculations to produce and modify a complex vector-based graphic are too processor intensive with the current generation Adobe SVG viewer. Perhaps some better hooks into OS graphical subsystems would help, but that's certainly beyond my means. I'll be doing some research to determine if there are some document coding techniques that can be used to improve performance.

Thursday, May 19, 2005

Scripting - form validation

I've made a decision regarding my automated form validation scripts. Previously I had set them up to skip server-side validation when client-side validation occurred. I'm reconsidering this decision, though. I've decided that the minor savings in server load is not worth the potential problems caused by someone circumventing both validations, which takes little effort (at least for someone technically savvy). From now on I'll run both client- and server-side validations.

At any rate, the real benefit of client-side validation is the immediate feedback provided to the user. On the applications and pages we produce, server-side validation barely registers a blip on server load thanks to low concurrent usage of the pages that require it.
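As a rough sketch of what one of these paired checks looks like, here is a hypothetical client-side rule (the field name and the e-mail pattern are illustrative, not the actual generated validation code):

```javascript
// Minimal sketch of a client-side check for a hypothetical form with a
// required "email" field. The server re-runs the same rules on submission,
// so this exists purely for immediate feedback to the user.
function validateEmail(value) {
    // Loose pattern: something@something.something
    return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value);
}

function validateForm(values) {
    var errors = [];
    if (!values.email || !validateEmail(values.email)) {
        errors.push('Please enter a valid e-mail address.');
    }
    return errors; // an empty array means the form may be submitted
}
```

Because the server applies the same rules again, someone bypassing the client-side check gains nothing.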

Tuesday, May 17, 2005

Components Prototype

FM wanted me to modify the components prototype so that each "tab" had a different color, a tab mouseover/selected color-change effect, and a border around the content matching the selected tab's color. I had to make some modifications to the template to simplify the implementation. In the original the tabs were put in an editable region, which makes a mass change like this the same as if no template were used. I moved the tabs back to the templated region, removed some excess table code, and put the content in a div tag so that I could give it a border. I had originally thought of using template properties to select which tab should be highlighted, but I decided that it would be just as easy to use JS to do the job. I set up a script that checks the URL on page load and selects a tab based on the results of a search(). The "selected" state of the appropriate tab is set using the same Macromedia function that's used by the Dreamweaver "Navigation Bar" code. For the content border I modify the class of the div containing the content.
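The URL check can be sketched roughly like this (the tab path fragments are hypothetical; the real script hands the result to the Macromedia navigation-bar function):

```javascript
// Hypothetical path fragments, one per tab, in tab order.
var tabPaths = ['/overview/', '/examples/', '/resources/'];

// Mirror of the String.search() check described above: return the index
// of the first tab whose path fragment appears in the URL, or -1 if none
// match. In the page this would be called with location.href on load.
function selectedTab(url) {
    for (var i = 0; i < tabPaths.length; i++) {
        if (url.search(tabPaths[i]) !== -1) {
            return i;
        }
    }
    return -1;
}
```

Note that String.search() converts its string argument to a regular expression, which is fine here since the fragments contain no special characters.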

I'm using a dark shade for the mouseout state of tabs that are not selected. For tabs that are selected and for the mouseover state I'm using a light shade. This is the style we've used in past navigation. Looking at the result, however, I think I may have to change this. The non-selected tabs are too prominent, especially compared to the lighter shade used for the selected tab. I suspect this has to do with the fact that a different color is used for each tab. Taking this into consideration, I think it would work better to use a washed out color for the "unselected" state and a more saturated color for the "selected" state.

Update 2005-05-19:
Changed the color scheme as indicated above. I think it definitely looks better. The images could probably be even more washed out than they are now, but I'll wait for feedback.

Thursday, May 12, 2005

Search help text

Had a meeting on Wednesday to discuss the site modifications, updates, and new content that's been in the works. One of the things that came out of the meeting is that there are people who are not aware enough of the site they're using to find the search box.

No doubt this is true; some people are simply incapable of "seeing" the search box, which is prominently located in the upper-left of the page. Personally I think we've done all we can. If someone is incapable of finding the search box even after it is described then they are probably also unlikely to understand how to perform a search at all.

The staff, however, felt that we should provide a bit more "direction" in conjunction with the description. Ideas were thrown around about how to do this, such as arrows popping up pointing in the direction of the search tool. Or maybe a "guide" popping up and "leading" the user to the search tool. While these might have been interesting exercises in interactivity I think they'd prove to be of dubious value to the user.

As a compromise I set up a link that uses the ID as an anchor, focuses the input box, and flashes yellow behind the search tool. The ID as anchor and the flashing background only work for newer browsers. I did, however, make the input focus compatible with both new and old browsers (after initially ignoring those browsers, but that's not the way I should be coding). Hopefully those modifications will be sufficient to "highlight" the search.
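The flashing effect boils down to alternating the search tool's background color on a timer. A minimal sketch, with only the pure color-cycling helper shown as runnable code (the element ID, timing, and colors are hypothetical):

```javascript
// Return the background color for each tick of the flash cycle,
// alternating a highlight yellow with the normal (empty) background.
function flashColor(tick) {
    return (tick % 2 === 0) ? '#ffff99' : '';
}

// In the browser this would be driven by a timer, roughly:
// var tick = 0;
// var timer = setInterval(function () {
//     document.getElementById('searchTool').style.backgroundColor =
//         flashColor(tick++);
//     if (tick > 6) clearInterval(timer);
// }, 250);
```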

Tuesday, May 10, 2005

Response.Redirect

I've been using the ASP Response.Redirect() function for a long while now without delving much into its actual functionality. Recently I've been thinking about the various instances where I've used the function and how it relates to page progression (particularly in web-based applications) and published HTTP standards.

Response.Redirect() produces a 302 HTTP status. Here's the relevant standard:
302 Found

The requested resource resides temporarily under a different URI. Since the redirection might be altered on occasion, the client SHOULD continue to use the Request-URI for future requests. This response is only cacheable if indicated by a Cache-Control or Expires header field.
This defines a 302 status as a temporary redirect. This is appropriate for many of the circumstances where I've used the redirect function. However, reading the range of possible 3xx status codes reveals others that are more appropriate in certain circumstances.
301 Moved Permanently

The requested resource has been assigned a new permanent URI and any future references to this resource SHOULD use one of the returned URIs. Clients with link editing capabilities ought to automatically re-link references to the Request-URI to one or more of the new references returned by the server, where possible. This response is cacheable unless indicated otherwise.
This defines a 301 status as a permanent redirect, i.e. the page has been moved. This would probably be appropriate for my custom 404 script which redirects for certain URLs. It would probably also be a preferable response for search engine spiders, rather than the current 404.
303 See Other

The response to the request can be found under a different URI and SHOULD be retrieved using a GET method on that resource. This method exists primarily to allow the output of a POST-activated script to redirect the user agent to a selected resource. The new URI is not a substitute reference for the originally requested resource. The 303 response MUST NOT be cached, but the response to the second (redirected) request might be cacheable.
This defines a 303 status that indicates a response can be found at another page. Basically it tells the browser to fetch the indicated page automatically. The page that generates this status is not placed in the browser's history. I think this would be a better choice for web applications (such as IERI) than the Response.Redirect() function.

This response is new with HTTP 1.1, meaning that HTTP 1.0-compliant browsers will not treat this status in the way defined by the spec. Testing with Netscape Navigator 4.x bears this out. Rather than fetch the new page, NN4 attempts to render the response content. Body content redirecting to the new page is necessary for backwards compatibility.

The ASP to define a 3xx code other than that provided by Response.Redirect() is as follows:

<%@ Language=VBScript %>

<%
Response.Status="3xx STATUS_MSG"
Response.AddHeader "Location", "NEW_LOCATION"
%>

<html>
<head>
<title>Object moved</title>
</head>
<body>
<h1>Object Moved</h1>
<p>This object may be found <a href="NEW_LOCATION">here</a>.</p>
</body>
</html>
Where 3xx is the status code, STATUS_MSG is the message, and NEW_LOCATION is the redirect URL.

Monday, May 09, 2005

Site reorganization II

A quick follow-up to my earlier post.

We've completed the help text for the AAAS search engine. I must admit I found the process of creating the text to be quite tedious. That's not to say that it wasn't worthwhile, though. Working with JBJ, who is probably the least technically inclined among the staff, proved to be most beneficial when refining the text. I believe we were able to produce something that is both clear and concise and will benefit our intended audience the most. I doubt I could have done as well with any other staff member. The combined talents of the techno-geek and the techno-novice appear to mesh fairly well for this type of project. Whether or not we'll see any kind of return on the time investment remains to be seen.

After a few meetings with JBJ and FM we decided to go ahead and make the page-level survey available on all pages rather than only a few, selected pages. The survey won't be presented every time a user loads a page, however, but randomly. We hope that random appearance of the survey will help prevent it from entering the user's "blind spot."

I'm controlling the appearance through a combination of cookies and javascript. SSI included files are not capable of setting a cookie, which is why I decided to use javascript. The only drawback to this method is that if the user does not have javascript enabled the survey will appear on every page.
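A minimal sketch of the decision logic, assuming a hypothetical cookie name ("surveyTaken") and a 1-in-4 display frequency (the real values may differ). The cookie string and random number are passed in so the logic itself stays testable:

```javascript
// Check whether a named cookie appears in a document.cookie-style string.
function hasCookie(cookieString, name) {
    return cookieString.indexOf(name + '=') !== -1;
}

// Decide whether to show the survey on this page load: never show it to
// someone who has already taken it, and otherwise show it only on a
// random fraction of page loads so it doesn't become "wallpaper".
function shouldShowSurvey(cookieString, rand) {
    if (hasCookie(cookieString, 'surveyTaken')) {
        return false;
    }
    return rand < 0.25;
}
```

In the page this would be called as shouldShowSurvey(document.cookie, Math.random()).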

I still need to go through and choose the pages that will not be included as part of the survey.

The site-level survey has changed little. The current thought is to link to it from the home page expando banner. I also provided a link from the page-level survey "thank you" message.

I'll need to spend part of this week documenting the survey code (my current policy for new development).

Checking the locks

There were quite a few vulnerability exploit attempts on our server. Fortunately, the attempts failed for a number of reasons.
  1. We do a fairly good job of maintaining the security of our server software and applications.
  2. The attempts appeared to be the work of a worm that attacked a software program called ModernBill (which we do not use).
  3. The exploit used a URL which, even if we had the application installed, would not have worked.
  4. ModernBill is a PHP-based application and we do not have PHP installed.
All in all a pretty amateurish exploit attempt. I notified a few organizations from which the exploit attempts originated, but quickly realized that the sources of the attempts seemed to be ISPs and hosting companies that utilized the software. The likely response from the owner organizations (as determined from either ARIN or domain registration) would be fairly ineffective, and I've all but given up on contacting the larger companies.

Thursday, May 05, 2005

Web Site Usage Analysis

One of the lines in the 2005/04/26 log file causes WebTrends to crash. Not sure why, but I've commented it out with a pound sign (#).

2005-04-26 15:18:19 200.201.0.30 - W3SVC12 PROJECT2061 198.151.218.130 80 GET /esp/tools/benchol/ch2/ch2.htm - 200 0 58973 582 0 HTTP/1.0 www.project2061.org Mozilla/4.0+(compatible;+MSIE+5.5;+Windows+NT+4.0;+YComp+5.0.0.0) - http://www.google.com/search?hl=pt&q=tipos+de+problemas+matem%C3%A1ticos&btnG=Pesquisar&lr=

Wednesday, May 04, 2005

Mass Mailing

The Spring 2005 issue of 2061 Today was sent out Tuesday 3 May, 2005, at 4:33:17 PM EST to 1145 recipients.

Tuesday, April 26, 2005

Site reorganization

A few words about the work I'm doing for the reorganization.

We're in the process of evaluating the Web site's usability. I already have some improvements in mind that I hope will help the site's navigability significantly. The main problem, as I see it, is the abstraction of our content from its subject matter. Most of what's on the site is presented individually, with little connection to its subject matter. I want to change that.

There are two items on which we're working in the meantime. One, a document detailing how to use the AAAS search to find content. Two, site- and page-level surveys to find out why people are visiting the site and if they are finding the content they are seeking. The decision to prioritize these items was made by the Web Oversight Committee. The survey is pretty self-explanatory, but based on past experience I do not think we'll get the level of participation needed to guide us. The search help, on the other hand, is more in response to user complaints. While our content is not easily navigable, the new AAAS search is more than capable of helping a user find useful information. Our near-term goal while we work on making our content more navigable is to try and direct people to the search. While I'd rather address the site's shortcomings directly, this will likely prove to be at least somewhat helpful in the meantime.

In preparation for the more fundamental changes to the Web site's structure and navigation I've spent the last four days moving files (and ensuring that no links were broken in the process). I've eliminated some of the directories and consolidated a lot of the files into fewer locations. One of the problems with the current structure is that files are not grouped logically. I've tried to remove extraneous structure that didn't provide any additional value. There's still some more work to do in this area, but the updates should be smaller and more related to navigational modifications.

In regards to these structural changes, I've come to at least one conclusion ... Dreamweaver should not be used to manage a large site consisting of thousands of files. The process has been painful with regular program crashes, connectivity losses (even after changing to file sharing from FTP), and long delays as DW tries to determine what links are affected by a file move. I don't think it would have been a problem if I was working with a few files here and there, but we're talking thousands of files. I probably should have moved the files in small groups ... for some reason thoughts like that always come to me too late to be of use.

I've seen posts from Macromedia personnel stating that for large sites a server-oriented solution would be better than using DW for templating and file management. Unfortunately I'd have to agree.

It'd probably be a good idea to look into using more includes and such to simplify the link cache, but this has problems of its own. I'll have to consider what options we have as I work on restructuring the site over the next few months.

(I guess I should mention that changing to a file share for the remote site does improve the transfer process considerably. DW still has occasional connectivity problems, however, when a large number of files are being moved.)

Monday, April 18, 2005

Trials and Tribulations ...

... with SSI development resolved.

Whew, nothing like a little back-of-the-envelope note-taking to help solve a problem. On the way home on Friday I was thinking more about the issues I had been experiencing in trying to get the survey code to work regardless of whether the user accesses HTML- or ASP-based pages. I came up with a pretty good solution. It's a little bit of hackery, but we take what we can get sometimes.

Now in addition to the #exec SSI directive for HTML-based pages I have added some ASP code which executes the survey script in the case of the user accessing an ASP-based page. A liberal dose of commenting helps ensure that the browser doesn't display any of the underlying code and mess things up. Here's what's in the menu include file:
<!--#exec cgi="/includes/surveypage.asp"-->
<!--
<%Server.Execute("/includes/surveypage.asp")%>
<!-- -->
The ASP file also needed a minor modification: I added an open/close comment block at the top of the output. A little more detail ...

In the case of HTML-based pages we need to use the #exec SSI directive to execute the survey script. In the case of ASP-based pages the #exec is worthless, so we add an ASP block that uses the Server.Execute statement to execute the survey script. On an HTML-based page the ASP code is sent to the browser and would show up as regular text; the comment tags surrounding the code ensure that the browser does not display it.

On ASP-based pages, the #exec is sent to the browser as-is and treated as a comment block. The ASP code executes and displays the survey. I included the comment tags at the top of the ASP output so that the initial comment block is closed. This ensures that the survey is not contained inside a comment block (which would defeat the purpose of all this hackery). The extra comment open tag in the include file ensures that the closing comment tag at the end of the code block isn't orphaned when a user accesses an ASP-based page. An orphaned closing comment tag would be interpreted as regular text by the browser. Since you can put just about anything inside a comment tag the extra opening tags don't break the code.

One note: before just trying good ol' ASP delimiters I had tried the <script runat="server"> </script> method of running the ASP. This doesn't work very well. The output is placed at the very bottom of the document and not inline where the script tags are located. I'm not sure if this is due to a combination of factors affecting this particular usage or because it's the default output method for script tags. Either way it's something to keep in mind for future development.

Friday, April 15, 2005

Trials and Tribulations ...

... with SSI development (related to my previous post on cross-browser development).

Same project, different problem. I've pretty much completed development of the survey itself, but found myself with another problem in return. The form is included on a page by way of a #exec SSI command. This is for a couple of reasons:
  1. I need to be able to limit the pages on which the survey loads;
  2. I need to be able to prevent someone who has taken the survey from taking it again
Naturally I needed to use a little scripting to accomplish the above goals. Since I have SSI enabled on the site I figured an SSI #exec would be the easiest way of accomplishing my goal. The other way would be to use a combination of an externally referenced javascript and server-side scripting (namely ASP) to dynamically set up the form display. Well, it's nice not to use client-side scripting when not necessary so I went with the SSI.

The setup is like this: pages on the site #include the left-side menu where the survey is placed. This menu does a #exec with a script that writes out the survey (gotta love nested SSI). The problem is with pages that are ASP-based. Since ASP does not support #exec, the survey script is not run and therefore the survey is not loaded. The main problem with this situation is that the supporting structure does not load, which then breaks the script that controls the left-side menu, causing the menu to not display correctly.

<rant> The AAAS template uses a horrible structural setup and complicated javascript, making fixing problems like this a real pain. Some day I'll get around to developing a better template using CSS and the DOM. </rant>

Anyway, one way around the problem is to place the supporting structural markup around the #exec. The survey still won't load correctly on ASP pages, but at least the menu isn't broken. I think it may be the best (and maybe only) option at this point. I don't foresee needing to have the survey appear on any ASP pages, but you never know, so I'd like to find a way to get it working if I can. Additionally, there are all kinds of CSS quirks I have to deal with on top of the other problems.

Mass Mailing

The March/April 2005 issue of 2061 Connections was sent out Friday 15 April, 2005, at 5:20 PM EST to 3620 recipients.

Thursday, April 14, 2005

Trials and Tribulations ...

... in cross-browser development.

Working on a form for a per-page survey (was this easy to find, yadda yadda) to complement a general site survey. One of the (many) problems with the current AAAS site template is that it caters to older browsers such as Navigator 4.x. A truly outdated browser and difficult to develop for on the best of days (especially when compared to the beauty of CSS, JS, and DOM development on Mozilla). So the site uses tables nested in tables something like six layers deep for the main content and navigation. Then the secondary navigation is repositioned using JS. What fun!

So I felt it would be nice to have the survey show up in the left-hand column under the navigation. A fairly obvious location. The problem is that Nav4 doesn't like forms in positioned DIVs. Putting a form inside a positioned DIV pretty much breaks the Nav4 DOM. Not a pretty sight.

Anyway, to get around it I just decided to use a little bit of hackery known as the browser check. I hate to do it because it's a crutch for bad coding, but I was really left with no other choice. Honest! Anyway, I whipped up a quick regular expression to test the browser string against. It's not pretty, but it gets the job done:
Mozilla/4.\d\d? \[..\]
This one required a little more fudging since IE uses the old Mozilla compatibility string (thank you Microsoft). I spent about a day trying to figure out a way to get things working for Nav4, but in the end there's not much you can do. If there's a match the form won't be shown, but I don't feel too bad because our Nav4 userbase is minuscule.
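Wrapped up as a function, the check might look like this (with the literal dots and slashes escaped in the RegExp). Nav4 identifies itself as, e.g., "Mozilla/4.76 [en] (Windows NT 5.0; U)", while IE's compatibility string lacks the "[..]" language part, so it won't match:

```javascript
// True only for Netscape Navigator 4.x user-agent strings, which include
// a "[xx]" language code after the version; IE's "Mozilla/4.0 (compatible;
// MSIE ...)" string has a parenthesis there instead and fails the match.
function isNav4(userAgent) {
    return /Mozilla\/4\.\d\d? \[..\]/.test(userAgent);
}
```

In the page this would be called with navigator.userAgent.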

Monday, April 04, 2005

JavaScript and Anchors

I've been doing a lot of reading on JavaScript relating to various scripts on which I've been working. One thing that's come up that's interesting, annoying, and problematic all at once relates to HTML element IDs and page anchors (named anchors).

Interesting
According to the W3C spec an element ID should also act as a page anchor. (Note: this is based on a small amount of research so my interpretation may be flawed.) This is a very nice feature of the spec that fits in well with my show/hide script. Based on limited testing it appears that modern browsers support this.

Annoying
Older browsers do not support this feature. This means that for legacy browsers (such as Netscape Navigator 4.x) I also have to provide the named anchor. This isn't really a problem, but it would've been nice to be able to just use the element ID.

Problematic
While IE may support this feature it does something related that's a bit annoying. It provides the reverse functionality: every named anchor on a page also receives a matching ID. I'm not sure if this is something that's addressed in the spec. This feature is annoying because it interferes with the scheme I was using for the show/hide JS.

I was hoping to modify the show/hide JS so that implementation would be less involved. Essentially the changes I've made would require only that the show/hide content be contained within a DIV that has a class of "shContent". (Note: I have yet to hammer out how to handle show/hide image cues.) The IE behavior requires a little more care during implementation because the backup anchor tag has to be within the DIV or else the wrong element will be on the receiving end of the show/hide and the script will not work correctly.

In addition to this minor annoyance, I'm not sure how this affects IE's DOM. IE (and other browsers for that matter) appear to function just fine even with duplicate IDs in a document, but it seems like that's something that should cause an exception. This is likely something that could cause problems down the road.


Update 2005-04-27:
I just glanced through the W3C spec and it appears that IE interprets things correctly, ::gasp:: who knew!

Basically, the ID attribute and the name attribute (when applied to an anchor tag) inhabit the same name space. Since the ID attribute must be unique, you can't have a named anchor that matches another element's ID. Additionally, anchors must not match when a case-insensitive comparison is made (another thing I hadn't ever considered).

Though things are functional at present it looks like I'll need to work on making my code a little more kosher with respect to the spec.

Thursday, March 31, 2005

CCMS Web Site

A number of updates have been made to the site. Mostly content, so not much to speak of here.

I did have to modify the show/hide script. There was a bug where div and span tags would be hidden by the script if they were contained within a shContent section. I had to modify the conditional that checked the class type of the tags. It was only checking for one condition and, if that test failed, blindly applying the opposite. I changed it so that the conditional tested for both possible states.

Regarding that same conditional, I should change it so that the class value is searched for (e.g. using indexOf) rather than doing an outright equality comparison. Something to keep in mind for future updates.
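The suggested change might look something like this (hasClass is a hypothetical helper name, not part of the existing script):

```javascript
// Search the element's class attribute for the class name instead of an
// outright equality comparison, so elements carrying multiple classes
// (e.g. class="shContent highlight") still match.
function hasClass(classAttr, name) {
    return (classAttr || '').indexOf(name) !== -1;
}
```

One caveat with a plain indexOf search: a class like "shContentSpecial" would also match, so an exact token comparison may eventually be needed.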

RSL:PD Online

Going up tonight.

Wednesday, March 30, 2005

Prototyping

Spent two more days working on the prototype for FM. Mostly cosmetic changes and fixes, but a few JS functionality fixes and additions as well.

One problem was that on documents that use the show/hide JS functionality a link from an external page to a page anchor would not work correctly. The reason was that the page was being loaded and the anchor location noted and jumped to prior to the implementation of the show/hide functionality. So the browser scrolls to the location of the anchor before the show/hide sections are hidden. Once the show/hide sections are hidden the document length changes and the anchor will change position. However, the browser itself has finished navigation and so the anchor will move beyond the top of the screen.

I overcame the problem by using the findYPos function from PPK's QuirksMode Web site to locate the anchor after the page has finished initializing and then using the window object's scroll function to put it into view.
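For reference, the findYPos technique amounts to walking the offsetParent chain and summing offsetTop values; a sketch (plain objects stand in for DOM elements here, and the window.scroll call is shown only as a comment since it needs a browser):

```javascript
// Compute an element's vertical position on the page by accumulating
// offsetTop up through the offsetParent chain (PPK's QuirksMode approach).
function findYPos(el) {
    var y = 0;
    while (el) {
        y += el.offsetTop;
        el = el.offsetParent;
    }
    return y;
}

// After the page initializes, the result feeds the scroll, roughly:
// window.scroll(0, findYPos(document.getElementById(anchorName)));
```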

One other modification deserves mention, if for no other reason than that the functionality could prove useful in the future. I needed a method of having a show/hide section be shown if a user visits the page via a URL that has an anchor pointing to the section. I had neglected to make the show/hide code more modular and so would have had to rewrite it in order to accommodate this. Rather than spend the time doing that I found a method of simulating a user click on the link associated with the section in question. Now when the page loads initially the script executes the following code, which simulates the link click:
// IE supports calling click() on the link directly.
if (sh_bw.ie) {
    objTags[i].click();
}
// Gecko (and other DOM browsers) need a synthesized mouse event.
if (sh_bw.dom && !sh_bw.ie) {
    var e = document.createEvent('MouseEvents');
    e.initMouseEvent('click', true, true, document.defaultView,
        1, 0, 0, 0, 0, false, false, false, false, 0, null);
    objTags[i].dispatchEvent(e);
}
objTags is the result of a document.getElementsByTagName('a') call. The first part of the code is the IE-specific way to perform the click (Does IE also support the standard code? I don't know.). The second part is the DOM method supported by Gecko, which I found via Google Groups (thanks Mike W.).
