Tuesday, November 11, 2008

IIS FTP for multiple users

If you ever get stuck granting a specific user access to only a specific folder using IIS FTP, read Mastering IIS FTP - Part 1 - Redirecting Users. The gist that works for me: create an FTP virtual directory whose name matches the username, and IIS will pick it up and redirect that user to the folder the virtual directory points to.

Friday, October 31, 2008

Image rotating in Drupal

Here's the poor man's image rotation approach for a Drupal site (this simple technique would work for other sites as well, provided you can generate a random number between X and Y). It is a very simple CSS-based approach with some random number generation to pick a different CSS class name for the header image.

NOTE: I'm displaying images as a background of a #header DIV.

Place some images inside your current Drupal theme's images subfolder (that should be /[drupal root]/sites/all/themes/[your-theme-name]/images/headers/). I've placed mine in the headers subfolder where I keep all header images.

It's probably a good idea to name them similarly - header1.jpg, header2.jpg, etc.

Edit your stylesheet for the site and add one rule per image you have in the images/headers/ subfolder:
#header.header-1 { background: url(images/headers/header1.jpg) no-repeat; }
#header.header-2 { background: url(images/headers/header2.jpg) no-repeat; }
/* ... add more with the same pattern */
Edit your page.tpl.php and generate a dynamic class name for the #header DIV (which displays the header image) using a random number generator. The resulting code would be:

<div id="wrapper">
<?php
// Pick a random number for the header class (PHP 4-era style;
// on PHP 4.2+ the explicit srand() call is no longer required)
srand((double) microtime() * 1000000);
$random_number = rand(1, 10);
?>
<!-- start #header -->
<div id="header" class="header-<?php echo $random_number; ?>">

NOTE: I'm using rand(1,10) to create random numbers between 1 and 10. Adjust as appropriate for your setup.

That's it. When the page loads, the #header DIV gets class="header-1" through class="header-10" in a, hopefully, random manner.

Tuesday, October 21, 2008

ASP.NET 2.0 useful formatting tips

This post applies to ASP.NET 2.0, using VB.NET. It contains some useful tips on dealing with data binding syntax and formatting of data.

Conditional boolean field formatting in a GridView
<asp:Label ID="Label3" runat="server"
Text='<%# IIf(CBool(DataBinder.Eval(Container.DataItem, "hasPassedFlag")), "Pass", "Fail") %>' />
Parsing decimal numbers before SqlDataSource fires update/insert

Sometimes the UI shows a number in ###.## format but the underlying culture info expects ###,## decimals. If left untouched, an entered ###.## value gets interpreted as a much larger number by the culture that expects ###,## decimals.

To avoid problems, intercept the inserting/updating event on the form view and manipulate the decimal value there, setting the appropriate value into the SqlDataSource parameters.

Sample definition of the SqlDataSource with its parameters:
<asp:SqlDataSource runat="server" ... >
  <UpdateParameters>
    <asp:FormParameter FormField="fldAmount" Name="fldAmount" />
  </UpdateParameters>
  <!-- ... similar for <InsertParameters> -->
</asp:SqlDataSource>
FormView uses the above SqlDataSource as the DataSourceID and contains a TextBox bound to the fldAmount:
<asp:TextBox Text='<%# Bind("fldAmount") %>' ID="fldAmount" runat="server"></asp:TextBox>
In the code behind, handle the ItemInserting/ItemUpdating events of the FormView. First, deal with ItemInserting:
Protected Sub form1_ItemInserting(ByVal sender As Object, ByVal e As System.Web.UI.WebControls.FormViewInsertEventArgs) Handles form1.ItemInserting
    Dim tmp As Decimal
    tmp = Decimal.Parse(e.Values.Item("fldAmount"), System.Globalization.CultureInfo.InvariantCulture)
    e.Values.Item("fldAmount") = tmp
End Sub
Similarly for ItemUpdating (notice the e.NewValues):
Protected Sub form1_ItemUpdating(ByVal sender As Object, ByVal e As System.Web.UI.WebControls.FormViewUpdateEventArgs) Handles form1.ItemUpdating
    Dim tmp As Decimal
    tmp = Decimal.Parse(e.NewValues.Item("fldAmount"), System.Globalization.CultureInfo.InvariantCulture)
    e.NewValues.Item("fldAmount") = tmp
End Sub
That should do the trick. The form value for fldAmount gets parsed appropriately and then, via the data binding already in place, is properly updated in the database.

Thursday, October 2, 2008

Recipe 49 gotcha (from Advanced Rails Recipes - 84 New Ways...)

Advanced Rails Recipes suggests in recipe #49 avoiding explicit IDs for fixtures when you're trying to express has_and_belongs_to_many relationships - instead you just refer to the name of the instance in one of the fixtures. Looks wonderful.

I had a number of fixtures already and wanted to introduce this by adding a completely new fixture file that specifies the relationship as suggested. The old fixture files I modified by adding the relationship reference.

For example, I had a simple users.yml which had:
user1:
  id: 1
  name: jack

I have has_and_belongs_to_many :roles on the user. My roles.yml fixture was:
role1:
  id: 1
  name: role1
So I tried adding a reference to role1 in my user1 like so:
user1:
  id: 1
  name: jack
  roles: role1
The test failed when I tried verifying that user1 contains role1 - it appeared that the connection was never made. I did notice in the database that records were added to the roles, users and roles_users tables, but (!) the roles_users.role_id value was a weird big number, which is obviously why the user did not appear to get the required role.

To fix this, just remove the ID from those instances inside your fixture files that you'll be "connecting" through that nicer named-reference approach.
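With the IDs removed, the fixtures from the example above would end up looking roughly like this (a sketch assuming the fixture keys user1 and role1 used above; Rails derives stable IDs from the fixture names):

```yaml
# users.yml - no id: line; the named reference does the connecting
user1:
  name: jack
  roles: role1

# roles.yml - no id: line here either
role1:
  name: role1
```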

undefined method url_for in rails functional test

When attempting to use the url_for(options) method (which works fine in controllers and views) inside a functional test (Rails 2.1.1) I'd get an 'undefined method url_for...' exception. This was a standard functional test, so I thought these things would just be there.

To fix it, inside your functional test class definition do something like this:
class GroupControllerTest < Test::Unit::TestCase
  # Note: to make sure url_for works in a functional test, include the two modules below!
  include ActionView::Helpers::UrlHelper
  include ActionView::Helpers::TagHelper
  # ... tests ...
end
I wonder if there's a nicer way to do this?

Wednesday, October 1, 2008

Issues with purging Client variables data store in ColdFusion

In short: CFMX does not seem to reliably purge client variables from database storage. You'll probably need to purge data from these tables manually to avoid odd application errors (if your application uses client variables).

Here's the long story:

We have been using client variables on our CFMX7 project for quite some time. They are stored in a database (MSSQL), and CF admin is configured to purge them at 48-hour intervals. Since it worked fine for many months, I never bothered to check whether the purge actually happens - until we encountered some rather weird problems recently.

The basic issue was that emails generated by CF would no longer be sent, even though they were running just fine a day before. Looking at the exception.log file, I noticed lots of SMTP authentication problems of some sort. The mail sending process reads client-scoped variables for the username/password used for sending (they are read from the application database and stored in client variables while the process runs). I verified that the mail server and the application settings were all good - proper usernames, passwords and the rest of it.

The application's log file nicely says that the emails are generated and queued for sending, while the mail server complains that the sending attempt is not authenticated and that we need to reconfigure our client application.

Finally, when I looked at the [CFMX ROOT]/Mail/Undelivr folder where unsent mails are stored, I spotted (at the top of the mail file) the username+password+mail server string that typically looks like: username:password@servername:port. In the case of undelivered mails they were showing some really old email settings + password (I later verified that the password changed for this other email account which would explain why sending stuff via that email account would not be properly authenticated).

The only place these values come from is the client variables, and it dawned on me that we could be reading some old client variables once I saw that the database tables storing them held around 90,000 rows. It looked like no one was purging the database, and I'm guessing the unique identifier (CFTOKEN) got duplicated at a later time and ended up reading values from way back.

After deleting all the records from the client database store things started working again.
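A manual purge can be a plain SQL delete against the client variable tables. A sketch, assuming the default CDATA/CGLOBAL table names that CF creates for database client storage (back up first, and adjust the cutoff to match your purge interval):

```sql
-- Remove client variable records not visited in the last 2 days (MSSQL syntax)
DELETE FROM CDATA
WHERE cfid IN (SELECT cfid FROM CGLOBAL
               WHERE lvisit < DATEADD(day, -2, GETDATE()));

DELETE FROM CGLOBAL
WHERE lvisit < DATEADD(day, -2, GETDATE());
```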

Monday, September 29, 2008

Verity K2 server hangs on CFMX

I came across a situation where submitting new content for reindexing or removal from a search collection simply does nothing. The process runs forever and there's no activity by k2. I'm using CFMX 7.0.2.

When this happens, check the folder called trans inside your search collection folder. It contains a file called data.trn in which k2 writes the commands it is supposed to execute - I think it is sort of a batch file of all commands issued for a given collection.

A typical example of what it contains:
LAST LOGCHECK "-989436216"
LAST CLEAN "-989436216"
I noticed that from time to time I'd get weird-looking file writing/permission errors in the various k2 log files (found under the [VERITY ROOT]/Data/services/ColdFusionK2_indexserver1/ folder). Whenever these happen, data.trn still shows the commands that were issued earlier. As more commands are issued the file keeps growing, but the real problem is that the first command that caused the failure is probably still there and for whatever reason cannot be finished.

So what to do? The easiest thing is to simply edit the file and remove all the command lines added after the header lines shown in the example above - that's what a 'clean' data.trn contains. I do that after turning k2 off in the services (just in case). After this, the indexing process tends to go back to normal.

Thursday, September 18, 2008

Upgrading to Rails 2.x

Here's a short (?) list of issues encountered when upgrading from Rails 1.2.6 to Rails 2.x (in my case it was Rails 2.1.1).

  • First, run the deprecation rake task (place it in your lib/tasks as deprecated.rake and then run rake deprecated). It should give you a few things to change.
  • Change @request, @session[], @params[] to request, session[], params[] - i.e. remove the @ sign in front of them.
  • Change <%= start_form_tag ... %> to <% form_tag :action => '...' do %>
  • Change <%= end_form_tag %> (where the above start_form_tag appears) to <% end %>
  • Change find_first(["..."...]) to find(:first, :conditions => ["...",...]) (Note: I believe rake deprecated did not catch all of these initially!)
  • Remove model directive from your controllers
  • Follow the rake task deprecation list until all is done (I did all except the paginator at first since I wasn't sure what to do)
  • Install rails with dependencies (gem install -y rails). I got Rails 2.1.1.
  • Update gem (gem update --system)
  • In your Rails application root, run rake rails:update. This updates the config files - namely config/boot.rb - as well as the public/javascripts files (prototype.js is newer after the upgrade).
  • Update config/environment.rb change RAILS_GEM_VERSION = '1.2.6' unless defined? RAILS_GEM_VERSION to RAILS_GEM_VERSION = '2.1.1' unless defined? RAILS_GEM_VERSION (or whatever your Rails 2.x version is)
  • If you used observers in your controllers remove them from there (look for observer :your-model-name_observer in your controllers)
  • Add observers into config/environment.rb (inside Rails::Initializer.run do |config| section add config.active_record.observers = :your-model-name_observer)
  • Update the acts_as_versioned plugin with the latest from git (without the update this plugin apparently completely breaks the ActiveRecord setters and you'd get NoMethodError on normal model object setters, especially with the ajax in-place editor)
  • Add config.action_controller.session = { :session_key => "_your-app_session_", :secret => "random security string" } to the same config/environment.rb section as above. You can generate a big nice security hash string by doing rake secret.
  • I'm using old paginator (found in your typical Rails book) so I installed the classic_pagination plugin to make it work again (yes, yes, will replace it once I get the app running again)
  • I had a patch of some sort for in_place_editing (sorry, can't remember where I got it or what it was, but I know it was actually useful - I think relating to editing blank/empty data). Remove that altogether (mine was in my lib/extensions.rb - things like ActionView::Helpers::JavaScriptMacrosHelper.class_eval do ...).
  • Install super_inplace_controls plugin (ruby script/plugin install http://super-inplace-controls.googlecode.com/svn/trunk/super_inplace_controls) as a replacement for your normal in_place_editor. Add to your layout file: <%= stylesheet_link_tag "in_place_styles" %> as well as somewhere inside the content area of the layout file a placeholder div for errors with ID inplace_error_div.
  • Change views which used the old style in_place_editor's <%= in_place_editor_field :rawdata, :col1 %> with <%= in_place_text_field :rawdata, :col1 %>
  • Leave the in_place_edit_for :model, :field directives in your controller as they were
  • I changed ActionMailer::Base.server_settings = { ... in config/environment.rb to ActionMailer::Base.smtp_settings = { ...
  • One last note: I have other applications still running on Rails 1.2.x, so make sure you do NOT remove the old gems they depend on. I believe running gem cleanup would remove them, so don't run it unless you want to jump into fixing all the other applications right away.
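Pulling the config-related steps above together, the relevant part of config/environment.rb after the upgrade would look roughly like this (app name, observer name and the secret are placeholders - generate a real secret with rake secret):

```ruby
# config/environment.rb (Rails 2.1.x) - sketch of the upgrade-related changes
RAILS_GEM_VERSION = '2.1.1' unless defined? RAILS_GEM_VERSION

Rails::Initializer.run do |config|
  # Observers moved out of the controllers and into the initializer
  config.active_record.observers = :my_model_observer

  # The cookie session store now needs an app-specific key and a secret
  config.action_controller.session = {
    :session_key => '_my_app_session',
    :secret      => 'paste-the-output-of-rake-secret-here'
  }
end
```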
Wouldn't it be nice if upgrades were not a big deal/problem?

I later discovered a problem with mongrel - it simply won't run as a Windows service (I'm using mongrel_service for this). The error shown is something along the lines of:
c:/ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:27:in `gem_original_require': no such file to load -- c:/ruby/
lib/ruby/gems/1.8/gems/mongrel_service-0.3.4-x86-mswin32/lib/mongrel_service/init.rb (MissingSourceFile)
from c:/ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:27:in `require'
from c:/ruby/lib/ruby/gems/1.8/gems/activesupport-2.1.1/lib/active_support/dependencies.rb:510:in `require'
from c:/ruby/lib/ruby/gems/1.8/gems/activesupport-2.1.1/lib/active_support/dependencies.rb:355:in `new_constants
from c:/ruby/lib/ruby/gems/1.8/gems/activesupport-2.1.1/lib/active_support/dependencies.rb:510:in `require'
from c:/ruby/lib/ruby/gems/1.8/gems/gem_plugin-0.2.3/lib/gem_plugin.rb:134:in `load'
from c:/ruby/lib/ruby/site_ruby/1.8/rubygems/source_index.rb:203:in `each'
from c:/ruby/lib/ruby/site_ruby/1.8/rubygems/source_index.rb:203:in `each'
from c:/ruby/lib/ruby/gems/1.8/gems/gem_plugin-0.2.3/lib/gem_plugin.rb:112:in `load'
... 11 levels...
from c:/ruby/lib/ruby/gems/1.8/gems/rails-2.1.1/lib/commands/server.rb:39
from c:/ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:27:in `gem_original_require'
from c:/ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:27:in `require'
Turns out the fix is pretty straightforward: change the gem folder name of mongrel_service (should be under your [ruby root]/lib/ruby/gems/1.8/gems/) from mongrel_service-0.3.4-i386-mswin32 to mongrel_service-0.3.4-x86-mswin32 (I suppose your version could be different). Thanks for the tip from this post.
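The rename can be sketched like this (shown POSIX-style in a scratch directory for illustration; on Windows you'd cd to [ruby root]/lib/ruby/gems/1.8/gems/ and use ren, and your version suffix may differ):

```shell
# Rename the gem folder so mongrel_service's init.rb is found where it's expected
cd "$(mktemp -d)"
mkdir mongrel_service-0.3.4-i386-mswin32   # stands in for the real gem folder
mv mongrel_service-0.3.4-i386-mswin32 mongrel_service-0.3.4-x86-mswin32
ls -d mongrel_service-0.3.4-x86-mswin32    # the renamed folder now exists
```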

Saturday, September 13, 2008

VS2005 web site publishing problems

When trying to do a Build > Publish Web Site from VS2005 of a file-based (!) Web Site project I'd get the following error: "You must choose a publish location that is different from the source Web application".

My web site project is called HRWeb. It's file-based, which means the source folder sits somewhere other than IIS's wwwroot/HRWeb. In fact, when opening the publish dialog box, VS2005 pre-fills a default location pointing to a subfolder of where the solution file sits (something like /aspprecompile).

I wanted to publish the project to IIS on my machine, so I tried http://localhost/HRWeb and got the above message. I did a similar thing elsewhere and managed to publish to a remote server with the same /HRWeb path portion, so it's only an issue on the local machine.

My work-around was to make the publishing directory something (anything) else, as long as it's not under /HRWeb (putting HRWeb2, for example, does not work). So I set the publishing location to http://localhost/HR/ and that did the trick. Obviously I then had to manually copy the files from that directory into the one I wanted.

After doing this there was another problem: accessing one of the sub-pages would show an error such as "BC30456: 'Theme' is not a member of ...". Please note that the application works just fine while developing with VS2005's built-in web server.

To get around this I went back to publishing to a local folder on the file system first and then manually moving the files over to IIS. I tried the options on the publishing dialog ("Allow this precompiled site to be updatable" and "Use fixed naming and single page assemblies") but they made no difference - the error would persist.

All these deployment troubles came in when my client wanted to publish to the remote server from VS2005 (I was hoping he'd end up doing an installation via the installer, or at least via the NAnt script that I had). Which now prompts me to go and figure out how I can get NAnt to deploy things remotely instead of assuming a localhost deployment.

Friday, September 12, 2008

Rails and the plugin trap

I'm a big fan of Ruby on Rails (RoR). I started doing some concrete projects with RoR a few months back and it's been a real joy (well, almost all of the time).

With my current project I am trying to set up an access control list (ACL) security environment. First thing to do: check the plugins available. True enough, there were a few out there covering the security aspects. Sadly, half seem to require Rails 2.0 (I'm still on 1.2.6), a few do not have the object-instance permissions I was looking for, and the remainder were impossible to actually run. So I wasted (?) quite a few hours just finding out about each one and then trying to put it into action. After much frustration I asked myself whether I really need the elaborate functionality allegedly offered by the plugins (both 1.2.6 and 2.0), or whether I could do with something simpler. As always, I could get by with a lot less. So I figured I'd scale down my hopes for now and just implement a simpler security scheme inside the application itself.

What really frustrated me was that I'd spend some time learning about a plugin only to somehow catch the fact that it needs RoR 2.0 (honestly, that really ought to be more visible). The other frustration is when you actually get the thing up, create the tables, etc., and then find your unit tests (pretty much a copy from the plugin's readme file) not working and spitting out some really weird problems... eventually I got tired of it and gave up...

App_Code in VS 2005 and web-related projects

I must be doing something terribly wrong, since my VS2005 web application does not "see" the public enums/classes inside App_Code.

I tried adding an ASP.NET Web Application Project (VB) to an existing solution. I then manually created an App_Code folder and placed in it a file which defines (without a namespace) a class and an enum. No brainer. Then I added a web service called HR.asmx containing:

<%@ WebService Language="VB" CodeBehind="~/App_Code/HR.vb" Class="HR" %>

I've added HR.vb into the App_Code.

Trying to build the web app project fails, complaining it does not see these classes within the App_Code folder. I have another web site project in that solution which builds with the exact same App_Code setup. The obvious differences I can see are:

  • The new one contains a Project file pointing to .vbproj file (no file path)
  • The old one has a "Full Path" and the project solution explorer name actually contains a path on the disk (something like E:\...\blah)
  • The old one contains a section Developer Web Server for project properties.
  • The old one has an option for Property Pages
  • The new one has a Properties (which opens up, among other things, a Web section where deployment via IIS/personal web server are possible)
  • The new one keeps offering a "Convert to web application" option through a right mouse click
  • The old has Build Web Site... and Publish Web Site... right-click menu options (vs a normal Build/Rebuild and a Publish... on the new Web app project)
This is what my solution explorer shows. Notice that the icons are slightly different and the name of the first one starts with E:\...

It looks like I've created something other than a "web application" project, since my old project simply does not have the options that I see on the new one.

All this effort was due to a deployment problem I had. My customer happily uses VS2005 for project deployments. I've been trying to get my NAnt/CruiseControl to build and deploy (it works on my end, of course); however, after talking to the customer I wanted to get VS2005 to do its "Publish Website" option, thinking that it really ought to just work. It didn't, and now I see there's something I did not do properly.

Any ideas/comments?

Tuesday, July 22, 2008

Fighting / finding SQL injection attacks

There's a first of everything.

One of my client's websites has been successfully hacked: an SQL injection using an ASCII-encoded binary string containing SQL statements. I think we were lucky, as the script appended a reference to a javascript file to various text columns. Luckily (?) for us, these additional markup elements ended up completely breaking the site's design, and after some investigation it became obvious something was going on.

A useful tool that helped me identify where the problem was (actually, two problems, but only one got exploited!) is Scrawlr, which helped pin-point the vulnerable piece of code. Wish I knew about this tool before. Got it via (of all things) the SQL Injection article on Wikipedia.

And the obvious advice: do pay attention and use SQL parameters in your SQL queries rather than dumping URL/form parameters directly into them. Might save a few hours.
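To illustrate the difference, here is a toy Ruby sketch (not tied to any particular database driver - the function names are made up):

```ruby
# Naive interpolation: attacker-controlled input becomes part of the SQL text.
def user_query_unsafe(name)
  "SELECT * FROM users WHERE name = '#{name}'"
end

# Parameterized form: the SQL with a placeholder and the value travel
# separately, so the driver never treats the value as SQL.
def user_query_safe(name)
  ["SELECT * FROM users WHERE name = ?", name]
end

evil = "x'; DROP TABLE users; --"
puts user_query_unsafe(evil)   # the DROP TABLE ends up inside the statement
sql, value = user_query_safe(evil)
puts sql                       # placeholder intact; the value stays plain data
```

In Rails that's the difference between interpolating into :conditions and using the ["... = ?", value] array form.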

Helps to have IIS/Apache logs available.

Saturday, May 24, 2008

ColdFusion Client Variables Woes

I've been faced with ColdFusion instability on a project for some time. Frequent, daily restarts of ColdFusion have been happening (for months) and I never could figure out why. Recently I noticed crash dumps in CFMX-ROOT/runtime/bin which complained a lot about client variable persistence storage (from what I gathered in the stack traces).

The site uses MS Access as the database for storing client variables. I don't think there are lots of client variables in the application itself, but they are used regularly throughout. Anyways, client storage was earlier set to the registry and the registry size went through the roof (took me some time to dig out a post on cleaning the registry manually). I changed it to database storage (don't ask why Access is used) and it worked fine after that (sort of)... I think it did.

Long story short: I tried using cookies as the client variable storage. No more crashes. It's probably too early to tell, but I tried switching back to database storage and thrashing the application a bit to quickly reproduce another server crash; with cookies the same workload did not crash the server. So perhaps there's something there.

This was CFMX 7.0.2 with Win Server 2003.

Sunday, May 4, 2008

Verity K2 and indexing files

For whatever reason, trying to index a fairly large number of Excel files with Verity K2 (CFMX 7.0.2) seems to take forever. The CPU is super-busy and the collection size barely changes even after waiting for more than an hour. On the other hand, indexing smaller directories is a lot faster - 200 small Excel files are done in about 2 minutes or so (the collection is still well below 100,000 documents).

Considering how fast database records were indexed (about 2-3 minutes per 10,000 database rows), it just seemed like something was not right. Anyways, chunking those 20,000+ Excel files into sub-directories brought the total indexing time below 30 minutes. Tip of the day: don't try to index such a large number of Excel sheets in one go; go with smaller directories instead.

Monday, April 28, 2008

CSS conditional comments hacks

It seems impossible to run away from chasing display problems in IE7/6/5 vs Firefox. A wonderful way of dealing with these issues (well, maybe not "wonderful", but it at least seems to work) is to have conditional CSS files depending on the browser. Read all about conditional comments to load per-browser-version specific CSS files.

But there's a problem. I have IE7 on WinXP and I've downloaded an old standalone IE6 from evolt.org. IE6 works fine (behaves as IE6 when troubleshooting issues) but interprets the conditional comments as if it were IE7. In other words, I can't use my local development box to check whether conditional comments and the CSS delivered through them will work. Hopefully I'll sort this out and post an update.

Thursday, April 24, 2008

When your testing environment is not cutting it

I was so surprised the other day when I discovered that, despite the whole kitchen-sink testing setup (NUnit, Selenium, CruiseControl), part of my application was using a fixed connection string in a TableAdapter I'd created with VS2005.

I had a connection string in Properties > Settings (called ConnStr) but never noticed that its fully qualified name was something like ProjectX.My.Settings.ConnStr. When I added a ConnStr to the connectionStrings section of web.config I thought all would be well. Everything appeared to work since I kept testing on the same machine on which I develop, and hence my development database was there.

Anyways, I wanted to see whether an installer would work properly, so I started a VMWare Win 2003 Server instance and discovered that the application was trying to connect to my development database from the server. That was a bummer...

OK, so I admit that one or two of my tests are definitely not up to speed, since they ought to have caught this, but I also discovered what a great thing a completely dedicated testing machine is. Hm, no wonder people keep stressing that.

Thursday, April 17, 2008

Drupal Invalid argument supplied for foreach error

I've been getting this really weird error and could not track it down for some time, but I finally managed to figure out a way to get rid of it, so perhaps others might find this useful too.

Summary: it was caused by a call to the node_load() function passing nothing as an argument. So check if you have code that assumes a node ID is present and calls that function!

Long story:
First of all, my page would show in the central area a warning message like this:

* warning: Invalid argument supplied for foreach() in C:\Program Files\Apache Group\Apache2\htdocs\mcsonline\modules\node\node.module on line 521.
* warning: implode() [function.implode]: Bad arguments. in C:\Program Files\Apache Group\Apache2\htdocs\mcsonline\modules\node\node.module on line 525.
* user warning: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '' at line 1 query: SELECT n.nid, n.vid, n.type, n.status, n.created, n.changed, n.comment, n.promote, n.sticky, r.timestamp AS revision_timestamp, r.title, r.body, r.teaser, r.log, r.format, u.uid, u.name, u.picture, u.data FROM node n INNER JOIN users u ON u.uid = n.uid INNER JOIN node_revisions r ON r.vid = n.vid WHERE in C:\Program Files\Apache Group\Apache2\htdocs\mcsonline\includes\database.mysql.inc on line 172.
Searching around led me to some other people complaining about the error but the major difference is that there was usually a hint in that the error would refer to a specific module. In my case, however, it was talking about the core node module! Not much help there.

First thing to do: try to figure out where the error is coming from. :) OK, I suppose that's an obvious one, but honestly I was stuck for a bit wondering where to start. One easy way is to keep turning off blocks until the error disappears. This led me to a specific block which was actually a view (I'm using a module to display views as blocks, whose name I can't remember).

Next, after realizing there was nothing interesting about the view - it just did a List View of a couple of CCK fields and a title for a user-defined content type - I was off to search elsewhere. Since my memory does not serve me that well, it took a while to remember to check template.php, since I could be overriding the list view (using phptemplate_views_view_list_VIEW-NAME($view, $nodes, $type)). And there it was: a small snippet getting the view in question to render through a template.

Now the really weird thing was that the view was showing the content - I saw the single item I was supposed to see, yet got the warning message as well (hm, now I realize it's a "warning"). So the method did something partially wrong but managed to proceed.

Some context here: a day before, I was removing some CCK fields from content types, including an author field that I removed from the (now problematic) event content type.

In the function that I had there was a line like:
$auth = node_load($targetnode->field_autor[0]['nid']);
I did this regardless of whether the author field was there or not! (hm, and this is supposed to be a small helper method used when rendering list views, so...).

Doing a check that the field is there before calling node_load (e.g. wrapping the call in isset($targetnode->field_autor[0]['nid'])) did the trick! Yay.

Beware of var_dump in Drupal nodes

Just had a scare when my Drupal site started showing weird stuff half-way through printing a view. Turns out I had left a commented-out var_dump($node) in the code, and the contents had an "appropriate" mix of comments and HTML that caused the comment to break and spill over into the page.

Lesson learnt: keep them template files nice and tidy and leave the debug info for the development server. Hm, like that's new anyways? :)

Friday, April 11, 2008

Testing with old browsers

Corporate policies can be a nightmare at times. As much as I understand the hassle of changing browsers and adjusting applications to work with them, I cannot appreciate folks who use IE6, refuse to use IE7 (which is, after all, a better beast than its predecessor) and don't want to start using Firefox.

My development machines are all upgraded to IE7 (still on XP, though) and I now need to test against IE6, which is used exclusively on my current project. Luckily, you can get old browsers from http://browsers.evolt.org/. Installed it (well, unzipped it, really) and it seems to work (showing my messed-up UI as expected). The About box from the Help menu incorrectly displays the version 7 dialog. Ah, well...

(Updated 23 May 2008):
One thing that does not work is the CSS hack through HTML comments. You probably know of being able to conditionally include a CSS file by using an IE-only conditional comment (<!--[if IE 6]> ... <![endif]-->); as noted above, the standalone IE6 build gets these wrong, treating them as IE7.

