Vaadin Changes I Want To See

• Shorten "?restartApplication" URL trick to "?restart"

'restartApplication' is a lot of typing.

• Clear the "?restartApplication" from the URL automatically.

As warned in the Nicolas Fränkel book, after using that URL trick to restart your app you should immediately clear the trick from the URL. Otherwise, the app will restart on every refresh, producing unrealistic behavior.

• The "getTheme" method on "Application" class should return the default theme rather than null.


Extensions in Postgres

Upon installing the new Postgres 9.1, I was surprised and stumped when I went to activate the UUID features, as I’ve done for 8.4 and 9.0.

Using the Mac installers provided by EnterpriseDB, I knew the UUID generation and handling features to be bundled but not activated. In the past I’d learned to open this file:
and copy-paste to execute the pre-written SQL commands defining Postgres functions. For example, this fragment defines a function to generate a Version 1 UUID:

AS '$libdir/uuid-ossp', 'uuid_generate_v1'
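For context, the full statement in that file looks roughly like this. I am reconstructing it from memory, so treat it as a sketch rather than a verbatim copy of the script:

```sql
-- Defines a C-language function backed by the bundled uuid-ossp library.
CREATE OR REPLACE FUNCTION uuid_generate_v1()
RETURNS uuid
AS '$libdir/uuid-ossp', 'uuid_generate_v1'
VOLATILE STRICT LANGUAGE C;
```

The script contains one such statement per UUID function (v1, v3, v4, v5, and so on).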

In 9.1, I could not find that "uuid-ossp.sql" file. In fact, all but one of the files previously in that "contrib" folder are gone in 9.1. I found some similar files in another folder named "extension".

After some googling, I discovered that “extensions” are a major new feature in 9.1. The idea seems to be making it easier to install and track a bunch of functions or other pieces as one collection rather than separate pieces. Makes sense, as we rarely want to install just one single piece.

This Extension feature is new as of 9.1 as seen on the Feature Matrix, and is discussed in the What's New wiki page. The 9.1 press kit provides a definition:
While PostgreSQL has always been extensible, now users can easily create, load, upgrade, and manage any of dozens of database extensions using the EXTENSION database object. The new site PGXN.org offers a repository for contributing and downloading extensions.
To see what extensions are currently installed, try this SQL command:
select * from pg_extension;
On my newly-installed Postgres 9.1, I see 'plpgsql' and 'adminpack' as the only two installed extensions. See the doc.

To see available extensions bundled with your Postgres and ready to be installed, try either of these SQL commands:
  • select * from pg_available_extensions;
  • select pg_available_extensions();
For example, I see “uuid-ossp” as the 46th item.

To install ("load") an extension, use the SQL command CREATE EXTENSION. While that command may seem like a misnomer, I suppose it makes sense if you think of it as creating an object within Postgres' world that represents the content of, or access to, the items in the extension. Anyways, to install the UUID extension, execute this SQL:

CREATE EXTENSION "uuid-ossp";

I found the quote marks to be required, despite the example in the doc.

To avoid an error when re-installing, add "IF NOT EXISTS":

CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
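Putting the pieces together, a typical session in psql or pgAdmin's SQL window might look like this:

```sql
-- What is already installed?
SELECT * FROM pg_extension;

-- What is bundled and available to install?
SELECT * FROM pg_available_extensions;

-- Install; the quote marks are needed because of the hyphen in the name.
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

-- Verify by generating a Version 1 UUID.
SELECT uuid_generate_v1();
```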


Chapter 35 of the Postgres manual explains how extensions are packaged, if you are curious.

An independent repository of extensions can be found at the PostgreSQL Extension Network.

By the way, to be clear about Postgres’ UUID features, let me say:
  • UUID support is built into Postgres by default, in that Postgres stores UUID values efficiently in their 128-bit form. While your database driver may choose to show UUIDs as hex strings, the values are actually stored in Postgres as native bits, not inefficiently as strings. So UUIDs are not a hack or an add-on in Postgres, and may be searched and indexed like any other supported data type. No need to add a function or extension in order to store and retrieve UUID values in Postgres – you get that for free.
  • If you wish to generate any of the various versions of UUID values server-side within Postgres, then you may choose to load the uuid-ossp (OSSP UUID) extension.
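To illustrate the first point, here is a sketch using a hypothetical table (the table and column names are mine, for illustration only); only the last statement requires the extension:

```sql
-- The uuid type is native; storing and indexing UUIDs needs no extension.
CREATE TABLE person_ (
  pkey_ uuid PRIMARY KEY,   -- stored as 128 bits, not as a hex string
  name_ varchar NOT NULL
);

-- Generating a value server-side is what requires the uuid-ossp extension.
INSERT INTO person_ ( pkey_, name_ )
VALUES ( uuid_generate_v1(), 'Alice' );
```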


Using UUID Library in Postgres 9.0 and Earlier

UPDATE: This info applies only to Postgres versions earlier than 9.1, such as 8.4 and 9.0. For 9.1, see my later post on "Extensions in Postgres".

If you are using a version of Postgres installed by the installers provided by EnterpriseDB, then your installation includes a library for generating UUID values. The library is known as uuid-ossp and is described in this doc. Functions such as "uuid_generate_v1()" return the hex string representation of a UUID's 128-bit value.

While this library may be included in your Postgres, it is not enabled by default. To enable it, you need to execute some SQL commands to create some function definitions within your Postgres database. Fortunately, these SQL commands have already been written for you. You just need to execute a provided SQL script. Find that script in a folder such as this one on my Mac:


Either execute that script directly, or copy-paste its contents to pgAdmin's "SQL" (tab) window. If it runs successfully, pgAdmin displays a message saying "Query returned successfully". And you should see a bunch of functions listed under your database's definition, such as YourDatabase > Schemas > public > Functions. You may need to refresh pgAdmin's display by context-clicking on your database in the "Server Groups" listing and choosing the menu item "Refresh".

Actually, in my install of Postgres 9.0.4 I encountered an apparent glitch where the list of functions would not refresh until I added a table. This may be a bug in pgAdmin – I’m not sure.

To try the library, run this SQL:

SELECT uuid_generate_v1();


What's New in Tomcat 7 & Servlet 3

The Apache Tomcat web server and Servlet container has been significantly upgraded in version 7, which now supports the Servlet 3 spec along with many other changes.

For info, check out this video recording of a slideshow presentation, by Mark Thomas, the Tomcat release manager.

An article about the new async processing feature in Servlet 3.

An Introduction to Servlet 3.

The Servlet 3.0 spec (JSR 315).

Jason Hunter seems to no longer be writing his famous “New Features in Servlet X.X” articles. But to see the evolution of Servlet technology, you may want to read his past articles for Servlet 2.5, 2.4, 2.3, 2.2, 2.1. I believe Mr. Hunter was also the author of these Servlet Best Practices, parts 1, 2, and 3. And his article on Servlet Filters.

If you are new to Servlet technology, start with this Wikipedia page.


When Vaadin Seems to Ignore Your Modifications

The first thing you’ll notice when trying the Vaadin 6 tutorial is how little code is needed for an interactive web app. The second thing you’ll notice is that when you modify the source code, nothing happens.

You might add a label to the form, for example, but at runtime the label fails to appear.

This is maddening when the joy of a new app turns into utter frustration. You might even quit Eclipse, restart, run your app, and still find your changes ignored. What’s going on? Well, multiple bugs and features are biting.

Restart App

While developing and testing you may need to restart your Vaadin app. Do this by appending the app's URL in your web browser's address bar with:
?restartApplication
Another way is to append:
?debug
That approach displays a JavaScript window on top of your app’s window with buttons for restarting the app, as well as debugging your window's layout, and other features.

Session Persistence

First, if you are using Tomcat as your web server, you’ve discovered Tomcat's default feature of automatically persisting session objects to disk when it shuts down. In a conventional app, the session has just a few simple values, probably just small pieces of text. So it makes sense to save those to disk, and restore those values to memory when the session is recreated at runtime. With Vaadin, however, the session object holds your entire Vaadin app, including all the windows and their current state. Upon restarting, Tomcat restores your Vaadin app as it was running previously, like magic. Unfortunately, your Vaadin app's "init" method is not run again, so your changes do not take effect. That makes sense: the "init" method is defined to run when the app first begins, and in this case of Tomcat restoring your session, the app is already running, so there is no need to call "init".

The solution is simple: turn off the Tomcat feature of session persistence. Find and edit Tomcat’s "context.xml" file to uncomment the line:
<Manager pathname="" />
The trick is knowing which "context.xml" to edit. Do not edit the one in your Tomcat installation, which would affect all web apps. Instead, copy the "context.xml" from the tomcat/conf folder. Place that copy nested inside your project folder, at WebContent/WEB-INF/context.xml, and edit it there.
Now you'll only be affecting this webapp, and not others.

I suggest turning off session persistence for every Vaadin app you create. Session persistence and restoration of a Vaadin app is a big deal. You should think through ramifications such as how to handle database connections. Admittedly, it would be nice if users found their app in the same configuration as their last work session, but you need to develop and test appropriately.

Missing Pieces

When you run a Vaadin app in Eclipse, you are using the WTP (Web Tools Project) plugin set, which handles getting your Eclipse session to find, configure, and run a web server with your web app. Unfortunately, WTP seems to have “issues”, as we say when speaking politely, when running your web app.

What actually happens is that WTP deploys a copy of your webapp to a hidden folder, and then invokes Tomcat. That works fine the first time. But on successive runs, WTP seems to get confused about what to copy over.

A workaround is to clear your web app from the hidden folder. Look for the hidden folder ".metadata" in your Eclipse workspace folder. Then ".plugins" > "org.eclipse.wst.server.core" > "tmp0" > "wtpwebapps". There you'll find your webapp's folder. You may delete it. But you must also delete this file to avoid confusing Eclipse: workspace/.metadata/.plugins/org.eclipse.wst.server.core/publish/publish0.dat

That workaround is tedious to do repeatedly. Another workaround is to avoid WTP altogether. Some folks on the Vaadin forums talk about running Jetty directly within your project, avoiding WTP entirely. Jetty is built to be embeddable, so this is supposedly pretty easy. But I already know and use Tomcat, and want to be using it for development. And I did not want to learn yet another development arrangement. So I chose not to go that route, though it may be a wise one.

Another workaround is JRebel. Some references to this product on the Vaadin forums led me to this remarkable product. JRebel pulls a few tricks to make development easier, especially web app development. It does seem to solve this problem of missing pieces while running within the IDE. I'm not clear on how JRebel accomplishes this, but it seems to be working well for me.

Hot Swapping Classes

Another frustration of web app development is making changes to classes and then having to shutdown and restart the web server all over again. The Java platform has built-in features for hot-swapping classes while still running the app. But for various reasons, those features do not work well with web-servers.

You can shutdown the web server by clicking in Eclipse's Servers panel, select your web server, and click the red square icon to perform a shutdown. But this gets tedious. And I often found myself with problems that seemed to require quitting and restarting Eclipse as well. Not fun.

Solution: JRebel, again. Swapping fresh classes while running is JRebel's raison d'être. I’ve only begun to work with JRebel for a few days, but it seems to be doing this hot-swap trick very well.

I Save File → Eclipse Re-compiles → JRebel Re-deploys

In fact, it works so well, I did not understand how to use it after installation. I made a change to a class, and then went back to the docs to see how to tell JRebel to compile and deploy it. I could not find any such doc. Frustrated, I switched back to Eclipse. There I noticed a fresh line in the console pane, something about SuchAndSuch class having been deployed. While not believing it, I did a restart of my app using the URL trick mentioned above, and voilà, my changed classes took effect. Apparently all I need to do is save the .java file! Saving triggers Eclipse to automatically re-compile. The re-compile triggers JRebel to re-deploy the changed class to Tomcat running via WTP within Eclipse. My fresh class was installed while Tomcat continued executing. Amazing. JRebel has a plugin for Eclipse that provides this integration.

While not absolutely mandatory for Vaadin work, JRebel certainly makes life easier. They have a free-of-cost “Social” edition for non-commercial purposes while connected on the Internets, so give JRebel a try.

JRebel does not affect deploying your app to production. However, if you want to hot-swap parts of your app in production, LiveRebel is a sibling product designed for that purpose. JRebel is for development, while LiveRebel is for deployment.

Be sure to make the correct settings for JRebel, both when you install JRebel and each time you create a Vaadin project. For JRebel in Eclipse, read sections 2.2, 3.2, and 4.2 of the JRebel Reference Manual. Really quite easy.


To Create a Jar

Creating a .jar file in Java is not as easy as it seems. The flag letters must be in a certain order: the relative order of the 'm' and 'f' flags must match the order of the manifest and jar-file arguments that follow. Unfortunately, I've not found the documentation for that ordering. This tutorial says the flags may be in any order, but not so in my experience.

After some frustrating trial-and-error, I found the following line works.
jar c0mfv META-INF/MANIFEST.MF whatever.jar source_folders_&_files
To break it down:
  • The 'jar' is the name of the command-line tool
  • 'c' means 'create a jar file'
  • '0' (zero) means no compression
  • 'm' means to use an existing manifest file rather than create one. Omit this to have a manifest auto-generated. 
  • 'f' means send the output of this command to a file rather than to standard output (on the command line).
  • 'v' means 'verbose', to describe each step of the process, displayed on the command line. You can read this to verify, for example, that your pre-existing manifest was copied rather than a new one generated.
  • Pass the relative location of your pre-existing manifest file. Omit if you are omitting the 'm' flag.
  • Pass the name of the jar file to be created. Type the '.jar' on the end yourself. 
  • Pass one or more files and folders to be included. Include the top folder of any packages such as "com" if you have a package such as "com.example.myapp.gui".

Change Directory 

Before typing that 'jar' line, change directory ('cd') to the folder containing the top folder of your package. In other words, you don't jar the folder above your source. From the tutorial linked above, if you have this folder hierarchy:

You would 'cd' into "TicTacToe", and then pass 3 items at the end of your jar command, for the .class file and 2 folders.


Time Keeps on Ticking, Ticking, Ticking, Into The Future

Here are a few handy links to Date and Time info.

The core pearl of wisdom:
Timezones are a presentation-layer problem!
Most of your code shouldn't be dealing with timezones or local time.
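In Postgres terms, a small sketch of that advice: store and compute with TIMESTAMP WITH TIME ZONE values, which Postgres adjusts to UTC internally, and apply a zone only when rendering for a user:

```sql
-- A timestamptz value represents a moment on the UTC timeline.
SELECT now();

-- Convert to a zone only at the presentation layer, for display.
SELECT now() AT TIME ZONE 'America/Los_Angeles';
```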


New Adventure Begins: Vaadin

I’m exploring Vaadin, a framework for server-side webapps that require no knowledge of HTML, CSS, DOM, JavaScript, or HTTP. Similar in concept to Real Studio, Web Edition, but built in pure Java. Vaadin is quite promising, as seen in this great demo of their rich collection of GUI widgets, and excellent doc in The Book of Vaadin.

Unfortunately, getting started with Vaadin is almost impossible. After strange initial behavior, and wrestling the Eclipse IDE, I gave up trying weeks ago. I was quite disappointed given how you can have a working interactive web app running in your first 5 or 10 minutes with Real Studio. However, my greatest strength as a developer kicked in: stubbornness. I tried again. And I’ve nearly won this time.

So here are some pointers for other brave souls venturing into the world of web apps.

Tip 1: Use Eclipse

While Eclipse is my least favorite Java IDE, the Vaadin plugin is designed only for Eclipse. Be sure to get this edition, Eclipse IDE for Java EE Developers. In theory, you could use another edition of Eclipse and add enough modules to get the equivalent of the EE edition. I tried, I failed. I already had another edition for other purposes (Swing apps). I added and added modules, but never got it to work. Reading the Vaadin forums led me to many answers asking “Did you start with the Java EE edition of Eclipse?”. So I gave up, replaced my Eclipse, and found some success.

Bonus: I found I could add the WindowBuilder modules to the Java EE edition, thereby giving me the best of both, Swing and web apps.

I am using:

Eclipse Java EE IDE for Web Developers.
Version: Indigo Release
Build id: 20110615-0604
with Mac OS X 10.6.7 on an Intel MacBook Core 2 Duo, 6 gigs of memory, executing in Tomcat 7.0.21 under Apple's implementation of 64-bit Java 1.6.0_26.

Tip 2: Use the Vaadin plugin for Eclipse

While not technically necessary, the plugin is virtually a necessity. The plugin creates a properly configured Vaadin web app project with a Hello World application. The plugin provides other features as well.

Tip 3: Let Eclipse download and install the Vaadin plugin

I naïvely downloaded and installed the plugin myself. That led to problems. Among other things, Eclipse offered to upgrade the plugin later, but could not actually do so. To install the plugin the proper way, let Eclipse do it.

  1. Choose Help > Install New Software. 
  2. In the Work with field, paste the URL as instructed by the Vaadin site.

Tip 4: Install Tomcat before using Vaadin

Download, prepare, and configure Apache Tomcat before you start using Vaadin.

Download: Unzip the Tomcat download. You may put the Tomcat folder anywhere. I put it at the top of my home folder.

Prepare: See another post of mine explaining how to perform the crucial step of running your unzipped Tomcat through the BatChmod app if you are running on a Mac.

Configure: You must tell Eclipse how to work with your Tomcat installation. As I’ve forgotten how at this moment, I'll try to write another post on this topic.

Tip 5: Create your first project as a Vaadin project using the Wizard

In Eclipse, choose File > New > Other. In the Wizards filter field, type: Vaadin. From the list, pick "Vaadin Project". For the most part, go with the default values in the wizard. If you are doing real work, as opposed to the tutorial, change the base package to your own.

Tip 6: Build before running

The resulting project should be ready to run. Except that for some mysterious reason, the wizard's Run configuration fails to first build the project. So to run:
  1. Select the name of the project in the Project Explorer. (Crucial!)
  2. Choose Project > Clean Project (Crucial! Successive runs do not freshen properly unless you clean -- don't ask me why)
  3. Click the Run button in toolbar.
If you are lucky, Tomcat launches, and after several moments your app appears in Eclipse's built-in web browser. Eclipse lets you specify using a real web browser instead. But when I tried Firefox, I got annoyed by Eclipse telling me to close all windows in Firefox or quit Firefox. Hopefully I can solve that problem another day.

Tip 7: Stop Session Persistence

After my first successful run with the Hello World app, I did what any eager student does. I tweaked the app, and ran again. I added a "new java.util.Date" to the app's Hello label. I ran my app again, but no date. I stopped the Tomcat server, and tried to run again. No date. I quit Eclipse and restarted and ran. No date. Arghh!

Turns out to be a feature, not a bug. Unbeknownst to me, by version 7, Tomcat had acquired the default behavior of persisting sessions automatically. So when Tomcat shuts down, any existing Session objects have their values written to disk. When that web app runs again after restarting Tomcat, the sessions are reconstituted in the JVM.

Meanwhile, the Hello World app created by the Vaadin project wizard has its GUI constructed during a call to the Vaadin 'init' method. Turns out, the init method is only called when starting a new session. Since Tomcat was reconstituting the old session on every run, the init method was never being called again. So my edits to the init method never took effect. This tricky Catch-22 cost me hours and hours of frustration and confusion.

To avoid this behavior:
  1. As mentioned in Tip 6 above, always Clean rather than build your project.
  2. Turn off the Tomcat feature of persisting sessions.
To disable that Tomcat feature, install a configuration file.
  1. Locate the "context.xml" file found in the "conf" folder of Tomcat. In my case: /Users/basilbourque/apache-tomcat-7.0.21/conf/context.xml
  2. Paste a copy of that file to the "WEB-INF" folder of your Eclipse project's "WebContent" folder. In my case: /Users/basilbourque/Documents/workspace/myproj/WebContent/WEB-INF/context.xml
  3. Edit that new "context.xml" by uncommenting the line reading: <Manager pathname="" />. Note that the comment itself explains this will disable the session persistence feature.


Starting and Stopping Jetty Web Server

I’ve looked at Jetty a few times in the past but could not even find documentation on how to start or stop the dang thing. Trying to use Vaadin has brought me once again to Jetty, version 8.

I don't know if this is necessary, but after unzipping the downloaded Jetty file, I drag and drop the resulting folder to the BatChmod app. I check all the checkboxes except 'Clear xattrs'.

I finally discovered how to start and stop Jetty on Mac OS X. It's easy. While Apache Tomcat has an explicit start and stop shell script (.sh files), Jetty uses a single shell script for both purposes. You pass the word 'start' or 'stop' without any quotes.

To start Jetty web server:
  1. Locate the shell script: …/YourJettyFolder/bin/jetty.sh
  2. Drag and drop that script file to a terminal window in either Terminal.app or Path Finder.
  3. Type a space and the word 'start'. Press Return key.
/Users/basilbourque/jetty-distribution-8.0.0.v20110901/bin/jetty.sh start

You’ll see something like this:
Starting Jetty: STARTED Jetty Mon Sep  5 16:39:26 PDT 2011

To stop Jetty web server:
  1. Press the Up Arrow key to repeat the previous line at the command line.
  2. Press Backspace to replace the word 'start' with the word 'stop'.
  3. Press Return to execute that line.
You’ll see something like this:
Stopping Jetty: OK

To verify Jetty is running and check its configuration, pass the word 'check' rather than 'start' or 'stop'.

Jetty defaults to port 8080. So to try it, point your web browser to:
http://localhost:8080/
To use port 80, see my other posts on port-forwarding.

I’m working on an Intel MacBook with Core 2 Duo running Mac OS X 10.6.7 with Java version "1.6.0_26" 64-bit.


Adding libraries to Eclipse

Being new to Eclipse, I could not figure out how to add libraries to my project. By libraries, I meant already-compiled jars, such as JodaTime, SLF4J, Postgres JDBC driver, etc. And I want to provide Eclipse with access to those utilities’ source code and JavaDoc if available.

I struggled until I read this article.

Other IDEs have you define the executable jar, its source code jar or folder, and its JavaDoc jar or folder simultaneously, as equal parts of one named library. The Eclipse way is different. Instead of 3 parts to one library, Eclipse establishes a hierarchy: the library (just a name), then one or more executable jars, with the source code and JavaDoc attached as attributes on each executable jar.

So, the steps to establish a library:
  1. Establish the name of the library.
    On a Mac, that would be Eclipse > Preferences > Java > Build Path > User Libraries > New.
  2. Add a jar.
    This would be the executable jar of the utility.
    For example, with JodaTime: joda-time-1.6.2.jar
  3. Notice the 4 sub-items that appear below the added jar:
    - Source attachment:
    - Javadoc location:
    - Native library location:
    - Access rules:
  4. Select ‘Source attachment:’ and click Edit.
    For example, with JodaTime: joda-time-1.6.2-sources.jar
  5. Do similarly if you have Javadoc available.
    For example, with JodaTime: joda-time-1.6.2-javadoc.jar
  6. Click OK to close the prefs.
Now to use those established libraries in your project:
  1. Context-click your project in the Package Explorer.
  2. Choose Build Path > Add Libraries
  3. Choose ‘User Library’ and click ‘Next’.
  4. Check the libraries that you want to add.
    The libraries appear as items inside your project in the Package Explorer.


Letting any Mac use a Remote Disc

Remote Disc is a Mac OS X feature where one Mac uses another Mac's CD/DVD drive as if it were its own, while connected over the network.

Any Mac can choose to share its DVD drive, via System Preferences > Sharing > DVD or CD Sharing. By default, only MacBook Air computers can be a client of Apple's Remote Disc feature.

But when a Mac has a faulty DVD drive, this feature would be handy as a workaround until the drive can be repaired. So can we get a non-Air Mac to be a Remote Disc client? Yes. This blog post explains that writing two settings via the command line in the Terminal app tells the Finder to look for Remote Disc servers on the network.

defaults write com.apple.NetworkBrowser EnableODiskBrowsing -bool true

defaults write com.apple.NetworkBrowser ODSSupported -bool true

Then restart your Mac. "Remote Disc" should appear in the sidebar of a Finder window.


Work Music

soma fm → Streams. Ambient streams. Keeps the hands typing and the brain coding.

In AAC format, as well as the older MP3. iTunes-friendly links.

One stream in particular works for me: "Mission Control" where ambient music is ingeniously overlaid with the chatter patter of real NASA mission control personnel and astronauts -- sometimes live, sometimes recorded.


Location of Postgres Logs on a Mac

For Postgres 9.0.4 on Mac OS X 10.6.7, I found logs here:


Accessing Postgres On The Command-Line on Mac OS X

I simply wanted to restart the Postgres server. I'm trying to enable SSL/TLS, and such settings are checked only when Postgres starts. Postgres is set to auto-start when the Mac boots, if you install using EnterpriseDB's free Mac installer.  But I want to restart Postgres manually, without rebooting the Mac.

You'd think restarting Postgres would be easy, but no. The pgAdmin program is not built to stop or start the Postgres server, at least not that I could find. The command line tool "pg_ctl" can check the status of a server as well as start and stop. That command even has a handy "restart" option that effectively does both the stop and start. But I cannot run that pg_ctl from my own command-line in the Terminal app. The command must be run as the 'postgres' unix user created by the Postgres installer.

So, the answer is simple, right? Run the command as "sudo", the superuser? That usually is the answer with Mac command-line limitations. But, no, not in this case. The command must be run as another user, not just with enhanced privileges. So a bit of googling might lead you to use the "su" command to act as another user, like this:
  su postgres
This fails, though I don't fully understand why (for one thing, plain 'su' asks for the postgres user's own password, which you may not know). After much googling, I finally found the trick:
  sudo su - postgres
It takes the combination of "sudo" and "su". Also notice the hyphen with spaces around it; it gives you a login shell with the postgres user's own environment.

Now I can (I mean, the 'postgres' user can) run Postgres utility programs. The next problem is that those utilities are not found automatically from the command line. So instead of just typing "pg_ctl" in the Terminal, type the entire path to the program, such as:
  /Library/PostgreSQL/9.0/bin/pg_ctl
Tip: Rather than type, just find the programs in the Finder (or the very handy Path Finder app), then drag-and-drop to the Terminal app.

Yet another problem to solve. Some of those commands need to know where the Postgres "data" folder is. Again, you can drag and drop the path, or use Path Finder's Copy Path > Unix menu item.

So for example, to check the status of the Postgres server, run these two lines in the Terminal. Run them separately, one after the other.
  sudo su - postgres
  /Library/PostgreSQL/9.0/bin/pg_ctl status -D /Library/PostgreSQL/9.0/data

To avoid including the full paths, you can tell your Mac to include those folders when looking for programs. But that’s a topic for another day.

I'm using Postgres 9.0.4 on Mac OS X 10.6.7.


I Wish Postgres Had…

Here is my wish list for Postgres features.

  • Feedback system
    I have yet to learn how to submit bugs & feature requests to the Postgres developers. So if anyone knows how I should relay this list, please post a comment to tell me.
  • Multiply String function
    It can be surprisingly handy to have a command that takes a string, repeats that string a certain number of times, and returns the result. One usage is padding a string with spaces, underscores, or periods, as seen in a book's Table Of Contents. The command could be added to this collection, and might look like this:
    Multiply( string, numberOfTimes ) --> String
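As it happens, Postgres ships a built-in repeat( text, integer ) function that behaves much like this wished-for Multiply:

```sql
-- Dot-leader padding, as in a book's Table Of Contents.
SELECT 'Chapter 1 ' || repeat('.', 30) || ' 5';

-- repeat('ab', 3) yields 'ababab'.
SELECT repeat('ab', 3);
```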


Postgres Server Settings

Postgres has hundreds of settings in its configuration. Nearly all of them have defaults appropriate to most of us most of the time. Here's a few you may want to consider changing.

These settings are found in the pgAdmin app menu Tools > Server Configuration > postgresql.conf.

• superuser_reserved_connections

Sets aside a few of the maximum number of connections for use only by the Postgres superuser, usually named 'postgres'. Without this reservation, if your server reaches its connection limit you would not be able to log on as an administrator. That's not good, so keep at least the default of 3 reserved connection slots.
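To check the values currently in effect, run this from any SQL window:

```sql
-- Defaults to 3; the reserved slots come out of max_connections.
SHOW superuser_reserved_connections;
SHOW max_connections;
```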


Tips On Creating Login Forms

This article in Smashing Magazine has a bunch of tips and techniques to consider when building Login forms and Account Creation forms. For example, for web forms it gives alternatives to those annoying Captchas, such as the Honeypot Captcha and timestamp analysis.


Uninstalling Postgres

Dang! I lost the password to one of my Postgres installations. Fortunately, it had not been used so no great loss. But how to uninstall both the Postgres software, and the 'postgres' unix user on Mac OS X?

I found this posting telling me how to uninstall. I've written my own explanation below.

I did this with Postgres 9.0.4 with the Mac installer provided by EnterpriseDB, on Mac OS X 10.6.7 Snow Leopard.

The steps are:
  1. Run the uninstaller app.
  2. Delete the enclosing folder, and provide Mac system password to authorize deleting the 'data' folder.
  3. Delete the 'postgres' Unix user.

Run the uninstaller app installed as part of the Postgres package. However, the uninstaller is not located in the Applications > PostgreSQL folder. Instead, look in the root folder (your hard disk) > Library > PostgreSQL > 9.0 > uninstall-postgresql.app. Running that app will remove all of Postgres except for two pieces:

• The folder enclosing the uninstaller, and the all-important "data" folder containing any databases you created.

The "data" folder is special because it belongs to the "postgres" Unix user created on your Mac during installation. So the admin Mac user with which you are using the Finder does not have direct access to that folder. Fortunately, the regular admin Mac user can move that folder to the trash, though you will be prompted for your Mac password.

• 'postgres' Unix user

Killing that Unix user, and its password, requires the use of Terminal.app found in the "Utilities" folder of the "Applications" folder.

Type the following into the Terminal:
    sudo dscl . delete /users/postgres

The 'sudo' means you are invoking special privileges as a system administrator. So you’ll be prompted to enter your Mac system password. The 'dscl' command operates on the Mac's Directory Service, where user accounts are recorded.

With that you are done. You may now reinstall Postgres. Only this time, write down your password in a safe place immediately, as I learned the hard way.

If for some reason you need or want to use the command line to uninstall Postgres rather than the uninstaller app, see the posting linked above.


Automatically Record History of Field Changes in Postgres (Dynamic Triggers in PL/pgSQL)

Hoorah! I was able to complete my attempt at writing a single PL/pgSQL function in Postgres 9.0.4 to create history records tracking individual field value changes generically for all my tables. Some developers call this an "audit trail", though an accountant might say otherwise.

Special thanks to John DeSoi on the General Postgres mailing list for pointing me to a crucial code example that makes PL/pgSQL interpret a dynamically built string such as:
   "OLD." || myColumnNameVar
as a reference to an actual field value such as:
   OLD.first_name    (for example)

The crucial line is:
   EXECUTE 'SELECT ($1).' || ri.column_name || '::text' INTO t USING NEW;

I'm new to SQL and Postgres, so I may be wrong, but my interpretation of that line is this:
Since there appears to be no way to make the PL/pgSQL interpreter interpret our desired string, the trick is to step outside of PL/pgSQL and instead use the SQL interpreter. I had incorrectly thought of the PL/pgSQL interpreter as the same thing as the SQL interpreter, or rather as incorporating the SQL interpreter. Once I saw that line of example code, I realized the two interpreters are separate. The language, the syntax, of PL/pgSQL is a superset of SQL, but the interpreters are separate, with one calling the other.
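To see the trick in isolation, here is a minimal sketch you can run as an anonymous DO block (Postgres 9.0 or later; the table and column names are hypothetical, invented just for this demo):

```sql
CREATE TEMP TABLE demo_ ( first_name text );
INSERT INTO demo_ VALUES ( 'Alice' );

DO $$
DECLARE
    colName text := 'first_name';  -- Column name known only at runtime.
    r demo_%ROWTYPE;
    t text;
BEGIN
    SELECT * INTO r FROM demo_ LIMIT 1;
    -- Build the "r.first_name" reference as a string, and let the SQL
    -- interpreter (not the PL/pgSQL interpreter) resolve it.
    EXECUTE 'SELECT ($1).' || colName || '::text' INTO t USING r;
    RAISE NOTICE 'Value of %: %', colName, t;
END $$;
```

The NOTICE should report the value 'Alice', extracted via a column name that was only a string variable.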

Brilliant! It works, and it seems to be fast enough, at least for my needs.

I also found this blog post on working around the impossibility of dynamic triggers in PL/pgSQL.

I'll share my current code & table structure below. Caveat: This code has not yet been thoroughly tested, nor has it been deployed. I only finalized it today.

[A] I'm working on a greenfield project, where:
  • I've built every table to have a primary key of type uuid named exactly "pkey_".
  • Every table has a TIMESTAMPTZ field named "record_modified_".

My approach below hard-codes these assumptions.
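For illustration, a table following those conventions might be defined like this (the "customer_" table and its "name_" column are hypothetical, not from my project):

```sql
CREATE TABLE customer_
(
 pkey_ uuid NOT NULL DEFAULT uuid_generate_v1mc(),
 record_modified_ timestamp with time zone NOT NULL DEFAULT clock_timestamp(),
 name_ character varying(120) NOT NULL DEFAULT '',
 CONSTRAINT customer_primary_key_ PRIMARY KEY (pkey_)
);
```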

[B] I have this "history_" table:

CREATE TABLE history_
(
 pkey_ uuid NOT NULL DEFAULT uuid_generate_v1mc(), -- The primary key for this table, though no primary key constraint was created (for the sake of performance and conservation). This column and the timestamp_ column are the only two columns about this table itself. All other columns are about the inserted/modified/deleted record in some other table.
 table_name_ character varying(120) NOT NULL, -- Name of the table whose row is being affected (inserted, deleted, or modified).
 column_name_ character varying(120) NOT NULL, -- Name of the column in some other table whose row value is being modified. This column's value is an empty string if the operation was DELETE.
 timestamp_ timestamp with time zone NOT NULL DEFAULT clock_timestamp(), -- The moment this record was created. Using the clock_timestamp() function as a default, to capture the actual moment in time rather than the moment when the transaction began.
 db_user_name_ character varying(120) NOT NULL DEFAULT "current_user"(), -- The name of the Postgres user logged in to this database connection/session.
 app_name_ character varying(120) NOT NULL DEFAULT current_setting('application_name'::text), -- The name of the application connected to the database. May also include the version number of the app, and the name of the human user authenticated within the app.
 old_value_ character varying(120) NOT NULL DEFAULT ''::character varying,
 new_value_ character varying(120) NOT NULL DEFAULT ''::character varying,
 uuid_ uuid NOT NULL, -- The UUID of the row being affected: the row being inserted, updated, or deleted. Assumes every table whose history is being recorded uses the 'uuid' data type as its primary key.
 operation_ character varying(120) NOT NULL, -- What database operation resulted in this trigger running: INSERT, UPDATE, DELETE, or TRUNCATE.
 table_oid_ oid NOT NULL, -- The oid of the table whose record is being modified. May be helpful if a table name changes over time.
 ordinal_position_of_column_ integer NOT NULL, -- The position of the affected column in its table. Every new column gets a number, incremented by one for each. This may be helpful when analyzing changes across a stretch of time during which a column's name was changed. Apparently columns have no oid, so we are recording this number instead.
 transaction_began_ timestamp with time zone NOT NULL DEFAULT transaction_timestamp() -- The time when the current transaction began. Can act like a transaction identifier, to group multiple "history_" rows of the same transaction together. This is not foolproof, as multiple transactions could possibly start in the same split-second moment. Assuming the computer's clock has a fine resolution, the chance of such a coincidence should be quite minuscule.
);

I do not have a primary key constraint for this table. The "pkey_" column acts as a primary key, but there is no need for an index or uniqueness testing on this special table. I suppose I could even drop this column entirely, since this kind of logging table will never have related records, nor do we ever care about identifying a particular row. I'll leave it for now, just for the heck of it.
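Once history is being recorded, a query such as this one (my own sketch, not production-tested) shows recent changes with the rows of each transaction grouped together:

```sql
SELECT transaction_began_, table_name_, column_name_, old_value_, new_value_
FROM history_
ORDER BY transaction_began_ DESC, table_name_, ordinal_position_of_column_;
```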

[C] For every table whose field-level value changes I want to track, I create a trigger like this:

CREATE TRIGGER XXX_trigger_history_
BEFORE INSERT OR UPDATE OR DELETE
ON XXX
FOR EACH ROW
EXECUTE PROCEDURE make_history_();

where 'XXX' is the name of the particular table.
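For instance, for a hypothetical table named "customer_", the full statement might read (assuming the trigger function described in [D] is named "make_history_"):

```sql
CREATE TRIGGER customer_trigger_history_
BEFORE INSERT OR UPDATE OR DELETE
ON customer_
FOR EACH ROW
EXECUTE PROCEDURE make_history_();
```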

[D] I created this function:

CREATE OR REPLACE FUNCTION make_history_()
    RETURNS trigger AS
$BODY$
/*     Purpose: Make a history of changes to most fields in the table calling this trigger function.
       This kind of history tracking is also known as an "audit trail".
       This function works by detecting each change in value for important fields in a certain table.
       This trigger function then creates a row in the "history_" table.
    This kind of feature is often called an "audit trail" by software developers. I avoid using that term in this context as a real
    audit trail in accounting terms involves more than this simple field change tracking.
*/
/*    © 2011 Basil Bourque. This source code may be used freely forever by anyone taking full responsibility for doing so, without warranty.
    Thanks so very much to John DeSoi of the pgsql-general@postgresql.org mailing list for pointing me to this crucial code example:
        EXECUTE 'SELECT ($1).' || ri.column_name || '::text' INTO t USING NEW;
    Before reading that example, my previous efforts led me to conclude a generic history facility written in PL/pgSQL was impossible.
*/
/*     We make these assumptions about any table using this function in its trigger:
           • Has a primary key named "pkey_" of type uuid.
           • Has a field tracking the datetime the record was last modified, named "record_modified_" of type timestamptz.
           • Is in the default/current schema.
       While it might be nice to rewrite this function to escape these assumptions, I've spent all my energies to get this far.
    I welcome feedback from anyone who wants to take this further.

    For each table on which you want history, create a trigger by executing SQL like this:
        CREATE TRIGGER XXX_trigger_history_
        BEFORE INSERT OR UPDATE OR DELETE
        ON XXX
        FOR EACH ROW
        EXECUTE PROCEDURE make_history_();
    where XXX is the name of the specific table.
*/
/*     Notes:
    The 'OLD' and 'NEW' variables represent the entire row whose INSERT/UPDATE/DELETE caused this trigger to run.
       The 'TG_xxx' variables are special variables created automatically by Postgres for the trigger function.
       For example, TG_OP indicates which modification operation is happening: INSERT, UPDATE, or DELETE.
       "clock_timestamp()" gets the actual time at the moment of execution. In contrast, most other timestamp
       functions return the time when the current transaction began.
    For more information, see: http://www.postgresql.org/docs/current/static/functions-datetime.html
    The "history_" table also includes a column "transaction_began_" defaulting to "transaction_timestamp()". This timestamp can act
    like a transaction identifier, to group multiple "history_" rows of the same transaction together. This is not foolproof, as
    multiple transactions could possibly start in the same split-second moment. Assuming the computer's clock has a fine resolution,
    the chance of such a coincidence should be quite minuscule. If someone knows a way to get a true transaction id, please share.
*/
/*  History:
    2011-04-31    • Published on the general Postgres mailing list.
    2011-05-01    • Revised to not ignore the ".record_created_" field.
                  • Published on my blog at http://crafted-software.blogspot.com/.
*/
DECLARE
    ri RECORD; -- About this data type "RECORD": http://www.postgresql.org/docs/current/static/plpgsql-declarations.html#PLPGSQL-DECLARATION-RECORDS
    oldValue TEXT;
    newValue TEXT;
    isColumnSignificant BOOLEAN;
    isValueModified BOOLEAN;
BEGIN
    /*RAISE NOTICE E'\n    Running function: make_history_ ----------------\n\n    Operation: %\n    Schema: %\n    Table: %\n',
        TG_OP, TG_TABLE_SCHEMA, TG_TABLE_NAME;*/
    IF (TG_OP = 'INSERT') OR (TG_OP = 'UPDATE') THEN
        NEW.record_modified_ = clock_timestamp(); -- Record the moment this row is being saved.
        FOR ri IN
            -- Fetch a ResultSet listing columns defined for this trigger's table.
            SELECT ordinal_position, column_name, data_type
            FROM information_schema.columns
            WHERE table_schema = quote_ident(TG_TABLE_SCHEMA)
            AND table_name = quote_ident(TG_TABLE_NAME)
            ORDER BY ordinal_position
        LOOP
            -- For each column in this trigger's table, copy the OLD & NEW values into respective variables.
            -- NEW value
            EXECUTE 'SELECT ($1).' || ri.column_name || '::text' INTO newValue USING NEW;
            -- OLD value
            IF (TG_OP = 'INSERT') THEN   -- If operation is an INSERT, we have no OLD value, so use an empty string.
                oldValue := ''::varchar;
            ELSE   -- Else operation is an UPDATE, so capture the OLD value.
                EXECUTE 'SELECT ($1).' || ri.column_name || '::text' INTO oldValue USING OLD;
            END IF;
            -- Make noise for debugging.
            /*RAISE NOTICE E'\n    Column #: %\n    Name: %\n    Type: %\n    Old: %\n    New: %\n',
                ri.ordinal_position, ri.column_name, ri.data_type, oldValue, newValue;*/
            --    ToDo: Add code to throw an Exception if the primary key value is changing (other than from NULL on an INSERT).
            --    ToDo: Add code to ignore columns whose data type does not cast well to TEXT/VARCHAR.
            --    Ignore some columns:
            --        • Those whose names are marked with '_x_'.
            --        • The primary key.
            --        • Our timestamp field recording the row's most recent modification.
            isColumnSignificant := (position( '_x_' in ri.column_name ) < 1) AND (ri.column_name <> 'pkey_') AND (ri.column_name <> 'record_modified_');
            IF isColumnSignificant THEN
                isValueModified := oldValue <> newValue;  -- If this nthField in the table was modified, make history.
                IF isValueModified THEN
                    /*RAISE NOTICE E'Inserting history_ row for INSERT or UPDATE.\n';*/
                    INSERT INTO history_ ( operation_, table_oid_, table_name_, uuid_, column_name_, ordinal_position_of_column_, old_value_, new_value_ )
                    VALUES ( TG_OP, TG_RELID, TG_TABLE_NAME, NEW.pkey_, ri.column_name::VARCHAR, ri.ordinal_position, oldValue::VARCHAR, newValue::VARCHAR );
                END IF;
            END IF;
        END LOOP;
        RETURN NEW;
    ELSIF (TG_OP = 'DELETE') THEN
        /*RAISE NOTICE E'Inserting history_ row for DELETE.\n';*/
        --    Similar to INSERT above, but refers to OLD instead of NEW, and passes empty values for the last 4 fields.
        INSERT INTO history_ ( operation_, table_oid_, table_name_, uuid_, column_name_, ordinal_position_of_column_, old_value_, new_value_ )
        VALUES ( TG_OP, TG_RELID, TG_TABLE_NAME, OLD.pkey_, ''::VARCHAR, 0, ''::VARCHAR, ''::VARCHAR );
        RETURN OLD;
    END IF;
    /* Should never reach this point. Branching in code above should always reach a call to RETURN. */
    RAISE EXCEPTION 'Unexpectedly reached the bottom of this function without calling RETURN.';
END;
$BODY$
    LANGUAGE plpgsql;


[end of code]

The beauty of this is that a history record is created for each field edit regardless of the avenue by which the data was changed. If my desktop app modifies data, a history row is created. If my web app does so, a history row is created. Ditto if I use interactive SQL to fix some data, or use pgAdmin to affect data. In all cases, history is made.


There's More to Flash Than SSDs

For those of you considering using SSDs on your servers, you may want to also consider the ioDrive products from Fusion-io.

This company makes enterprise-class storage on a board populated with flash and managed by their own supposedly-sophisticated drivers. The board + drivers are meant to get around the problems of SSDs, such as write-failures. They make both SLC and MLC products, at different sizes, to meet different budgets and purposes.

I tried them about 3 years ago on a database server (not Postgres). The real-world speed was excellent, yet disappointing in that it performed merely as well as our high-end RAID-10 from HP. We were looking for even more speed, and were promised that, but did not see it in our trials. Two caveats:
• This was using their first attempt at Windows drivers.
• We may not have tuned the settings properly.
In the end, we chose to keep our RAID for the time being.

So, while I can't specifically recommend their products, I certainly suggest considering them. They have drivers for Linux and Windows, but apparently their rumored Mac OS X drivers never came to fruition.

Other benefits beyond speed include size, power, and heat. An internal board with flash saves on all three, compared to a RAID made of either discs or SSDs.

As an aside, this company is also known for having put The Woz to work as their "Chief Scientist". This may be the closest he's come to a regular job since leaving Apple.


Query for Listing of Columns in a Table in Postgres 9

Here's my code for getting a list of a table's columns' name and data type.

One important trick here is skipping columns which have been dropped. The definition of a dropped column remains, so the query needs to filter that out with "attisdropped = False".

SELECT attname::varchar as "Column Name", pg_type.typname::varchar as "Data Type", pg_attribute.attnum as "Position in Table"
FROM pg_attribute, pg_class, pg_type
WHERE attrelid = pg_class.oid
AND pg_attribute.attisdropped = False
AND relname = 'YourTableNameGoesHere'
AND attnum > 0
AND atttypid = pg_type.oid

That code took a lot of googling and trial and error. Hope it helps save you some time and trouble.

Update as of 2011-04-29

Here's another approach I found here:

SELECT ordinal_position, column_name, data_type
FROM information_schema.columns
WHERE table_schema = 'YourSchemaNameGoesHere' -- Default is usually 'public'.
AND table_name = 'YourTableNameGoesHere'
ORDER BY ordinal_position;

"information_schema" is a set of meta-data defined by the SQL standard. So the code at the top of this post is Postgres-specific while the last block of code should work in any database complying with the standard.

By the way, if you see gaps in the sequence of the ordinal position numbers, that means columns were deleted from your table. Example: 1, 2, 3, 5, 6 means the 4th added column was deleted from this table's definition.
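To see such a gap for yourself, try a quick experiment (the table name is hypothetical):

```sql
CREATE TABLE gap_demo_ ( a integer, b integer, c integer, d integer );
ALTER TABLE gap_demo_ DROP COLUMN b;

SELECT ordinal_position, column_name
FROM information_schema.columns
WHERE table_name = 'gap_demo_'
ORDER BY ordinal_position;
-- Positions list as 1, 3, 4, with a gap where column "b" used to be.
```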


Passing a UUID Value to a Function in PL/pgSQL

When writing a PL/pgSQL function in Postgres 9.0.x, I ran into a Catch-22 with UUIDs. I tried to write this function to log changes to rows of any table into this "history_" table:

CREATE OR REPLACE FUNCTION make_history_( varchar, varchar, varchar, varchar, varchar, varchar, text ) RETURNS boolean AS $$
BEGIN
  INSERT INTO history_ ( app_user_name_, table_name_, column_name_, affected_uuid_, old_value_, new_value_, notes_, triggered_ )
  VALUES ( $1, $2, $3, $4, $5, $6, $7, True );
  RETURN True;
END;
$$ LANGUAGE plpgsql;

The problem is that our programming uses long hex strings as a representation of the actual 128-bit value that is a UUID. We have no way to directly represent the 128 bits in our source code. So Postgres is generally very kind about accepting a hex string in place of an actual 128-bit value. If we pass Postgres a hex string, Postgres transforms the string into an actual 128-bit value, and then stores those 128 bits efficiently in the database. For SELECT statements, the reverse happens: Postgres generates a hex string and sends it as text to the client. This works so well in Postgres that we programmers begin to think of the hex string as the actual UUID – but it is not.
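You can see this kindness in plain SQL (the table name is hypothetical, and the literal is just an example value):

```sql
CREATE TABLE uuid_demo_ ( id_ uuid NOT NULL );

-- Postgres transforms the hex string literal into an actual 128-bit value for storage.
INSERT INTO uuid_demo_ ( id_ ) VALUES ( 'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11' );

-- And transforms the 128 bits back into a hex string sent as text to the client.
SELECT id_ FROM uuid_demo_;
```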

Where this UUID<-->HexString translation broke down for me was passing a hex string into the function above. Notice the 4th argument. The hex string is passed in as varchar, but then we turn around and save it to a table where the column "affected_uuid_" is actually of type "uuid". The PL/pgSQL interpreter is a little overeager and reports a data-type mismatch error: it notices that we are passing a varchar but saving to a column of a different type. I suppose this is a bug in PL/pgSQL, since we can do this with plain SQL in Postgres.

Whether bug or feature, I found a workaround. Declare a variable in your PL/pgSQL code, make it of type "uuid". Assign the argument's value to that variable. Lastly, use that variable in the INSERT command rather than the $x argument itself.

CREATE OR REPLACE FUNCTION make_history_( varchar, varchar, varchar, varchar, varchar, varchar, text ) RETURNS boolean AS $$
DECLARE
  uuid_arg uuid;
BEGIN
  uuid_arg := $4;
  INSERT INTO history_ ( app_user_name_, table_name_, column_name_, affected_uuid_, old_value_, new_value_, notes_, triggered_ )
  VALUES ( $1, $2, $3, uuid_arg, $5, $6, $7, True );
  RETURN True;
END;
$$ LANGUAGE plpgsql;

Folks on the Postgres General mailing list taught me another workaround: casting. Either the standard CAST command or Postgres' own double-colon syntax can be used.
    CAST( $4 AS uuid )
    $4::uuid

So we can write that VALUES line either way:
  VALUES ( $1, $2, $3, CAST( $4 AS uuid ), $5, $6, $7, True );
  VALUES ( $1, $2, $3, $4::uuid, $5, $6, $7, True );


Communicating Between Apps

Folks on the REAL Studio forums and mailing list have been asking about how to get their REALbasic apps to talk to other apps such as Perl or PHP server-side apps. There is no magic answer. REAL Studio produces apps like any other language does, and has a good TCP/IP socket class. So your RS apps can reach out and touch other apps in the same way as other languages. Here are a few of those ways.


Open a TCP socket directly to one another. REAL Studio has good classes to make a socket connection, either for listening (server) or initiating (client). Nearly every language has such libraries either built-in or available.

You then pass any series of octets you want. This is totally flexible, but you have the burden of figuring out your own protocols for packaging and shipping data back and forth. This process of exchanging data is known as marshaling or serialization.

This approach works even if both apps are running on the same machine. Two local apps can make a socket connection even without a network. Simply use the internet/host address "127.0.0.1", or the domain name "localhost", both of which mean "this computer".


Either app can act as a web server (an HTTP server), returning plain text rather than HTML in response to certain URLs requested by the other app. This is a super-simple approach for fetching one or a few pieces of data.

Instead of a web browser making the HTTP request, the app uses a socket or HTTP-client library to make the HTTP request.

Two good books on HTTP include these from O'Reilly.
HTTP Pocket Reference
HTTP: The Definitive Guide

Remember: HTTP <> HTML
HTTP is the usual way we deliver HTML, but the two are not bound together. You can make requests and responses over HTTP that have nothing to do with HTML and web pages. Likewise, you can deliver HTML content without HTTP, such as "rich text" email messages.

For example, a request to such a URL might return a single line of plain text rather than a bunch of HTML making up a web page.

Web Service

Either app may be able to act as a Web Service server, while the other makes a SOAP, REST, or XML-RPC call. XML-RPC is the simpler and saner predecessor to SOAP, as described well in this book.

Or both apps can read and write data via a 3rd server's services.

This idea is similar to the bullet above, "HTTP", but formalizes how to make requests with arguments, and how to represent data.


One app writes a file in a location to be read by the other app.


Either app reads or writes data to a database server such as Postgres.

You can even handle requests and responses this way. Create a table of 'requests' or 'to_do_items', where either app creates a row in that table to ask for work to be done, and the other app is in a loop scanning for such rows being inserted.
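Sketching that idea in SQL (all names hypothetical):

```sql
CREATE TABLE request_
(
 pkey_ serial PRIMARY KEY,
 task_ character varying(120) NOT NULL,
 created_ timestamp with time zone NOT NULL DEFAULT clock_timestamp(),
 completed_ timestamp with time zone -- Remains NULL until the other app finishes the work.
);

-- One app asks for work to be done:
INSERT INTO request_ ( task_ ) VALUES ( 'rebuild_report' );

-- The other app, in its loop, scans for outstanding work:
SELECT pkey_, task_ FROM request_ WHERE completed_ IS NULL ORDER BY created_;

-- ...then marks each finished item as done:
UPDATE request_ SET completed_ = clock_timestamp() WHERE pkey_ = 1;
```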

Message Queue

Both apps can use a 3rd party message queue service.

You can even use email for this, where each app gets their own email account on your email system so they can send and receive messages on their own.


Get All Column Names of Table In Postgres 9

If you need to get a list of the column names in a table in Postgres, here's a line of REALbasic code with the SQL SELECT statement:

sql = "SELECT attname, attnum FROM pg_attribute, pg_class WHERE attrelid = pg_class.oid AND relname = " + DBUtils.escapeAndAddQuotes(tableName) + " AND attnum > 0 ;"

You can see that a join is required across two tables of Postgres' meta-data. I've not studied Postgres' meta-data tables, but worked out this code with some googling and experimenting.
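Stripped of the REALbasic string-building, the SQL itself reads:

```sql
SELECT attname, attnum
FROM pg_attribute, pg_class
WHERE attrelid = pg_class.oid
AND relname = 'YourTableNameGoesHere'
AND attnum > 0 ;
```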

The call to DBUtils.escapeAndAddQuotes is unrelated. That is my own method for 3 things:
  • Verify the string is in UTF-8 encoding.
  • Replace any occurrence of a single quotation mark (Unicode APOSTROPHE-QUOTE) with two such characters, to make the string safe for SQL execution.
  • Add single quote marks around the string for use as a literal in the SQL execution.

An alternative approach suggested by others is this:
"select * from tablename where 0=1"
Obviously that is a hack, though it may be an efficient and effective one. Instead I chose the other approach.


Example code to drop & create table, and create record in REALbasic.

Here's a snippet of REALbasic code for when you want to create a bogus table to experiment with.

I wrote this in REAL Studio 2011 Release 1, but it should work in older versions too.

// Purpose: Create a table in Postgres, insert a row, and retrieve that row.
// Shows a bug in the Postgres database driver of REAL Studio 2011 Release 1
// where dates ('timestamp with time zone' at least) are nil in the RecordSet.

// © 2011 Basil Bourque. This source code may be used freely by anyone taking full responsibility for doing so.

dim db as PostgreSQLDatabase = new PostgreSQLDatabase
dim sql as String
dim rs as RecordSet

// ---------------- Prepare connection.
db.Host = "localhost" // Or the address of your database server.
db.port = 5432 // 5432 is default port for PostgreSQL.
//db.Encryption = 128
db.UserName = "postgres" // "postgres" is the default admin user name.
db.Password = "YourPasswordGoesHere"
db.DatabaseName = "YourDatabaseNameGoesHere" // Postgres supports multiple databases. Which one do you want? Default is named the same name as user, such as 'postgres'.

// ---------------- Try connecting.
dim status as String
if( db.Connect ) then
status = "Success connecting to db server"
else // Error connecting
status = "Error occurred when connecting to database server. Code: " + str(db.ErrorCode) + " " + db.ErrorMessage
end if

// ---------------- Create a table named "bogus_spacetime_".
sql = sql + "DROP TABLE IF EXISTS bogus_spacetime_ ;" + EndOfLine
db.SQLExecute( sql)
if(db.Error) then
status = "Error when running SQLExecute. Code: " + str(db.ErrorCode) + " " + db.ErrorMessage
end if

// Assumes you have enabled the UUID generation feature in PostgreSQL.
// If not, kill or edit 2 lines below ('pkey_' and 'CONSTRAINT').
sql = "CREATE TABLE bogus_spacetime_" + EndOfLine
sql = sql + "(" + EndOfLine
sql = sql + " pkey_ uuid NOT NULL DEFAULT uuid_generate_v1mc()," + EndOfLine
sql = sql + " moment_ timestamp with time zone NOT NULL DEFAULT clock_timestamp()," + EndOfLine
sql = sql + " CONSTRAINT bogus_spacetime_primary_key_ PRIMARY KEY (pkey_)" + EndOfLine
sql = sql + ")" + EndOfLine
sql = sql + "WITH (" + EndOfLine
sql = sql + " OIDS=FALSE" + EndOfLine
sql = sql + ");" + EndOfLine
db.SQLExecute( sql)
if(db.Error) then
status = "Error when running SQLExecute. Code: " + str(db.ErrorCode) + " " + db.ErrorMessage
end if

sql = "ALTER TABLE bogus_spacetime_ OWNER TO postgres;"
db.SQLExecute( sql)
if(db.Error) then
status = "Error when running SQLExecute. Code: " + str(db.ErrorCode) + " " + db.ErrorMessage
end if

// ---------------- Create a record
sql = "INSERT INTO bogus_spacetime_ DEFAULT VALUES;"
db.SQLExecute( sql)
if(db.Error) then
status = "Error when running SQLExecute. Code: " + str(db.ErrorCode) + " " + db.ErrorMessage
end if

// ---------------- Execute a "SELECT *"
sql = "SELECT * FROM bogus_spacetime_;"
rs = db.SQLSelect( sql)
if(db.Error) then
status = "Error when running a SELECT. Code: " + str(db.ErrorCode) + " " + db.ErrorMessage
else // Else normal.
if( rs = nil ) then
status = "ERROR - RecordSet is nil"
else
status = "Record count = " + str( rs.RecordCount ) // Replace with more useful code.
// Access the timestamp value from the 2nd column.
dim f as DatabaseField = rs.IdxField(2) // Despite name, "IdxField" is 1-based.
// BUG = The DatabaseField object contains a nil value rather than a date.
dim d as Date = f.DateValue // Crash.
MsgBox d.SQLDateTime
end if
end if


// ---------------- Catch any exception thrown by code above.
Exception err as RuntimeException
MsgBox "Exception: " + err.Message
if( db <> nil) then
db.Close // Close the connection after trouble.
end if

// ---------------- End of code.


Subclassing a Window in REALbasic

In REAL Studio 2010, a window cannot inherit from another window. But a window can inherit from a class whose "Super" is set to "Window".

For example, if you want to track each individual window, you could assign a UUID to each window instance. That UUID value could be stored as a hex string in a property on each window. Rather than create the property on each window definition (Customer window, Purchase Order window, etc.), let those window definitions inherit the UUID property.

(1) Add a class (not a Window) named "UuidWindow".

(2) On that class, set its "Super" property to "Window".

(3) On that class, add a property named "UUID" of type String.

(4) Open each of your window definitions (Customer window, Purchase Order window, etc.) and change their "Super" property to "UuidWindow".

Now each of those windows can refer to "me.UUID".


The 2 Passwords for Postgres On Your Mac

There are 2 passwords involved with Postgres 9 on your Mac:

• The Postgres installer asks for your usual Mac admin account password, to get permission for 2 operations: (1) to install stuff on your computer and (2) to create a special Unix user account named (by default) 'postgres'.

The actual disk files that make up your databases, the files containing your data, are stored in a folder owned by that 'postgres' user rather than your normal Mac user account. The reason is security: If your usual user account is compromised, at least your data files remain behind the wall of that other user.

• The Postgres installer also prompts you to create a password for that 'postgres' user. 

Make it a good password (long, with digits and other characters, not simply dictionary words).

When you run the 'pgAdmin' app, it will prompt you for that 'postgres' password to connect to the database server as that 'postgres' user.

--Basil Bourque

The Back Door to Postgres

When you install Postgres 9 on your Mac, the installer automatically creates a user named "postgres" and prompts you to invent a password for that user. The goal is security, so that if your main user account is compromised, your database's contents may remain safely within folders controlled by that other 'postgres' user.

Normally we never need access to the actual files that make up the database. We have command line access via the 'psql' tool. We have GUI admin apps such as the bundled 'pgAdmin', and from countless 3rd-party vendors. We have the 'pg_dump' and 'pg_restore' tools to back up and restore the database's contents. Because of all those tools, you may never need to actually pierce the veil to peer at the actual disk files that make up the database.

But just in case you ever do have such a need, or curiosity gets the better of you, you can use the Terminal.app this way:
sudo su postgres
Type that into Terminal, hit return, and use your regular Mac admin account password as prompted. The word 'postgres' is the name of the special user created by the Postgres installer. Be very careful – you will be working with all the "safeties" turned off, enabling you to take actions that are normally forbidden to protect you from yourself.

To leave that session, type:
exit