Handy Little Items For Postgres

Time Zone Display

Postgres always stores a timestamp with no time zone. The data types TIMESTAMP WITHOUT TIME ZONE and TIMESTAMP WITH TIME ZONE are both misnomers. A better name would be WITH REGARD TO TIME ZONE.

The "without time zone" type ignores any time zone or offset information included with data input. The date-time value is stored without any adjustment and without any recording of the specified time zone or offset. This is rarely useful, as explained by Postgres expert David E. Wheeler.

The "with time zone" type means that any time zone or offset information included with data input is used to adjust the stored date-time to UTC. Any such time zone or offset info is discarded after the adjustment, not stored as the name suggests.
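A sketch of that adjustment behavior (the table and column names here are hypothetical, not from Postgres documentation):

```sql
CREATE TABLE demo_ (
    ts_plain_ TIMESTAMP WITHOUT TIME ZONE,
    ts_zoned_ TIMESTAMP WITH TIME ZONE
);

INSERT INTO demo_ VALUES ( '2014-11-01 12:00:00+05:00', '2014-11-01 12:00:00+05:00' );

-- ts_plain_ holds 12:00:00 with the +05:00 offset silently ignored.
-- ts_zoned_ holds 07:00:00 UTC, adjusted from the +05:00 offset; the offset itself is discarded.
```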

When displaying timestamps, Postgres applies the client's default time zone when generating a string value. What if you want to display the date-time values in UTC?

    SET TIME ZONE 'UTC';

What if you want to display in a specific time zone? Use a proper time zone name, usually a continent/city (or region). Avoid the 3-4 letter codes such as EST or IST, as they are neither standardized nor unique.

    SET TIME ZONE 'America/Montreal';

This command affects the current session only. For example, type it into a "Query" window in pgAdmin.

BYTEA Display

The BYTEA data type in Postgres is akin to a BLOB in other databases. I'm guessing the name is short for "byte array". This data type is a sort of anti-data-type. When specifying BYTEA, you are telling Postgres to not bother parsing or interpreting the data. You are saying "take these bytes as-is and save them to storage without looking at them".

Since Postgres has no idea what those bytes mean, it has no idea how to display them. In the old days, Postgres displayed them as a sequence of escapes. Nowadays you have the choice of displaying them as hexadecimal. But how do you specify which?

    SET bytea_output = 'hex';
    SET bytea_output = 'escape';

This command affects the current session only.
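For example, with each setting in turn:

```sql
SET bytea_output = 'hex';
SELECT 'hi'::bytea;   -- Displays as: \x6869

SET bytea_output = 'escape';
SELECT 'hi'::bytea;   -- Printable ASCII displays as the characters themselves: hi
```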


What Is Whitespace?

In some Java work I needed to scrub some NO-BREAK SPACE characters from some import data. In looking for a command in Java to trim the leading and trailing whitespace, I fell down a Rabbit Hole.

Turns out that the Java String class offers a trim method. But that method has a strange definition of whitespace. Read this blog post by Mike Kaufman for details. The upshot: 'trim' only deletes characters numbered 32 (U+0020, SPACE) and lower.
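That upshot can be seen in a tiny self-contained demo (the class name and strings are mine, not from that post):

```java
public class TrimDemo {
    public static void main ( String[] args ) {
        // A string wrapped in NO-BREAK SPACE (U+00A0) characters.
        String input = "\u00A0hello\u00A0";
        // trim() deletes only characters numbered U+0020 (SPACE) and lower,
        // so the NO-BREAK SPACE characters survive untouched.
        String trimmed = input.trim();
        System.out.println( trimmed.equals( input ) ); // prints: true
    }
}
```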

Then I found an interesting spreadsheet, whitespace? what's that?, listing the various definitions of whitespace in Java and Unicode. There is a lot going on in the nothingness of whitespace!

CharMatcher – Google Guava

Eventually I found a modern, flexible, easy-to-use solution: `CharMatcher` in Google Guava. See their brief guide. By making clever use of Predicate syntax, they make it easy to mix and match various groups of whitespace, invisible, and control characters. You can trim from the front and/or back of a string, replace, and more.

Example usage:

    someText = CharMatcher.WHITESPACE.trimFrom( someText );


Just use 'TEXT' type in Postgres

Sometimes it is the little things that trip you up, especially when learning a new tool. Like which data type to choose for text when creating a new table in a SQL system. Seems simple, but it is not.

Here are two posts that make a strong case for just using TEXT in Postgres while avoiding VARCHAR and CHAR. I agree. Wish I'd seen this long ago.

And for more info, experimentation, and discussion, see the post CHAR(X) VS. VARCHAR(X) VS. VARCHAR VS. TEXT by depesz.

The upshot is:
  • Use the TEXT type for all your textual needs.
  • If constraining the maximum or minimum length, or the content, is important then define a CONSTRAINT. (No need to depend on the max length feature of VARCHAR.)
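For example, a sketch of constraining length with TEXT plus a CHECK constraint rather than VARCHAR(n) (the table and constraint names are invented for illustration):

```sql
CREATE TABLE customer_ (
    id_   SERIAL PRIMARY KEY,
    name_ TEXT NOT NULL
          CONSTRAINT name_length_ CHECK ( char_length( name_ ) BETWEEN 1 AND 100 )
);
```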
As a counter-argument, see IN DEFENSE OF VARCHAR(X) by Leo Hsu and Regina Obe.


Chores In TestFlightApp

The TestFlightApp.com people provide an amazing service with relatively good documentation. Unfortunately they fail to provide even the simplest docs for the regular chores. I perform these chores only on occasion, and always forget the required steps.

So here’s a blog post to help remind myself. This information is for the old site, before Apple acquired this company. Apple has begun pushing this service through its own channels, but I continue to use the old web site; not sure if the new Apple way is different.

Add A New User

Someone wants to join my team of testers. They have no account on TestFlightApp.com.
  1. I login to TestFlightApp.com.
  2. I click the "Dashboard" tab in their web site.
  3. I click the third and last big green button, "Invite People".
  4. I enter their email address, and send an invitation.
  5. Wait for tester to do their part. Email goes out within minutes. If unreceived, remind tester to check their junk filters & folders.
The prospective tester takes their turn.
  1. Tester picks up the iOS device on which they want to test.
  2. On that iOS device tester receives email from TestFlightApp.com, with subject line: "Basil Bourque has invited you to join…".
  3. Tester clicks big green "Accept" button.
    I'm not sure exactly what happens next. I think the button opens a web page to the TestFlightApp.com site which then tries to open a URL for their iOS app called "TestFlight". The user does not have such an app, so the App Store appears with an offer to purchase this app at no cost. 
  4. User proceeds with purchase of "TestFlight" app.
  5. App opens and prompts user to create a personal account on TestFlight, with the user providing their email and inventing a password.
  6. During this process, TestFlight logs this particular device's unique identifier with the TestFlightApp.com web site.
After the user has their device’s identifier logged, TestFlightApp.com sends me an email with the subject line: "So-And-So accepted your invitation". The message includes the device’s identifier.

Register Device Identifiers

A later goal is building a fresh "Provisioning Profile". To do that, we must register with Apple a list of device identifiers authorized to run our app. Our new user's device identifier must be added to our list registered with Apple.

The email mentioned above contains that identifier.

If you lose that email providing the identifier, or you want to look up some other testers’ device identifiers, use the TestFlightApp.com website. Click the main tab for "People". Select the checkboxes of those testers with device(s) that Apple has not yet been made aware of. Over in the upper-right of the TestFlightApp.com web page find the grey "Actions" pop-up menu. Choose the menu item "Export iOS devices". Your web browser then downloads a plain text file of the identifiers of the devices owned by those checkmarked testers.

At this point, I need to let Apple know these device(s) are to be added to my list of authorized testing devices. Remember that Apple limits that list to 100 devices, revised per annum.
  1. I log into http://developer.apple.com.
  2. Click the "Member Center" area of Apple's Developer site.
  3. In the "Developer Program Resources" group box, click the link labeled "Certificates, Identifiers & Profiles – Manage your certificates, App IDs, devices, and provisioning profiles".
    The next page appears with a group box "Certificates, Identifiers & Profiles".
  4. Click "Devices" link. In the side-bar, you should have "Devices" > "All" selected.
  5. I use the web browser's text search feature ("Edit" > "Find" in Safari) to see if the desired device identifier is in the list or not.
  6. Click the "+" button in upper-right to add the identifier. Copy-paste values from the email (or the plain text file as mentioned above). TIP: Include a description of the particular device as part of the tester's name to differentiate when they own multiple devices, such as "Jane Doe – iPad Air".
As a shortcut to manually entering each of several user name + device identifier pairs, you can import a plain text file such as the one you downloaded earlier on this page. The catch is that the entire file is rejected if it contains even a single device identifier already registered. I often find it easier to do one at a time.

Provisioning Profile

Now it is time to build that fresh "Provisioning Profile". This encrypted security document lists the registered device identifiers that Apple is granting permission to run my app without that app being delivered through the App Store. This Provisioning Profile will eventually be integrated into a future build of my app intended for distribution to testers.

To create a fresh Provisioning Profile, use that same "Certificates, Identifiers & Profiles" group box page at the Apple Developer site. 

  1. In the side bar, choose "iOS Apps" > "Provisioning Profiles" > "Distribution". 
  2. Select the "Distribution" > "Ad Hoc" radio button.
  3. Follow the Wizard-like steps.
  4. On the step for "Select devices.", click the checkbox labeled "Select All" and continue.
  5. On the step for "Name this profile and generate.", enter a name similar to "Ad Hoc test distrib 2014-11-01".
  6. On that same step, click the "Generate" button.
  7. On a following step, click the "Download" button to download the newly generated Provisioning Profile to your local Mac.
    That file will be named something like "Ad_Hoc_test_distrib_20141101.mobileprovision".
  8. Locate that local file on your Mac.
  9. Open your app’s project in Xcode.
  10. Double-click on the .mobileprovision file causing it to be opened by Xcode.
Now I rebuild the project to incorporate that fresh Provisioning Profile. I am using Xcode 5 (not 6).
  1. Hold down the OPTION key while choosing the menu item "Product" > "Clean Build Folder" and confirm.
  2. Set the "Active Scheme" pop-up in far upper-left corner to "iOS Device".
  3. Choose menu item "Product" > "Archive".
    The Organizer window appears.
  4. In the Organizer, click "Distribute" button.
  5. In Wizard-like window, choose radio button for "Save for Enterprise or Ad Hoc Distribution" (Ad Hoc is what I'm doing).
  6. IMPORTANT – In the next step of the Wizard, "Choose a profile to sign with", change the "Provisioning Profile" pop-up menu to your fresh one rather than the old default one you used previously. This pop-up is why we named the .mobileprovision file with a date and the words "Ad Hoc", to make selection easier here.
  7. Click "Export" button.
    Wait a couple minutes for a Save As dialog to appear suggesting a ".ipa" file name extension.
  8. In that Export save dialog, I create a new folder named with the date and purpose. Save into that staging folder.
Done with the build process. Now it is time to upload to TestFlight for distribution to our users.

Upload To TestFlight
  1. Login to TestFlightApp.com.
  2. Click the main tab for "Dashboard".
  3. Click the first large green button, "Upload a Build".
  4. In the page that appears, write your Release Notes, indicating a focus for your testers’ work.
  5. Drag-and-drop the .ipa file from your staging folder to the dashed-line box on the same web page, and click green "Upload" button.
  6. Wait patiently for your app to upload.


UUID Converter For Vaadin

Here is a class to convert a java.util.UUID object to String for use in Vaadin. Originally written by a team member at Vaadin, I modified their source code to output lowercase hexadecimal characters as required by the UUID spec.

I filed a feature request ticket with Vaadin to bundle such a class with Vaadin.

Source code…

import com.vaadin.data.util.converter.Converter;
import java.util.Locale;
import java.util.UUID;

/**
 * Converts between a java.util.UUID model value and its String presentation.
 *
 * Modified by Basil Bourque to ensure output of hex string is in lowercase as required by the UUID spec. 2014-08.
 *
 * @author petter@vaadin.com
 */
public class UUIDToStringConverter implements Converter<String , UUID> {

    private static final String NULL_STRING = "(none)";

    @Override
    public UUID convertToModel ( String value , Class<? extends UUID> targetType , Locale locale ) throws ConversionException {
        try {
            return value == null || value.isEmpty() || value.equals( NULL_STRING ) ? null : UUID.fromString( value );
        } catch ( IllegalArgumentException ex ) {
            throw new ConversionException( ex );
        }
    }

    @Override
    public String convertToPresentation ( UUID value , Class<? extends String> targetType , Locale locale ) throws ConversionException {
        // The UUID spec *requires* hex-string output to be lowercase. Must tolerate uppercase for input.
        return value == null ? NULL_STRING : value.toString().toLowerCase();
    }

    @Override
    public Class<UUID> getModelType () {
        return UUID.class;
    }

    @Override
    public Class<String> getPresentationType () {
        return String.class;
    }
}



Installing Postgres 9.4

Here's a reminder checklist of the steps I take when installing successive beta versions (1, 2, and 3, so far) of Postgres 9.4 on my Macs.


The Postgres support company EnterpriseDB graciously supplies installers for Mac OS X as a courtesy to the community. You may reach their site via the Download page of the usual Postgres site. For beta versions, look for the paragraph labeled "Beta/RC Releases and development snapshots (unstable)", and find the link to click. On the next page, look for the link near the text "offsite link". Currently this takes you to the Early Experience page of EnterpriseDB. Click a link to download a DMG file named something like "postgresql-9.4.0-beta3-osx.dmg".


The Postgres superuser "postgres" is already created by the installer.

Next we need to create a not-quite-so-super user. This is done in "Login Roles" in pgAdmin, not the "Group Roles" list. Choose a name and password for this user, and write it down. Then create this user in pgAdmin using a dialog, or run the following SQL. After creation, refresh pgAdmin, and context-click the user to choose "Properties", where you can define a password on the "Definition" tab.

CREATE ROLE your_admin_user_name_goes_here LOGIN;

Next, create a group role to be used by your application. Let's imagine your app is named "Example" in general and "example_" in Postgres. We move attention from "Login Roles" to the "Group Roles" list in pgAdmin. Again, do this in the pgAdmin wizard-like dialog or use this SQL.

CREATE ROLE example_app_role_;

As done above for the admin user, context-click the role to choose "Properties", where you can define a password on the "Definition" tab. And, of course, you are writing down these passwords.

Now create a user to be assigned to that role. Again, use either dialog or this SQL:

CREATE ROLE example_app_ LOGIN;
GRANT example_app_role_ TO example_app_;
COMMENT ON ROLE example_app_ IS 'For connections from our Example app.';


As I am installing a succession of beta versions, I already have a backup of my desired database. I used pgAdmin's "Backup" feature to create a .backup file. That chore is described in my previous blog entry.

That backup contains just about every aspect of my database, except one: Users & Passwords. That is why we defined our admin user, app role, and app user in those steps above. Those users and roles must be in place for the restore feature to work. The restore references ownership of various objects by those users/roles.

So now I want to restore that database to my new Postgres. No go. The restore process cannot create the database. You must create the database manually, such as in pgAdmin. Use the same name, but take no further steps.

Context-click on the new database to choose the Restore feature. In the dialog, choose the character encoding to match the original. Not sure if this is required, but probably. Use the button to choose the .backup file to be imported. Finally, click the main button to execute the restore. The restoration may take a while. Eventually look to the "Messages" tab of the window to see the progress and completion. Look at the bottom of that report to see if any errors or issues arose.


Now we have users defined and we have a database restored. Now we must combine them. The app role we defined must be given permission to work with that database. Execute SQL such as the following, after you have done your homework to study such permissions.
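As a sketch, assuming the role name from the steps above and a placeholder database name (adjust the privilege list after doing that homework):

```sql
GRANT CONNECT ON DATABASE your_database_name_goes_here TO example_app_role_;
GRANT USAGE ON SCHEMA public TO example_app_role_;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO example_app_role_;
GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA public TO example_app_role_;
```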



That's it. Your database should be back in order. Tables and data should be there. Type "table my_table_name_;" in the SQL window to see all rows, as a quick test. Any functions, domains, etc. you defined should be intact.

Alternative: pg_upgrade

Instead of this backup-destroy-restore process, you can go another route. Postgres has a nifty pg_upgrade feature for doing major upgrades in place. I've no experience with this yet.


Uninstalling Postgres 9.4

My reminder checklist for uninstalling Postgres 9.4 beta versions. These instructions assume the use of the Postgres installers provided to the community as a courtesy by EnterpriseDB. For more info, read this answer in StackOverflow.com.


Backup First

First, backup. Use the pgAdmin app to select the desired database(s). On each database, context-click to choose the Backup command. Go with the default options, including "Custom" format, which is a strange name for the native binary format. Choosing "Plain Text" creates SQL statements, which is interesting but verbose and slow. The only setting you must touch is clicking the button to choose a folder and specify a desired name for the backup file. Include the ".backup" extension yourself. While most Mac apps are built to add an extension, pgAdmin does not.

Run Uninstaller App

In the root folder (not your user home folder), look in the Library folder to find the PostgreSQL folder. So that would be the /Library/PostgreSQL/ path. In there find one or more versions of Postgres. Within a version, find the app named uninstall-postgresql. Run that app and supply your system password.

Delete Config File

Delete this file: /etc/postgres-reg.ini

Delete User

The installer created a Unix user on your Mac named 'postgres'. If eradicating Postgres, you may want to delete that user account. For re-installing new beta versions of Postgres, I don't bother. The installer seems to tolerate that extant account.

Delete Apps

Check your usual Applications folder. If the PostgreSQL folder remains there, delete it.

Delete pgAdmin Preferences

If you used the pgAdmin app for administering your databases, its preferences file remains. No big deal. I don't know where it lives. Perhaps the Google would tell you.

No Longer Used

In the old days you would delete the file /etc/sysctl.conf. I do not find that file with Postgres 9.4. I suspect the reason is that 9.4 changed dramatically. Previously a Unix setting was needed on your Mac to enlarge shared buffers. Memory for the database cache is now done differently, so that setting is no longer needed. And therefore that configuration file is no longer needed. I have not confirmed this theory, just a guess on my part.


Track Date-Time Of Row Creation & Modification In Postgres

Here is some example code for using Postgres to automatically track when a row is added to a table and when a row is modified.

Solution requires three steps.

STEP 1 — Columns

First, create a column of type `TIMESTAMP WITH TIME ZONE`. Do not abbreviate to `TIMESTAMP` as that is defined by the SQL spec as `TIMESTAMP WITHOUT TIME ZONE` which is almost certainly not what you want, as explained here by Postgres expert David E. Wheeler.

Name the column exactly the same on each table you want updated. To track creation and mod date-times, I use the names:
  • row_created_
  • row_modified_
In my own names, I avoid abbreviations. Also, I always include a trailing underscore to avoid collision with reserved keywords, as suggested by the SQL spec. 

Those columns are marked `NOT NULL`. Each has a `DEFAULT` of the current moment, set by calling `CLOCK_TIMESTAMP()`. Note that Postgres has three kinds of "now": (1) The actual current moment read from the computer’s clock, (2) When the statement started execution, (3) When the transaction started execution. One could arguably choose any of those three for their own creation and mod values.
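Those three kinds of "now" each have a built-in function:

```sql
SELECT clock_timestamp();        -- (1) Actual current moment, read fresh from the clock.
SELECT statement_timestamp();    -- (2) When the current statement started execution.
SELECT transaction_timestamp();  -- (3) When the current transaction started; same as NOW().
```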

You may or may not want a default value for your `row_modified_` column. Some people argue for the precision of the semantics that a new record has not yet been modified. My counter-argument is two-fold: (a) That means allowing NULL values, and I am of the camp believing NULL to be the work of the devil, (b) In my experience, I rarely look at creation date-time, but just scan the mod column where I then find missing values (nulls) to be distracting/confusing.

STEP 2 — Function

Secondly, define a function to generically update any table as long as that table has a column named exactly as we expect.

A "function" is an old-fashioned name for a chunk of server-side code to be executed at run-time. We define this function as a database object, designating a name and so on, just like a table or column.

To write a function, we need a programming language more powerful/flexible than SQL. Postgres is capable of running any number of programming languages on the server side, including Java, Perl, Python, and so on. But one language was created expressly for use within Postgres: PL/pgSQL. This language is nearly always included with any Postgres installation ("cluster" in Postgres lingo) whereas the other languages may not be installed by default. Note how our code below declares the language of the function.

The keyword `NEW` provides the generic ability we need to address any table. When the trigger causes the function to run, `NEW` represents the row being updated, in whichever table fired the trigger.

Our function calls another function, one of many date-time functions built into Postgres. As mentioned above, these vary in which kind of "now" they return; we want `CLOCK_TIMESTAMP()`, the actual current moment.

STEP 3 — Trigger

To run that function, we must define a trigger. A trigger is a rule living on the Postgres server that says a function should be run upon certain events happening. In our case, the event we care about is when a record is being modified (`UPDATE` in SQL terminology).

Postgres allows a trigger to run before the row in the actual database is affected by an event operation, or after the event operation. In our case we want to run the trigger before the record is actually updated. Running our trigger after an update would cause an endless loop as our act of setting the current date-time on the `row_modified_` column would be another update which would necessitate our trigger running again, and again, and again forever.

Example Code

Here is some example code showing the above three steps.

All three steps can be rolled into a single SQL operation inside a transaction. Note the `BEGIN` and `COMMIT` for the transaction boundaries.


BEGIN;

ALTER TABLE customer_
   ADD COLUMN row_created_ TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT clock_timestamp();

ALTER TABLE customer_
   ADD COLUMN row_modified_ TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT clock_timestamp();

ALTER TABLE invoice_
   ADD COLUMN row_created_ TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT clock_timestamp();

ALTER TABLE invoice_
   ADD COLUMN row_modified_ TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT clock_timestamp();

CREATE OR REPLACE FUNCTION update_row_modified_function_()
RETURNS TRIGGER
AS
$$
BEGIN
    -- ASSUMES the table has a column named exactly "row_modified_".
    -- Fetch date-time of actual current moment from clock, rather than start of statement or start of transaction.
    NEW.row_modified_ = clock_timestamp();
    RETURN NEW;
END;
$$
LANGUAGE 'plpgsql';

CREATE TRIGGER row_mod_on_customer_trigger_
BEFORE UPDATE
ON customer_
FOR EACH ROW
EXECUTE PROCEDURE update_row_modified_function_();

CREATE TRIGGER row_mod_on_invoice_trigger_
BEFORE UPDATE
ON invoice_
FOR EACH ROW
EXECUTE PROCEDURE update_row_modified_function_();

COMMIT;



Homework For Tim Cook

After many frustrating hours wasted on many interactions with Apple for submitting an iOS app to the App Store, I have this wish:

Lock Tim Cook in a room alone with nothing but:

  • A cup of coffee.
  • A MacBook running:
    • Xcode with a completely built iOS app.
    • Safari with a pair of windows for:
      • The iTunesConnect site.
      • The developer.apple.com site. 
  • A relief bucket.

Do not let him leave the room until the app has been successfully submitted to the App Store.


Postgres User For App

Installing Postgres means creating a new operating-system (Unix) user, by default named "postgres".

At the same time a superuser is created within the Postgres environment by the same name. This superuser can do anything, including dropping a table and even deleting an entire database (catalog).

Postgres experts commonly suggest that you create a new user with most but not all of the powers of the superuser. Creation and deletion of databases should be omitted. This is the basic administrator user that you typically use in day-to-day work. This admin user is what you usually use as the login user in pgAdmin or your other admin tools.

When developing an app, the data-access layer will need to connect to the Postgres database as a user. Again, experts commonly suggest you create a Postgres user for this purpose. The app-user normally should not have the power to create or delete tables, schemas, or databases. Even some individual tables may be read-only for this user, without powers to insert, update, or delete.

You may even want to create multiple app-users, each with different powers depending on what parts of the app will be engaged by the human user. For example, bookkeepers may have read-write access to tables that salespeople do not. You can enforce this access at the database engine (Postgres) as well as at your app (ex: Java & Vaadin).

Your app may be calling functions, such as those of the UUID-OSSP library. Those functions are protected, and you must grant permission to those as well.

For a basic app, the app-user might have CRUD access to all the tables and the functions. Here is the SQL code you must run after adding a table or function to grant powers to your app user. The code assumes you used the default schema named public, so modify as needed.
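A sketch of such grants, assuming a role named example_app_role_ and a table named some_table_ (hypothetical names), plus a function from the uuid-ossp extension:

```sql
-- After adding a table:
GRANT SELECT, INSERT, UPDATE, DELETE ON TABLE some_table_ TO example_app_role_;

-- After installing a function, such as one from uuid-ossp:
GRANT EXECUTE ON FUNCTION uuid_generate_v1() TO example_app_role_;
```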



NetBeans Debugger – Show Value as 'toString'

NetBeans 8 (and earlier) has an important debugging feature hidden away: Show a variable’s value as rendered by its own toString method.

You can expose an additional column for this value in the debugger’s Variables pane. Notice the orange splotch icon tucked away by itself in the upper-right corner. Click that icon to present a Change Visible Columns dialog. Check the checkbox labeled String Value: String representation of the value.

Of course this feature requires safe-and-sane implementations of toString method on exposed objects. The risk of misbehaved implementations is presumably the reason this column is not exposed by default.

Screen shot of "Change Visible Columns" dialog box


Example Use of Java Try-With-Resource For JDBC

Here is a nice nugget of example code for how to use a JDBC PreparedStatement with the try-with-resource feature in Java. This feature automatically closes resources even if any exceptions are thrown.

Note how for a PreparedStatement we must nest one try-with-resource inside another. An exception thrown within the inner one propagates out to be caught by the outer one's catch clause.

This example is taken from this answer in StackOverflow. I am posting here for my copy-paste convenience.

public List<User> getUser(int userId) {
    String sql = "SELECT id, username FROM users WHERE id = ?";
    List<User> users = new ArrayList<>();
    try (Connection con = DriverManager.getConnection(myConnectionURL);
         PreparedStatement ps = con.prepareStatement(sql);) {
        ps.setInt(1, userId);
        try (ResultSet rs = ps.executeQuery();) {
            while (rs.next()) {
                users.add(new User(rs.getInt("id"), rs.getString("username")));
            }
        }
    } catch (SQLException e) {
        e.printStackTrace();
    }
    return users;
}

Trailing Semicolon

And speaking of copy-paste, I can also share a small but important fact: Note the trailing semicolon on the second line of the outer "try". Two statements within the try, each terminated by a semicolon. Early drafts of this feature forbade the trailing semicolon and many folks mistakenly believe that is still the case. But in the final release the Java team realized the convenience of copy-pasting lines without having to remember to remove (or add) the trailing semicolon statement terminator. I suggest always including that optional semicolon.


Quick Start to Logging with SLF4J and Logback for a Maven-based Project

I followed this informative but slightly out-of-date article, How to setup SLF4J and LOGBack in a web app - fast, to get started with logging in my Vaadin web app. Here is my description of the same steps using NetBeans 8 with Java 8 hooked up to Tomcat 8 on a Mac mini running Mavericks.

First create your Maven-based project. In my case, I'm using version 1.1.1 of the Vaadin Plugin for NetBeans to create a new Vaadin 7.1 project.

Add the logging façade library, SLF4J.
  1. In the NetBeans project navigator pane, context-click on the Dependencies item.
  2. Choose Add Dependency.
  3. Type: slf4j-api
  4. Open the org.slf4j : slf4j-api item to choose the latest version.
I find that the repository listing in NetBeans is consistently inconsistent, in other words, psycho-crazy. Close that dialog, repeat the same steps, and get a different list of version numbers. Sometimes you see later versions, sometimes you see only earlier versions. So check the web site for the desired dependency (SLF4J in this case) to determine the true latest version. Repeat that dialog a few times until it randomly decides to show you a version number close to the true latest. Choose that item. Later you can edit your "pom.xml" to the true latest.

In a fashion similar to SLF4J, let's add Logback. Logback is a direct implementation of that SLF4J façade. You can use nearly any other Java-based logging framework in conjunction with an adapter. Logback needs no adapter. Logback is the successor to Log4J, both of which were created by the same man.

We need two jars for Logback, "classic" and "core". However, adding a dependency for "classic" will automatically get us "core".
  1. In the NetBeans project navigator pane, context-click on the Dependencies item.
  2. Choose Add Dependency.
  3. Type: logback-classic
  4. Open the ch.qos.logback : logback-classic item.
  5. Choose the latest version offered.
Again, you'll probably get a not-quite-right list of versions. Take the latest offered, and update later.
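For reference, the resulting dependency elements in "pom.xml" look something like this (the version numbers here are merely examples from that era; substitute the true latest):

```xml
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>1.7.7</version>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.1.2</version>
</dependency>
```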

Save all your files, and do a clean-and-build of your project (Hammer & Broom icon) to get Maven to do its duty. You should find in the project navigator pane a new item Other Sources. Expand that item to find a src/main/resources item.
  1. Context-click the src/main/resources item to create a new XML file.
  2. Name the new XML file: logback.xml
  3. Into that file, paste the XML text seen below.
  4. Change the XML text, replacing "com.example" with your own project’s top package.
<?xml version="1.0" encoding="UTF-8"?>
<configuration>

    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <logger name="com.example" level="TRACE"/>

    <root level="debug">
        <appender-ref ref="STDOUT" />
    </root>

</configuration>
This XML is different from that older article's, having replaced the deprecated layout with an encoder. This XML configures Logback to send your logging messages to the NetBeans console. You'll need to make alterations for use when deployed to production, but this should get you started in development.

Now try it out. In your Vaadin app, locate the MyVaadinUI.java file that drives your app.
  1. Add an import: import org.slf4j.*;
  2. At the top of your class definition, add the line:
    static final Logger LOG = LoggerFactory.getLogger(MyVaadinUI.class);
  3. In the init method of this class, add the following code to see if logging works.
        LOG.trace("Yogi - Logging Test - Trace");
        LOG.debug("Yogi - Logging Test - Debug");
        LOG.info("Yogi - Logging Test - Info");
        LOG.warn("Yogi - Logging Test - Warn");
        LOG.error("Yogi - Logging Test - Error");

Run your Vaadin app. You may need to do a clean-and-build (Hammer & Broom icon). On one of the tabs in the NetBeans "Output" console pane you should see your messages.

To update the version numbers of SLF4J and Logback, look in the NetBeans project navigator pane. Expand the Project Files item to locate and open "pom.xml" file. Search for "slf4j" and "logback" and update each one’s version tag with the number you know to be current. Saving the XML file may cause Maven to do its duty. If not, try a clean-and-build.

Caveat: I am a Maven and SLF4J and Logback triple newbie. The above steps worked for me, but I cannot say that I understand them fully. Follow these steps at your own risk. Backup your project first.


Simple Vaadin Charts Example

While watching the video demo in this Vaadin blog post, about the new Vaadin plugin 1.1.x for the NetBeans 7 IDE, I noticed this very simple example being done with Vaadin Charts 1.1.x. Just 3 lines of code.

Vaadin Charts can be a bit overwhelming to approach because of its huge power and flexibility. So I was glad to see, and try, this little "Hello World" example.

NOTE: Vaadin Charts is a commercial product, requiring a paid license. A 30-day free trial is available.


// Chart
Chart chart = new Chart();
chart.getConfiguration().addSeries( new ListSeries( 1, 2, 3 ) );
layout.addComponent( chart );

You can simply add that code to the default app created by the Vaadin Plugin, and run.

See this YouTube video for another demo of Vaadin Charts, from their webinar.


Getting To Know "Blocks" in Objective-C

Blocks, also known as closures, are a programming/compiler trick to treat a chunk of code as an object. That chunk of code can be passed around, stored in collections, and executed now or later.

Apple has added full support for Blocks in recent releases of both Xcode versions 4 and 5. Apparently support for blocks was a major motivation for Apple to switch from the old GCC compiler technology to the modern LLVM/Clang world.

Blocks bring a new funky syntax. While truly grokking blocks may strain your brain, simply using them is not so difficult. Basically they replace some uses of Delegation and other callback techniques.

A new introduction and tutorial, Introduction to Objective-C Blocks, just appeared. This is the best written into on this topic that I've seen. The Wikipedia page on Blocks is also worth a look, as is the sibling page on Closure. Yet another good source of info is Part 2 of How to Use Blocks in iOS 5 Tutorial on the RayWenderlich.com site.