Kukai, the Tezos Wallet. Step-by-step guide to Setup and Offline Signing.



In this post I’d like to tell you about some features of my currently preferred Tezos wallet – Kukai. I’ll tell you why I like it, and I’ll show you how to use Kukai’s unique Offline Signing feature.

Why Kukai?

Let’s start with why you might want to use this wallet. I like it for the following reasons:

Works Everywhere

Kukai provides native clients for Linux, Mac and Windows, and a web client that can be conveniently accessed from your browser from anywhere with an Internet connection.

Private Keys Never Leave Your Device

The private keys are stored in a local file on your computer (or in your browser’s local storage while you use the web client), but your keys are NEVER sent over the Internet. The local wallet file containing all the sensitive data is made easily accessible to you, so that you decide how you wish to manage the security of your private key. Furthermore, the sensitive data inside this file is encrypted with a password that hopefully exists only in your head.

Offline Signing

This is the most interesting and, so far, unique feature in Kukai. Offline Signing is a really simple but powerful idea that provides the highest level of security. If you set it up correctly, the security can be as good as a hardware wallet. Using this feature is optional, and it will be the focus of this guide.

Why Offline Signing?

It might be helpful to talk about why Offline Signing is something you might consider using.

The idea is simple – any Internet-connected computer is at risk of being hacked. Period. Even if your computer were perfectly secured, its user never can be ;).

So even though the wallet never sends your private keys over the Internet on purpose, attackers can still get them from you with any number of techniques. Here are just a couple of the most common examples:

  • Link manipulation: The attacker might fool you into clicking on a malicious wallet link that is slightly different from the real link. It will take you to the attacker’s website, which will look exactly like your wallet. If you don’t notice this in time, you’ll end up typing your passwords and providing your wallet file to the attacker.
  • Virus: Your computer might be infected with a key-logger virus, or a browser extension that records everything you type and steals specific files from your computer. Attackers will eventually get both your wallet file and your password.


The ONLY sure way to be safe is to never store or access your private keys on an Internet-connected computer at all.

This is exactly where Kukai’s Offline Signing feature comes in. The idea is to use one computer to create and send transactions to the Tezos blockchain, and another separate, disconnected computer to sign these transactions with your private key.

Setting up this system is very easy, as long as you have another device to dedicate to this process.


Offline Signing requires 2 computers/devices – one connected to the Internet, and one offline device used exclusively for signing Tezos transactions. I’ll designate operations on the Internet-connected machine (aka “Workstation”) with green text, and operations on the signing machine (aka “Signer”) with red text.

It is completely up to you what kind of devices and operating systems you want to use on the Workstation and Signer. My personal preference for the Signer’s OS is Lubuntu (https://lubuntu.net/), because it is very quick and easy to install and configure.

If you do end up using a Linux distro on the Signer, please make sure you have this package installed while still connected to the Internet during your install:

$ sudo apt-get install libgconf-2-4

This library is required to run Kukai’s native Linux client.

After you are done performing the initial OS install on the Signer, and possibly installing the latest updates from the Internet, disconnect Signer from the Internet. The Signer is now purely an offline machine.

Installing Kukai on Signer Machine

On the Workstation

  • Download Kukai stand-alone client from https://github.com/kukai-wallet/kukai/releases. Select the build that matches your OS.
  • Verify the checksum. This is how you ensure that nobody is messing with you and that the wallet has not been modified in any way en-route to your computer.
$ sha256sum kukai_linux_x64_1.0.3.tar.gz
012cf59820c84a7bd4f61414d02ad8196e8f4e317fa7905e81d59efc82da6901 kukai_linux_x64_1.0.3.tar.gz
  • Compare that number to the number on the download page. It must match exactly!
  • Copy kukai_linux_x64_1.0.3.tar.gz to a USB stick and place it somewhere on your Signer machine.
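If you’d rather not compare two long hex strings by eye, sha256sum can do the comparison for you with its checklist mode. Here’s a minimal sketch – the verify_sha256 helper is my own invention, and the throwaway demo file below stands in for the real tarball:

```shell
#!/bin/sh
# Hypothetical helper: check a download against a published SHA-256 hash.
verify_sha256() {
    # sha256sum -c expects lines of the form "<hash>  <filename>" (two spaces)
    echo "$2  $1" | sha256sum -c -
}

# Demo on a throwaway file; in real use, pass the Kukai tarball and the
# hash copied from the release page instead.
printf 'demo\n' > /tmp/demo.tar.gz
expected=$(sha256sum /tmp/demo.tar.gz | awk '{print $1}')
verify_sha256 /tmp/demo.tar.gz "$expected"   # prints "/tmp/demo.tar.gz: OK"
```

Any output other than “OK” (or a non-zero exit code) means the file is not what the release page published.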

On Signer

  • Extract:
    $ tar zxvf kukai_linux_x64_1.0.3.tar.gz
  • And run:
    $ cd kukai_linux_x64_1.0.3/
    $ ./kukai


00 Intro


Creating or Importing a Wallet

When Kukai starts, you’ll be presented with different options to get your wallet started. In this guide I’ll assume that you’ll be importing a wallet that was created during the Tezos ICO, but other scenarios will be very similar.

  • If you have not yet activated your Tezos ICO account, do so now by selecting Activate and providing your ‘public key hash’ and your ‘activation code’ (obtained from here: https://verification.tezos.com/).

On Signer

  • Once activated, go to Import wallet -> Retrieve wallet and provide the full wallet information. After you do that correctly, Kukai will ask you to provide an additional password to encrypt your Kukai wallet, which contains your private key (among other things). This means that if someone gets a hold of your Kukai wallet file, it is still useless to them without this password. Please make sure that this password exists only in your head.

Feel free to make this password as long as you need, because humans are very bad at remembering short cryptic passwords like ‘s7ya48u1EE’, and computers are very good at cracking them. Instead, try something like ‘correct;horse;battery’ or ‘enlightened:papal:shrimp’. You’ll never forget it, and it’s super-hard to brute-force or guess a password like that.

You’ll be presented with an Overview screen for your wallet:


02 Overview_out


Exporting Your Wallet

On Signer

The next thing to do is to export 2 versions of the wallet you just created. Go to the Backup menu in Kukai and export:

  1. The Full Wallet. This file will be called something like ‘wallet.tez’, and will contain your public and private keys. Feel free to rename it to something better. This wallet file can be used to gain full access to your tezzies, so be careful with it! Save this file somewhere on the Signer machine and maybe even back it up somewhere else for safety. But don’t stress too much – the private key in this file is encrypted with the password you selected earlier, so the file by itself is still useless without it.
  2. The View-only wallet. You’ll need to enter your wallet password and click on the Generate button. This file allows you to see your tezzies, but not actually access them, because your private key is not in this file. If someone gets a hold of it somehow, all they get is the ability to see how many tezzies you have, and what you have done with them in the past. This is the file we’ll use on the Internet-connected machine (Workstation).

Take the ‘view-only_wallet.tez’, put it on a USB stick and take it to your Workstation machine.

Import View-only wallet

Now that we have our view-only wallet, we can safely use it in Kukai web-client on the connected Workstation. It is convenient, and we no longer have to worry about getting hacked, since our private key is not stored anywhere on the connected Workstation.

On The Workstation:

  • Go to https://kukai.app/
  • Go to Import Wallet -> Import wallet from File and select the ‘view-only_wallet.tez’ file we brought over from the Signer machine.

Note that the Overview screen contains all of the account info that we saw on the disconnected Signer machine, and all of the operations like Send, Receive and New Account are still available, but the wallet is marked as “View-only”:


03 View Only Wallet_edit


And that’s it! Your setup is now complete.

Slinging Tezzies

Ok, let’s move some tezzies around. In this example I’ll move 500 XTZ between my own accounts. Let’s say from tz1bGHcWHMLtn7vFsJMoxri226QebeGC8zcd to KT1DQwmnBU6UoopeejTNQQDcbqeGxSVUxgMq. See the picture above for reference.

On The Workstation

  • Go to Overview -> Send
  • From: tz1bGHcWHMLtn7vFsJMoxri226QebeGC8zcd
  • To Address: KT1DQwmnBU6UoopeejTNQQDcbqeGxSVUxgMq
  • Amount: 500
  • Click Preview -> Confirm
  • You should get a message that says: Your unsigned transaction has been created successfully




  • Download it. Let’s give it a name like ‘demo1.tzop’
  • Put ‘demo1.tzop’ on a USB stick and take it to the Signer machine.

On The Signer

  • Run the native Kukai client (if not already running):
    $ cd kukai_linux_x64_1.0.3/
    $ ./kukai
  • Your Full Wallet should already be loaded here, but if not, just go to Import wallet -> Choose File again and select the full wallet file you saved earlier.
  • Go to Offline Signing -> Sign operation (offline) -> Choose File, and select the unsigned operation file (‘demo1.tzop’).
  • Verify that what you are about to sign with your private key is correct and awesome:


07 Sign Op


  • Type in your wallet password in the Password field and click Sign.
  • If all went well, you’ll see a success message saying: ‘Operation successfully signed!’
  • Download the signed operation file. Call it something like: ‘demo1_signed.tzop’.
  • Put it on the USB stick and take it to the Workstation.

On the Workstation

  • In Kukai, go to Offline signing -> Broadcast -> Choose file and select ‘demo1_signed.tzop’ from the USB stick.
  • You can see what you are about to broadcast by clicking Yes on “Would you like to decode the operation?”


09 Broadcast


  • Click Broadcast. You’ll be provided with the Operation Hash for your transaction.

And you are done!

You can go to the Account menu to see the transaction. Or you can use the Block Explorer to look at it:

https://tzscan.io/<Operation Hash>

Final Word

This is clearly a somewhat lengthy process, but some amount of inconvenience is always the trade-off for extra security.

If you do lots of small operations in a day, you could optimize this workflow by creating another Full Wallet on the connected Workstation with a small amount of tezzies in it – for convenient day-to-day tasks – and keep the majority of your tezzies in the offline Signer wallet for any large transfers. That way, if your Workstation does get compromised, you only lose a small amount of tezzies instead of everything.

I hope this guide was helpful to someone.

Guide to Delegating on Tezos Betanet



This guide is written for people that participated in the Tezos ICO, and who now wish to claim their Tezzies (XTZ) and then use them for delegation.

First of all, you need to get your Activation Code from Tezos. Please follow the instructions here: https://verification.tezos.com/

Although delegation is not mandatory, it is an easy way to passively make more XTZ with the ones you already have. If you don’t delegate, you won’t receive a share of the new XTZ created by the Delegated Proof of Stake system that Tezos runs on. This will deflate the value of your tokens compared to users who do participate.

Also, if you happen to own more than a single roll of Tezzies (10,000ꜩ), you are likely more interested in doing your own baking, rather than delegating to someone else. This guide will still be useful to you for the initial setup though.

There are 2 ways you can go about claiming and delegating your Tezzies:

Option 1: Using TezBox Wallet

The easiest way is to use a wallet, such as https://tezbox.com/. This is a very user-friendly option, but it requires you to reveal your private key to the service. If you don’t feel that trusting, read on about how to do everything yourself, which is really simple if you follow this guide.

Option 2: Running Your Own Tezos Node

This is the option this guide focuses on. The guide is written for people on Linux or Mac, but if you are on Windows, you can also follow along by installing Git Bash first (https://git-scm.com/downloads). This gives you both Git and a command line where you can type in all the commands in this guide.

Install Docker

Follow the instructions here to install Docker for your OS: https://docs.docker.com/install/

Download the Tezos Project

If you have Git installed, you can clone tezos like this:

$ git clone https://gitlab.com/tezos/tezos.git
$ cd tezos
$ git checkout betanet

If you don’t have Git, go to https://gitlab.com/tezos/tezos.git, click on the branch selector drop-down that currently says “master”, and change it to “betanet”. Now use the Download button in the upper right-hand corner to download the code.

Open Port 9732 In Your Firewall

This port is used by the Tezos network protocol to connect to peers, so it needs to be open. The details of this differ depending on your setup, so they are left as an exercise for the reader. Just make sure that this port is open and routed to the box you are going to be running the Tezos Node on.
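As one concrete example, on an Ubuntu box using ufw the rule could look like this. This is a sketch only – adapt it to whatever firewall you actually run, and remember that any router in front of the box still needs a port-forward:

```shell
# Allow inbound Tezos p2p traffic (assumes the ufw firewall; run as root).
sudo ufw allow 9732/tcp comment 'Tezos p2p'
# Confirm the rule is in place.
sudo ufw status | grep 9732
```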

Run a Tezos Node

Make sure that you are in the directory where you placed the Tezos code and run

$ cd tezos/scripts/

There’s a script here called betanet.sh. We’ll use this script to interact with the Tezos node running inside a Docker container.

Let’s start the node now:

$ ./betanet.sh node start

This command will do a lot of things:

  1. Download the Tezos Docker containers.
  2. Use Docker Compose to deploy the Node, the Baker, the Endorser and the Accuser as services. We are only going to use the Node in this guide, but those other services are now also ready to go, should you choose to try baking yourself.
  3. Start the Node
  4. Generate a new network identity
  5. Discover and connect to network peers
  6. Start downloading the entire blockchain

This last step will take a long time! You will just need to wait. You can monitor the progress in a couple of ways. You can see the log output from the node like this:

$ docker ps -q
$ docker logs 7c04ab2f4c5e --tail 40 -f

The first command lists the IDs of the running containers; use the ID of the Tezos node container (7c04ab2f4c5e in this example) in the second command, which attaches to that container’s STDOUT and STDERR. You will now get a lot of scrolling info telling you what the node is doing.

You can see the network connections your node has made like this:

$ ./betanet.sh client rpc get /network/connections

You can also monitor how much of the blockchain the node has downloaded so far:

$ ./betanet.sh head

This will print a lot of output, showing you the information about the top block the node has so far. The interesting part here is the “timestamp” field near the top. We can monitor that field like this:

$ watch "./betanet.sh head | grep timestamp"

We need to wait until that “timestamp” catches up with current time.

Do not proceed with the guide until that’s done!

Activate Your Account On The Blockchain

Now that your node is fully synced, we can start to inject changes into the blockchain.

First, let’s create an alias for our public address. This information is found in the wallet you got during the ICO:

$ ./betanet.sh client add address ico_key tz1NQo6LNh4isv8Gavc53EGy5TozLRCAkXzB
$ ./betanet.sh client list known addresses
ico_key: tz1NQo6LNh4isv8Gavc53EGy5TozLRCAkXzB

In this case we chose the name “ico_key”, but you can call it anything you want.

And now the actual activation. <activation_key> is provided to you when you complete the KYC process.

$ ./betanet.sh client activate fundraiser account ico_key with <activation_key>

Node is bootstrapped, ready for injecting operations.
Operation successfully injected in the node.
Operation hash: ooWpYVXe466VC48nwbiFeRR2Djeg4u3CCYkLuSoUfxfeG6TAU1w
Waiting for the operation to be included...
Operation found in block: BKivKRERjTWCWZJAYxADaFeUiA42XjYKkiet6HqNxkDNDATbMbX (pass: 2, offset: 0)
This sequence of operations was run:
Genesis account activation:
Account: tz1NQo6LNh4isv8Gavc53EGy5TozLRCAkXzB
Balance updates:
tz1NQo6LNh4isv8Gavc53EGy5TozLRCAkXzB ... +ꜩ1521

The operation has only been included 0 blocks ago.
We recommend to wait more.
Use command
tezos-client wait for ooWpYVXe466VC48nwbiFeRR2Djeg4u3CCYkLuSoUfxfeG6TAU1w to be included --confirmations 30
and/or an external block explorer.
Account ico_key (tz1NQo6LNh4isv8Gavc53EGy5TozLRCAkXzB) activated with ꜩ1521.

Note that there’s a command given to you in the end:

$ ./betanet.sh client wait for ooWpYVXe466VC48nwbiFeRR2Djeg4u3CCYkLuSoUfxfeG6TAU1w to be included --confirmations 30

If you run that, you’ll get a message every time your transaction is baked into a block, all the way up to 30 blocks.

You can also use the block explorer to monitor that progress. In this example, it would be here: http://tzscan.io/ooWpYVXe466VC48nwbiFeRR2Djeg4u3CCYkLuSoUfxfeG6TAU1w

Import Your Private Key

Now we are ready to access our tezzies. Of course, that will require the private key from the wallet you got during the ICO.

So, import the private key into our node:

$ ./betanet.sh client import fundraiser secret key ico_key

This will ask you some questions, including all the words in the mnemonic in the wallet. Enter all the data it asks for.

Now let’s check our work:

$ ./betanet.sh client show address ico_key -S
Hash: tz1NQo6LNh4isv8Gavc53EGy5TozLRCAkXzB
Public Key: <.........>
Secret Key: encrypted:<.........>

And finally, let’s check the balance in our account:

$ ./betanet.sh client get balance for ico_key
1521 ꜩ

Setting Up Delegation

We are now ready to put our tezzies to work.

The first step is to decide who you are going to delegate your baking to. This list of bakers is an excellent resource to help you make the choice: https://www.mytezosbaker.com/bakers-list/.

Let’s say that we decided to go with Tz Vote: http://tzscan.io/tz1bHzftcTKZMTZgLLtnrXydCm6UEqf4ivca

Let’s create an alias for them:

$ ./betanet.sh client add address Tezos_Vote tz1bHzftcTKZMTZgLLtnrXydCm6UEqf4ivca

Now we create an “originated” smart contract called “ico_key_originated”, managed by the account we activated (called “ico_key” in this guide), and delegated to “Tezos_Vote”. We also transfer all the money from “ico_key” into the new smart contract “ico_key_originated”:

$ ./betanet.sh client originate account ico_key_originated for ico_key transferring 1520.742 from ico_key --delegate Tezos_Vote --fee 0.0

Node is bootstrapped, ready for injecting operations.
Estimated storage: no bytes added
Enter password for encrypted key: 
Operation successfully injected in the node.
Operation hash: ooCj9jGio6oCMksnuZQ5EE42h93VSM3c2hRuc3z4W1XXmyyURpK
Waiting for the operation to be included...
Operation found in block: BLkvov4WBkr4hN4RTNXePRwfgj2wpvu6pUfHzr2cizGZbcXxiTt (pass: 3, offset: 0)
This sequence of operations was run:
Manager signed operations:
From: tz1NQo6LNh4isv8Gavc53EGy5TozLRCAkXzB
Fee to the baker: ꜩ0
Expected counter: 45247
Gas limit: 0
Storage limit: 0 bytes
Revelation of manager public key:
Contract: tz1NQo6LNh4isv8Gavc53EGy5TozLRCAkXzB
Key: edpku7CbCYBFhYw1NfU26sGo7asGsvZcvew1VsygxwHoWr6emY5Cq6
This revelation was successfully applied
Manager signed operations:
From: tz1NQo6LNh4isv8Gavc53EGy5TozLRCAkXzB
Fee to the baker: ꜩ0
Expected counter: 45248
Gas limit: 0
Storage limit: 0 bytes
From: tz1NQo6LNh4isv8Gavc53EGy5TozLRCAkXzB
For: tz1NQo6LNh4isv8Gavc53EGy5TozLRCAkXzB
Credit: ꜩ1520.742
No script (accepts all transactions)
Delegate: tz1bHzftcTKZMTZgLLtnrXydCm6UEqf4ivca
Spendable by the manager
This origination was successfully applied
Originated contracts:
Consumed gas: 0
Balance updates:
tz1NQo6LNh4isv8Gavc53EGy5TozLRCAkXzB ... -ꜩ0.257
tz1NQo6LNh4isv8Gavc53EGy5TozLRCAkXzB ... -ꜩ1520.742
KT1PUFGwJB9qtWfdbzgURni3JykVBycdwwAK ... +ꜩ1520.742

New contract KT1PUFGwJB9qtWfdbzgURni3JykVBycdwwAK originated.
The operation has only been included 0 blocks ago.
We recommend to wait more.
Use command
tezos-client wait for ooCj9jGio6oCMksnuZQ5EE42h93VSM3c2hRuc3z4W1XXmyyURpK to be included --confirmations 30
and/or an external block explorer.
Contract memorized as ico_key_originated.

The above command is certainly confusing. To understand some more details about what happened there, please refer to this excellent article:  http://archive.li/NsPFt (section “How to Delegate and Understanding Implicit and Generated Accounts”)

As with our previous injection, we can either use our node:

$ ./betanet.sh client wait for ooCj9jGio6oCMksnuZQ5EE42h93VSM3c2hRuc3z4W1XXmyyURpK to be included --confirmations 30

Or the Block Explorer: http://tzscan.io/ooCj9jGio6oCMksnuZQ5EE42h93VSM3c2hRuc3z4W1XXmyyURpK

to monitor the progress of our transaction.

There’s an important subtlety to notice here. The balance in my “ico_key” account was 1521ꜩ, yet in the command above I only transferred 1520.742ꜩ. Why is that?

Well, if we try to transfer the entire amount, we get this error:

  Unregistred error:
     { "kind": "temporary",
       "id": "proto.002-PsYLVpVv.gas_exhausted.operation" }

The problem here is that some of our tezzies need to be burned in order to pay for executing the transfer and delegation. In this case the required fee was 0.257ꜩ, which is why I only transferred 1520.742ꜩ.
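The arithmetic behind that number is just the starting balance, minus the burn, minus the 0.001ꜩ that was left behind in the account:

```shell
# starting balance - origination burn - leftover = amount we can transfer
awk 'BEGIN { printf "%.3f\n", 1521 - 0.257 - 0.001 }'   # prints 1520.742
```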

So, let’s check everything now to make sure that the transfer worked, and that the delegate is established:

$ ./betanet.sh client list known contracts
ico_key_originated: KT1PUFGwJB9qtWfdbzgURni3JykVBycdwwAK
Tezos_Vote: tz1bHzftcTKZMTZgLLtnrXydCm6UEqf4ivca
ico_key: tz1NQo6LNh4isv8Gavc53EGy5TozLRCAkXzB

$ ./betanet.sh client get balance for ico_key
0.001 ꜩ

$ ./betanet.sh client get balance for ico_key_originated
1520.742 ꜩ

$ ./betanet.sh client get delegate for ico_key_originated
tz1bHzftcTKZMTZgLLtnrXydCm6UEqf4ivca (known as Tezos_Vote)

And that’s it.

Resources and References

Here’s the list of most useful materials that I used while figuring this out:

Easy Way to Remove DRM Encryption from Kindle Books

Calibre + the DeDRM plugin are the best way to remove DRM from your books and convert them to other formats. The trouble is, these tools are difficult to set up, especially on Linux. It can easily take days of effort to figure out all the details and get everything working.

To address this problem I’ve made a Docker image that automates all of the setup details, so you don’t need to worry about any of it.

Follow the simple instructions here, and regain ownership of your books in minutes (assuming you have Docker already installed):


Can a Truck Boil a Kettle?

This post will deviate from computer-related posts and document a life-hack calculation instead. This question came up when my girlfriend and I were packing for a road trip in our F-150 truck. We were thinking of bringing an electric kettle with us, but we wanted to know if the truck’s battery would be up to the task of boiling the kettle. This post documents the calculations for posterity.

F-150 Battery

Voltage : 12V
Capacity: 72Ah
Cranking Amps: 900A (we can safely draw 900A for 30s, with voltage staying above 7.2V)

The Kettle

Average Power Usage: 2200W
Time to boil: 200s

Battery Usage

Amount of Energy needed: 2200W * (1h * 200s / 3600s) = 122.22Wh
Plus the inefficiency of the inverter (85% efficient): 122.22Wh / 0.85 = 143.79Wh
Amp-hours of battery needed: 143.79Wh / 12V = 11.98Ah
So, boiling a kettle drains: 11.98Ah / 72Ah = 16.6% of battery


Kettle will continuously draw:

11.98Ah / (1h * 200s / 3600s) = 215.69 Amps

If we assume a linear degradation of voltage, then the time to 7.2V (based on the battery’s Cold Cranking Amps (CCA)) at a 215.69A draw is:

30s / (215.69A/900A) = 125.18 seconds

So the battery can supply the necessary current for 125.18 seconds, but we need 200 seconds to boil the kettle. So, even though we only need 16.6% of the battery, the rate of consumption takes the battery voltage below 7.2V for

(200s - 125.18s) = 74.82 seconds


Boiling a kettle uses 16.6% of battery capacity, but for the last 75 seconds the battery is pushed into operating outside of its designed parameters due to the rate of discharge. I don’t know how serious that is. Please comment if you know more! We used the kettle in the truck, and it worked, so this calculation is supported by empirical evidence :).
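For anyone who wants to re-run the numbers, the whole chain of arithmetic fits in one awk sketch (all inputs are the assumptions listed above):

```shell
awk 'BEGIN {
  wh      = 2200 * 200 / 3600   # energy to boil the kettle, in Wh
  wh_batt = wh / 0.85           # add the inverter inefficiency
  ah      = wh_batt / 12        # amp-hours drawn from the 12V battery
  amps    = ah / (200 / 3600)   # average current over the 200s boil
  t_ok    = 30 * 900 / amps     # sustainable seconds, linear CCA model
  printf "draw %.2fA; battery holds up for %.2fs; %.1f%% of 72Ah used\n",
         amps, t_ok, 100 * ah / 72
}'
```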

Setting Up Sublime Text for Javascript Development

Sublime Text is one of the best light-weight text editors in existence right now. It is a favourite of many developers, b/c of its beautiful UI, speed and a diverse ecosystem of extensions. This article explains how to use Sublime Text to create a powerful, yet light-weight Javascript development environment.

At the end of these instructions, you’ll be able to do the following in Sublime:

  • Choose from many beautiful syntax highlight themes for your code, be it Javascript, JSON, HTML, (S)CSS, Vue, etc.
  • Automatically lint and format your code as you type, or on every save.
  • Work with version control using Git from Sublime.
  • Execute Javascript code in current file and get immediate results
  • Transpile Javascript code in current file into ES5

These instructions show the paths and versions as they were on my Linux machine. Please use your common sense to change these appropriately for your environment.

Installing Prerequisite Tools

Install NVM

Follow the instructions here: https://github.com/creationix/nvm#installation-and-update, but it will be something like the following:

$ curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.34.0/install.sh | bash
$ nvm ls-remote --lts
$ nvm install v10.15.3   # <=== selecting the latest LTS here!

Install ESLint

$ npm install -g eslint

Install Babel

$ npm install -g @babel/core @babel/cli @babel/preset-env

Install Prettier

$ npm install -g prettier

Install VueJS

$ npm install -g @vue/cli

Sublime Setup

  • Go to Tools -> Command Palette (Learn to use “Ctrl-Shift-P” – you will use it very frequently, b/c it’s awesome!), Type: ‘Install’, select ‘Install Package Control
  • Repeat this step for all the packages listed below. Press Ctrl-Shift-P, and Type: ‘Install Package‘, and then install the following packages:
      • SublimeLinter
      • Babel
      • Pretty JSON
      • JsPrettier
      • SublimeLinter-eslint
      • ESLint-Formatter
      • Autocomplete Javascript with Method Signature
      • HTML-CSS-JS Prettify
      • Vue Syntax Highlight
      • SideBar Enhancements
      • GitGutter
      • Bracket​Highlighter
      • Markdown Preview
      • MarkdownLivePreview
      • SublimeLinter-java

Package Configuration

Global ESLint Settings

Create ~/.eslintrc.js file. ESLint will look there for settings when it cannot find a local project-specific config file.

module.exports = {
    "env": {
        "node": true,
        "browser": true,
        "es6": true
    },
    "extends": "eslint:recommended",
    "globals": {
        "Atomics": "readonly",
        "SharedArrayBuffer": "readonly"
    },
    "parserOptions": {
        "ecmaVersion": 2018,
        "sourceType": "module"
    },
    "rules": {
        "no-console": "off",
        "indent": ["error", 2]
    }
};

Configure Babel

Go to Preferences -> Package Settings -> Babel -> Settings - Default. Update the path:

{
  "debug": false,
  "use_local_babel": true,
  "node_modules": {
    "linux": "/home/val/.nvm/versions/node/v10.15.3/lib/node_modules/@babel/core/node_modules"
  },
  "options": {}
}

Configure ESLint Formatter

Go to Preferences -> Package Settings -> ESLint Formatter -> Settings. Update the paths:

{
  "node_path": {
    "linux": "/home/val/.nvm/versions/node/v10.15.3/bin/node"
  },
  "eslint_path": {
    "linux": "/home/val/.nvm/versions/node/v10.15.3/bin/eslint"
  },
  // Automatically format when a file is saved.
  "format_on_save": true
}

Configure JsPrettier

Go to Preferences -> Package Settings -> JsPrettier -> Settings - Default. Update the path:

{
  "prettier_cli_path": "/home/val/.nvm/versions/node/v10.15.3/bin/prettier",
  "node_path": "/home/val/.nvm/versions/node/v10.15.3/bin/node",
  "disable_tab_width_auto_detection": true
}

Configure Global Theme

Go to Preferences -> Theme -> select 'Adaptive.sublime-theme'

Configure JS Prettify

Go to Preferences -> Package Settings -> HTML / CSS / JS Prettify -> Plugin Options - Default. Update the path:

        "linux": "/home/val/.nvm/versions/node/v10.15.3/bin/node"

Configure Global Settings

Go to Preferences -> Settings. Add:

    "ensure_newline_at_eof_on_save": true,
    "translate_tabs_to_spaces": true,
    "trim_trailing_white_space_on_save": true

You should now have everything you need for Javascript development. Note that we have installed multiple formatters that do the same thing. This is b/c they are good at different things, and this gives you options. You’ll need to explore the capabilities by opening a source file, selecting some text (or not), and using the Command Palette (Ctrl-Shift-P) to type things like “Format” or “lint” or “Pretty” or “JSON” or “Syntax“, to see what capabilities you have, and which packages you like best for which tasks.


Setting up in-place REPL and Transpile

The following steps will allow you to run any javascript code in the currently open file immediately. You will also be able to transpile the code into ES5 javascript instead of running it.

First, create a Babel config file: ~/sublime-babelrc.js

module.exports = {
    "presets": [
        ["/home/val/.nvm/versions/node/v10.15.3/lib/node_modules/@babel/preset-env", {
            "targets": {
                "ie": 10
            },
            "modules": false
        }]
    ]
};

Go to Tools -> Build System -> New Build System... Paste this into the file and save with a name like "JavaScript Builder.sublime-build":

{
    "selector": "source.js",

    "variants": [
        {
            "name": "Run JavaScript",
            "cmd": ["node", "$file"]
        },
        {
            "name": "Transpile to ES5",
            "cmd": ["babel", "--config-file=/home/val/sublime-babelrc.js", "$file"]
        }
    ]
}

Whenever you are in a JavaScript file, you can now press Ctrl-Shift-B, and select one of your build variants. Ctrl-B will run the last variant you selected.

Guide to Selective Routing Through a VPN

This guide explains how to use a VPN service to selectively route torrent traffic only, leaving all other Internet access on the computer unaffected.

This guide assumes a Linux workstation. Sorry, Windows users, but I have no idea whether something like this is possible on your OS.

Key Concepts

This solution makes use of 2 key concepts:

  1. The Linux kernel supports multiple routing tables (http://linux-ip.net/html/routing-tables.html).
  2. IP packets can be easily marked with a unique marker, and routed to a specific routing table based on that marker.


This implementation is based on creating a dedicated user that runs your torrent software. We then use iptables to mark all packets originating from that user, and a routing rule sends the marked packets to a dedicated routing table whose default route goes through the VPN. All other users remain unaffected.

In my case I decided to use NordVPN (https://nordvpn.com). Any other VPN provider will work, as long as it is possible for you to connect using the amazing OpenVPN software (https://openvpn.net/).

So the first step is to make sure that you can connect to your VPN with OpenVPN. The instructions for NordVPN are found here: https://nordvpn.com/tutorials/linux/openvpn/

After you have that working, proceed to the next step.


First, we need to comment out any authentication options specified in the OpenVPN config file you downloaded from your VPN provider. In my case, I had to comment out the ‘auth-user-pass’ directive, because I didn’t want OpenVPN to ask me for my credentials every time. I want the connection to authenticate automatically, so I opened the config file for my selected server and commented out the line:

#auth-user-pass

We’ll specify a different way to authenticate later.

Next, we need to change a couple of kernel parameters so that we can route the packets the way we want. We’ll need to edit /etc/sysctl.conf and add the following lines:

# Enable IP Forwarding
net.ipv4.ip_forward = 1
# Disable Source Route Path Filtering
net.ipv4.conf.default.rp_filter = 0
# Disable Source Route Path Filtering on all interfaces
net.ipv4.conf.all.rp_filter = 0

and then run:

sudo sysctl -p /etc/sysctl.conf

to activate your new settings.

Next, create a user just for your torrent program. You’ll also want to make sure that your torrent program is always started by this user. The reason for this is that iptables has an “owner” packet matching plugin which matches all outgoing packets belonging to a specific UID. You can use this to put a mark on all packets belonging to that user which can be used by the kernel to route those packets through a specific interface.

In this guide we’ll assume that the user is called ‘torrents‘.
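To guarantee the torrent program is always started by this user, one option is a systemd service with User= set. This is only a sketch: it assumes systemd, assumes the ‘torrents’ user already exists, and uses transmission-daemon purely as an example client; substitute your own.

```ini
# /etc/systemd/system/torrents.service (hypothetical unit name)
[Unit]
Description=Torrent client routed through the VPN via the 'torrents' user
After=network-online.target

[Service]
# All packets from this UID get marked by the iptables rule set up below
User=torrents
ExecStart=/usr/bin/transmission-daemon --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with: sudo systemctl enable --now torrents.service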

Startup Scripts

You are now ready to copy-and-paste some startup scripts that will set everything up for you when your computer boots.

First, the OpenVPN launch script, /etc/openvpn/nord_vpn_start.sh:

openvpn \
    --log-append  /var/log/openvpn/nordvpn-client.log \
    --route-noexec \
    --script-security 2 \
    --up-delay \
    --up /etc/openvpn/nord_vpn_callback_up.sh \
    --auth-user-pass /etc/openvpn/nord_vpn_auth.txt \
    --config /etc/openvpn/ovpn_tcp/ca306.nordvpn.com.tcp.ovpn

Note: The last line is where you specify the OpenVPN config file for the specific VPN server you wish to use.

Next, the shared configuration file /etc/openvpn/nord_vpn_common_setup.sh, which is sourced by the up and cleanup scripts below:



# User defined config
export USER_NAME='torrents'        # VPN will be enabled only for this user
export PACKET_MARKER=3             # Arbitrary packet marker
export ROUTING_TABLE_NUMBER=200    # Arbitrary routing table number

# These environment variables are set by OpenVPN.
# See manpage 'Environmental Variables' section.
export TUN_DEV=$dev
export VPN_LOCAL_IP=$ifconfig_local
export VPN_GATEWAY_IP=$route_vpn_gateway

Next, the up-hook script, /etc/openvpn/nord_vpn_callback_up.sh, which OpenVPN runs once the tunnel is established:



. /etc/openvpn/nord_vpn_common_setup.sh

echo "================================================"
echo "       --- Hooking up to Nord VPN ---"
echo "================================================"
echo "VPN Interface: $TUN_DEV"
echo "VPN Local IP: $VPN_LOCAL_IP"
echo "VPN Gateway IP: $VPN_GATEWAY_IP"

if [ -z "$VPN_GATEWAY_IP" ] ; then
    echo "ERROR: Not all expected parameters were present!\n"
    exit 1
fi

# Just in case we didn't clean up before.
echo "\nCleaning up from previous run..."
ip rule delete fwmark $PACKET_MARKER
ip route flush table $ROUTING_TABLE_NUMBER

# Attach a marker to all packets coming from processes owned by the user
echo "\nInserting iptables rules..."
iptables -t mangle -A OUTPUT -m owner --uid-owner $USER_NAME -j MARK --set-mark $PACKET_MARKER

# Everything that leaves over the VPN's TUN device should have the source address set correctly.
# Apparently some torrent clients mistakenly grab the address from eth0 instead, which makes
# the VPN drop those packets. This corrects any such packets with bad source address.
iptables -t nat -A POSTROUTING -o $TUN_DEV -j SNAT --to-source $VPN_LOCAL_IP

iptables -t mangle -L OUTPUT

iptables -t nat -L POSTROUTING

# Everything that is marked with $PACKET_MARKER should be routed to our custom routing table
echo "\nForwarding all packets marked with $PACKET_MARKER to the new routing table $ROUTING_TABLE_NUMBER..."
ip rule add fwmark $PACKET_MARKER lookup $ROUTING_TABLE_NUMBER
ip rule

# All traffic destined for the LAN should go over eth0, not the VPN
echo "\nRouting LAN packets to eth0..."
ip route add 192.168.1.0/24 dev eth0 table $ROUTING_TABLE_NUMBER  # Substitute your own LAN subnet

# Everything else should be routed via the VPN device
echo "\nRouting all external traffic to the VPN Gateway $VPN_GATEWAY_IP..."
ip route add default via $VPN_GATEWAY_IP dev $TUN_DEV table $ROUTING_TABLE_NUMBER 

if [ $? -ne 0 ]; then
    echo "ERROR: Could not add default route $VPN_GATEWAY_IP!\n"
    exit 2
fi

# Show the new routing table
ip route list table $ROUTING_TABLE_NUMBER

echo "\nVPN is connected!\n"
ifconfig $TUN_DEV

Then create the credentials file /etc/openvpn/nord_vpn_auth.txt, referenced by the --auth-user-pass option in the start script, containing exactly two lines:


Your Username
Your Password

Next, the cleanup script, which you can run manually whenever you wish to disconnect:



. /etc/openvpn/nord_vpn_common_setup.sh

echo "================================================"
echo "       --- Cleaning up after Nord VPN ---"
echo "================================================"

echo "Removing iptables rules..."
iptables -t mangle -D OUTPUT -m owner --uid-owner $USER_NAME -j MARK --set-mark $PACKET_MARKER
iptables -t nat -D POSTROUTING -o $TUN_DEV -j SNAT --to-source $VPN_LOCAL_IP

iptables -t mangle -L OUTPUT

iptables -t nat -L POSTROUTING

echo "\nRemoving custom routing table $ROUTING_TABLE_NUMBER...\n"
ip rule delete fwmark $PACKET_MARKER
ip route flush table $ROUTING_TABLE_NUMBER

ip route
ip rule

NOTE: This script is provided for your convenience when you wish to shut down the VPN manually for some reason. We are purposefully not attaching it to OpenVPN’s down-hook. This provides us with our own “kill switch” functionality. If the VPN goes down for whatever reason, your torrents will stop working, rather than smoothly transitioning to non-encrypted downloads without telling you.

Finally, start the VPN at boot; for example, by appending the following to /etc/rc.local:


echo "Starting NordVPN..."
/etc/openvpn/nord_vpn_start.sh &

exit 0

The above is one of many ways to start your VPN at boot.
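If your distribution uses systemd, an equivalent is a small unit file instead of the snippet above. Again just a sketch: the unit name is an assumption, and it presumes the start script from this guide is in place and executable.

```ini
# /etc/systemd/system/nordvpn-torrents.service (hypothetical unit name)
[Unit]
Description=NordVPN tunnel for torrent traffic
After=network-online.target
Wants=network-online.target

[Service]
# The start script runs openvpn in the foreground, so systemd can supervise it
ExecStart=/etc/openvpn/nord_vpn_start.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with: sudo systemctl enable nordvpn-torrents.service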

And that’s it. Hope it helps.


Everything in this guide is based on a Reddit post; all I did here was provide the actual scripts to make the setup easier.

JPA callbacks with Hibernate’s SessionFactory and no EntityManager

I wanted to use JPA callback annotations, such as @PostLoad and @PostUpdate, but realized that those JPA annotations do not work unless Hibernate is configured to use a JPA EntityManager. My project uses Hibernate’s SessionFactory, so these annotations are not available to me out of the box.

So, how do we configure Hibernate to get the best of both worlds? Here’s how I did it in Hibernate 5. Hibernate 4 can use a very similar approach, but the code would be slightly different (just grab it from org.hibernate.jpa.event.spi.JpaIntegrator).

Luckily, Hibernate’s IntegratorServiceImpl uses the java.util.ServiceLoader API, so we can specify an additional list of org.hibernate.integrator.spi.Integrator implementations we want the SessionFactory to use.

All we need to do is specify a service provider for org.hibernate.integrator.spi.Integrator in META-INF/services/org.hibernate.integrator.spi.Integrator:

# This allows us to use JPA-style annotations on entities, such as @PostLoad
our.custom.JpaAnnotationsIntegrator

You will also need to ensure that the ‘hibernate-entitymanager‘ jar of the appropriate version is on your classpath.

our.custom.JpaAnnotationsIntegrator (taken from org.hibernate.jpa.event.spi.JpaIntegrator):

package our.custom;

import org.hibernate.annotations.common.reflection.ReflectionManager;
import org.hibernate.boot.Metadata;
import org.hibernate.boot.internal.MetadataImpl;
import org.hibernate.engine.spi.SessionFactoryImplementor;
import org.hibernate.event.service.spi.EventListenerRegistry;
import org.hibernate.event.spi.EventType;
import org.hibernate.integrator.spi.Integrator;
import org.hibernate.jpa.event.internal.core.JpaPostDeleteEventListener;
import org.hibernate.jpa.event.internal.core.JpaPostInsertEventListener;
import org.hibernate.jpa.event.internal.core.JpaPostLoadEventListener;
import org.hibernate.jpa.event.internal.core.JpaPostUpdateEventListener;
import org.hibernate.jpa.event.internal.jpa.CallbackBuilderLegacyImpl;
import org.hibernate.jpa.event.internal.jpa.CallbackRegistryImpl;
import org.hibernate.jpa.event.spi.jpa.CallbackBuilder;
import org.hibernate.jpa.event.spi.jpa.ListenerFactory;
import org.hibernate.jpa.event.spi.jpa.ListenerFactoryBuilder;
import org.hibernate.mapping.PersistentClass;
import org.hibernate.service.spi.SessionFactoryServiceRegistry;

/**
 * This integrator allows us to use JPA-style post op annotations on Hibernate entities.
 *
 * This integrator is loaded by <code>org.hibernate.integrator.internal.IntegratorServiceImpl</code> from
 * the <code>META-INF/services/org.hibernate.integrator.spi.Integrator</code> file.
 *
 * <b>Note</b>: This code is lifted directly from <code>org.hibernate.jpa.event.spi.JpaIntegrator</code>
 *
 * @author Val Blant
 */
public class JpaAnnotationsIntegrator implements Integrator {
	private ListenerFactory jpaListenerFactory;
	private CallbackBuilder callbackBuilder;
	private CallbackRegistryImpl callbackRegistry;

	@Override
	public void integrate(Metadata metadata, SessionFactoryImplementor sessionFactory, SessionFactoryServiceRegistry serviceRegistry) {
		final EventListenerRegistry eventListenerRegistry = serviceRegistry.getService( EventListenerRegistry.class );

		this.callbackRegistry = new CallbackRegistryImpl();

		// post op listeners
		eventListenerRegistry.prependListeners( EventType.POST_DELETE, new JpaPostDeleteEventListener(callbackRegistry) );
		eventListenerRegistry.prependListeners( EventType.POST_INSERT, new JpaPostInsertEventListener(callbackRegistry) );
		eventListenerRegistry.prependListeners( EventType.POST_LOAD, new JpaPostLoadEventListener(callbackRegistry) );
		eventListenerRegistry.prependListeners( EventType.POST_UPDATE, new JpaPostUpdateEventListener(callbackRegistry) );

		// handle JPA "entity listener classes"...
		final ReflectionManager reflectionManager = ( (MetadataImpl) metadata ).getMetadataBuildingOptions().getReflectionManager();

		this.jpaListenerFactory = ListenerFactoryBuilder.buildListenerFactory( sessionFactory.getSessionFactoryOptions() );
		this.callbackBuilder = new CallbackBuilderLegacyImpl( jpaListenerFactory, reflectionManager );
		for ( PersistentClass persistentClass : metadata.getEntityBindings() ) {
			if ( persistentClass.getClassName() == null ) {
				// we can have non-Java classes persisted by Hibernate
				continue;
			}
			callbackBuilder.buildCallbacksForEntity( persistentClass.getClassName(), callbackRegistry );
		}
	}

	@Override
	public void disintegrate(SessionFactoryImplementor sessionFactory, SessionFactoryServiceRegistry serviceRegistry) {
		if ( callbackRegistry != null ) {
			callbackRegistry.release();
		}
		if ( callbackBuilder != null ) {
			callbackBuilder.release();
		}
		if ( jpaListenerFactory != null ) {
			jpaListenerFactory.release();
		}
	}
}


How to use JSF libraries without packaging them as JARs during development



JSF spec allows us to place JSF configuration documents, such as faces-config.xml and *taglib.xml either inside WEB-INF/ of our WAR, or in META-INF/ of JARs included in WEB-INF/lib of our WAR. For JSF annotated classes, they can either be in WEB-INF/classes, or in the included JARs.

But what if we want all these things to work properly without having to package all our JSF dependency projects as jars? Naturally, we never want to deploy like that, but during development it would be really nice, b/c then we could actually make changes to any code inside our JSF dependencies with full hot-swap support, without having to package anything, or to restart the application server! Unfortunately, this is not possible with JSF out-of-the-box…

This article describes a technique I used to work around these limitations of JSF, thus gaining the ability to make direct modifications to my JSF libraries without restarting or repackaging, and achieving the state of coding zen :).

This solution was tested with Mojarra JavaServer Faces 2.1.7, and it is intended to work with Eclipse workspaces. There would probably be small differences in the implementation for other configurations, but the general approach should work everywhere.


We have 3 problems to solve:

1) Picking up JSF Annotated Classes from other JSF projects in the workspace

This turned out to be the hardest problem to solve.

Normally JSF annotated classes (such as @FacesComponent, @FacesConverter, @FacesRenderer, etc) must be inside a JAR, or in /WEB-INF/classes/. What we need is to pick up annotated classes from other Eclipse projects we depend on, which means that they need to be loaded from our Web Project’s classpath.

There is no way to extend JSF to do this, b/c everything inside AnnotationScanTask and ProvideMetadataToAnnotationScanTask is hard coded. In order to make the necessary changes, we’ll need some AspectJ magic.

The idea is to use Load Time Weaving to advise the call to JavaClassScanningAnnotationScanner.getAnnotatedClasses() and merge results from our own annotation scan with the results coming from the stock JSF implementation.

This can be achieved with a simple aspect, and some code to scan for annotated classes, which is the first part of our solution. I am using Google Reflections here to do the annotation scan inside the packages where I know my JSF libraries will be. Modify this for your own needs.


/**
 * This is an AspectJ shim used to find more JSF annotated classes during the setup process.
 *
 * Normally, JSF configuration and JSF annotations are only processed on paths inside our own WAR, and from other jars.
 * However, in development mode we are interested in linking to DryDock dependencies as local Eclipse projects, rather than jars.
 * This shim provides a missing extension point, which scans the DryDock project classpath for JSF annotations.
 *
 * The other part of this solution is found in <code>EclipseProjectJsfResourceProvider</code>.
 *
 * Since we are weaving JSF, Load Time Weaving is required, which means that this aspect must be declared in <code>META-INF/aop.xml</code>.
 * Also, Tomcat must be started with:
 * <pre>
 *  -javaagent:/fullpath/aspectjweaver-version.jar -classpath /fullpath/aspectjrt-version.jar
 * </pre>
 *
 * @see EclipseProjectJsfResourceProvider
 * @author Val Blant
 */
public aspect JsfConfigurationShimForEclipseProjectsAspect {

	pointcut sortedFacesDocumentsPointcut() : execution(* ConfigManager.sortDocuments(..));

	after() returning (DocumentInfo[] sortedFacesDocuments): sortedFacesDocumentsPointcut() {
		System.out.println("\n ====== Augmented list of JSF config files detected with JsfConfigurationShimForEclipseProjectsAspect ====== ");
		for ( DocumentInfo doc : sortedFacesDocuments ) {
			System.out.println(doc.getSourceURI());
		}
	}

	pointcut getAnnotatedClassesPointcut(Set<URI> urls) : execution(* JavaClassScanningAnnotationScanner.getAnnotatedClasses(Set<URI>)) && args(urls);

	Map<Class<? extends Annotation>, Set<Class<?>>> around(Set<URI> urls): getAnnotatedClassesPointcut(urls)  {
		Map<Class<? extends Annotation>, Set<Class<?>>> oldMap = proceed(urls);
		Map<Class<? extends Annotation>, Set<Class<?>>> newMap = EclipseJsfDryDockProjectAnnotationScanner.getAnnotatedClasses();
		Map<Class<? extends Annotation>, Set<Class<?>>> mergedMap = new AnnotatedJsfClassMerger().merge(oldMap, newMap);

		return mergedMap;
	}
}



/**
 * Scans DryDock project classpath to find any JSF annotated classes. This scanner is activated by
 * the <code>JsfConfigurationShimForEclipseProjectsAspect</code>, which requires Load Time Weaving.
 *
 * This class should only be used in development! It is part of a solution that allows us to run the app
 * against locally imported DryDocked projects.
 *
 * @see JsfConfigurationShimForEclipseProjectsAspect
 * @see EclipseProjectJsfResourceProvider
 * @author Val Blant
 */
public class EclipseJsfDryDockProjectAnnotationScanner extends AnnotationScanner {
	private static final Log log = LogFactory.getLog(EclipseJsfDryDockProjectAnnotationScanner.class);

	private static Reflections reflections = new Reflections(
			new ConfigurationBuilder()
				.setUrls( ClasspathHelper.forPackage("your.jsf.library.package") ) ); // Modify this for your own needs

	public EclipseJsfDryDockProjectAnnotationScanner(ServletContext sc) {
		super(sc);
	}

	public static Map<Class<? extends Annotation>, Set<Class<?>>> getAnnotatedClasses() {
		Map<Class<? extends Annotation>, Set<Class<?>>> annotatedClassMap = new HashMap<>();
		for ( Class<? extends Annotation> annotation : FACES_ANNOTATION_TYPE ) {
			Set<Class<?>> annotatedClasses = reflections.getTypesAnnotatedWith(annotation);
			if ( !annotatedClasses.isEmpty() ) {
				Set<Class<?>> classes = annotatedClassMap.get(annotation);
				if ( classes == null ) {
					classes = new HashSet<Class<?>>();
					annotatedClassMap.put(annotation, classes);
				}
				classes.addAll(annotatedClasses);
			}
		}
		log.info(" ====== Found additional JSF annotated classes from Eclipse classpath ====== \n" + annotatedClassMap);
		return annotatedClassMap;
	}

	@Override
	public Map<Class<? extends Annotation>, Set<Class<?>>> getAnnotatedClasses(Set<URI> urls) {
		return getAnnotatedClasses();
	}
}



/**
 * Merges 2 maps of JSF annotated classes into one map.
 *
 * This class should only be used in development! It is part of a solution that allows us to run the app
 * against locally imported DryDocked projects.
 *
 * @see JsfConfigurationShimForEclipseProjectsAspect
 * @see EclipseProjectJsfResourceProvider
 * @author Val Blant
 */
public class AnnotatedJsfClassMerger {

	public Map<Class<? extends Annotation>, Set<Class<?>>> merge(
				Map<Class<? extends Annotation>, Set<Class<?>>> oldMap,
				Map<Class<? extends Annotation>, Set<Class<?>>> newMap) {
		// Collect all annotation keys present in either map
		Set<Class<? extends Annotation>> annotations = new HashSet<>();
		annotations.addAll(oldMap.keySet());
		annotations.addAll(newMap.keySet());

		Map<Class<? extends Annotation>, Set<Class<?>>> mergedMap = new HashMap<>();
		for ( Class<? extends Annotation> annotation : annotations ) {
			Set<Class<?>> classes = new HashSet<>();
			Set<Class<?>> oldClasses = oldMap.get(annotation);
			Set<Class<?>> newClasses = newMap.get(annotation);
			if ( oldClasses != null ) classes.addAll(oldClasses);
			if ( newClasses != null ) classes.addAll(newClasses);
			mergedMap.put(annotation, classes);
		}
		return mergedMap;
	}
}

Next, we need to properly set up the Load Time Weaver.

First we create src/main/resources/META-INF/aop.xml in our Web Project.


<!--
	This file is read by the AspectJ weaver java agent. Make sure you specify the following on the
	server startup command line:
		-javaagent:/fullpath/AgriShare/aspectjweaver-version.jar -classpath /fullpath/AgriShare/aspectjrt-version.jar
	Also, make sure that you actually compile the aspects specified below.
	Eclipse can't do it! You'll have to use Gradle for that.
-->
<aspectj>
	<aspects>
		<aspect name="ca.gc.pinss.web.jsf.drydock.eclipse.JsfConfigurationShimForEclipseProjectsAspect"/>
	</aspects>
	<weaver options="-verbose -showWeaveInfo -XnoInline">
		<include within="com.sun.faces.config.*"/>
	</weaver>
</aspectj>

Now we need to make sure that we start our application with the AspectJ weaver.

  • Append the following to your Application Server’s startup JVM parameters:

-javaagent:/fullpath/aspectjweaver-version.jar

Note: Use the correct path for your machine!

  • Make sure that this jar is first on your Application Server’s classpath:

/fullpath/aspectjrt-version.jar

Note: Use the correct path for your machine!

And that’s it – now your annotated JSF classes will be picked up directly from project classpath!

To make sure that it is working, look for messages from EclipseJsfDryDockProjectAnnotationScanner in the log. It will have the following heading:

 ====== Found additional JSF annotated classes from Eclipse classpath ======

You should also see some messages from the AspectJ weaver:

[WebappClassLoader@6426a58b] weaveinfo Join point 'method-execution(
com.sun.faces.config.DocumentInfo[] com.sun.faces.config.ConfigManager.sortDocuments(com.sun.faces.config.DocumentInfo[], com.sun.faces.config.FacesConfigInfo))'
in Type 'com.sun.faces.config.ConfigManager' (ConfigManager.java:503) 
advised by afterReturning advice from 'ca.gc.pinss.web.jsf.drydock.eclipse.JsfConfigurationShimForEclipseProjectsAspect' (JsfConfigurationShimForEclipseProjectsAspect.aj:36)
[WebappClassLoader@6426a58b] weaveinfo Join point 'method-execution(
java.util.Map com.sun.faces.config.JavaClassScanningAnnotationScanner.getAnnotatedClasses(java.util.Set))' 
 in Type 'com.sun.faces.config.JavaClassScanningAnnotationScanner' (JavaClassScanningAnnotationScanner.java:121) 
 advised by around advice from 
'ca.gc.pinss.web.jsf.drydock.eclipse.JsfConfigurationShimForEclipseProjectsAspect' (JsfConfigurationShimForEclipseProjectsAspect.aj:45)

2) Picking up Taglibs from other JSF Projects in the Workspace

This one is easy in comparison.

All we need to do here is to specify an additional custom FacesConfigResourceProvider.


/**
 * This custom resource provider is used for finding JSF Resources located in other Eclipse Projects, rather
 * than jars. JSF spec does not support this, but it is very useful for running DryDocked projects inside the local Eclipse workspace.
 *
 * In order to enable this resource provider, this class's name must be specified in
 * <code>META-INF/services/com.sun.faces.spi.FacesConfigResourceProvider</code>
 *
 * <b>NOTE:</b> The Gradle build will not include the com.sun.faces.spi.FacesConfigResourceProvider file, b/c we never want this
 * customization to be deployed - it's for development only.
 *
 * @see JsfConfigurationShimForEclipseProjectsAspect
 * @author Val Blant
 */
public class EclipseProjectJsfResourceProvider implements FacesConfigResourceProvider {
	private static final Log log = LogFactory.getLog(EclipseProjectJsfResourceProvider.class);

	@Override
	public Collection<URI> getResources(ServletContext context) {
		List<URI> unsortedResourceList = new ArrayList<URI>();

		try {
			for (URI uri : loadURLs(context)) {
				// Anything that comes from a jar is already handled by the stock providers
				if ( !uri.toString().contains(".jar!/") ) {
					unsortedResourceList.add(0, uri);
				}
			}
		} catch (IOException e) {
			throw new FacesException(e);
		}

		// Then load the unsorted resources
		List<URI> result = new ArrayList<>();
		result.addAll(unsortedResourceList);

		log.info(" ====== Found additional JSF configuration resources on Eclipse classpath ====== \n" + result);

		return result;
	}

	private Collection<URI> loadURLs(ServletContext context) throws IOException {
		Set<URI> urls = new HashSet<URI>();
		try {
// Turns out these are already grabbed by MetaInfFacesConfigResourceProvider, so we don't need to do it again
//			for (Enumeration<URL> e = Util.getCurrentLoader(this).getResources("META-INF/faces-config.xml"); e.hasMoreElements();) {
//				urls.add(new URI(e.nextElement().toExternalForm()));
//			}
			URL[] urlArray = Classpath.search("META-INF/", ".taglib.xml");
			for (URL cur : urlArray) {
				urls.add(new URI(cur.toExternalForm()));
			}
		} catch (URISyntaxException ex) {
			throw new IOException(ex);
		}
		return urls;
	}
}


To register this provider, we add the following file into our Web Project at META-INF/services/com.sun.faces.spi.FacesConfigResourceProvider:

your.package.EclipseProjectJsfResourceProvider

Note: Use the correct package name for your project!

3) Picking up Facelet includes and resources from OTHER JSF PROJECTS IN THE WORKSPACE

This one is also easy.

We create a custom Facelets ResourceResolver.


/**
 * This is a special Facelets ResourceResolver, which allows us to ui:include resources from
 * the classpath, rather than from jars. This is necessary for the Incubator to see stuff
 * in other projects under META-INF/resources/
 *
 * @author Val Blant
 */
public class ClasspathResourceResolver extends DefaultResourceResolver {

	/**
	 * First check the context root, then the classpath
	 */
	@Override
	public URL resolveUrl(String path) {
		URL url = super.resolveUrl(path);
		if (url == null) {
			/* classpath resources don't start with /, so this must be a jar include. Convert it to classpath include. */
			if (path.startsWith("/")) {
				path = "META-INF/resources" + path;
			}
			url = Thread.currentThread().getContextClassLoader().getResource(path);
		}
		return url;
	}
}

Now we register it in our web.xml:

	<!-- This allows us to "ui:include" resources from the classpath, rather than from jars, which is important for working with DryDocked projects directly from our Eclipse workspace -->
	<context-param>
		<param-name>javax.faces.FACELETS_RESOURCE_RESOLVER</param-name>
		<param-value>your.package.ClasspathResourceResolver</param-value>
	</context-param>

Note: Use the correct package name for your project!

And that’s it! We now have everything we need to load all JSF resources from Eclipse projects instead of JARs.

Eclipse Project Setup

All that remains is to reconfigure the Eclipse workspace to start using our new capabilities.

  1. Import your JSF library projects and all their dependencies into your Eclipse workspace together with the Web Application you are working on.
  2. Go to all projects that have dependencies on common component jars, delete the jar dependencies, and replace them with project dependencies that are now in your workspace.
  3. Get rid of any test related project exports from the library projects that might interfere with the running of the app. This may not be necessary depending on your configuration.
  4. Configure your Application Server classpath to use the Eclipse Projects instead of JARs.
  5. Configure your build scripts to turn off these modifications, so they don’t get deployed anywhere past your development machine. This is as simple as not including META-INF/services/com.sun.faces.spi.FacesConfigResourceProvider and META-INF/aop.xml in your WAR.

And that’s it.

How to Save HDS Flash Streams from any web page



I came across a Flash video that I was not able to save with any Video Downloader app, including the ones that actually sniff traffic on your network adapter, such as Replay Media Catcher and many others.

Turns out that this particular page was using the new Adobe HTTP Dynamic Streaming (HDS) technology. With HDS, the original MP4 or FLV file is split up into many F4F segments, which are then served to the media player on the page one after the other, so there is no single video file to download like with most other video streaming technologies.

You can easily check if HDS is being used by using Firefox to watch the video.

  1. Clear Firefox cache (Tools -> Options -> Network, Clear Cached Web Content, Clear User Data)
  2. Load the page with the video
  3. Open a new tab and browse to about:cache?storage=disk
  4. Search for a bunch of files that have the word ‘Frag’ in them. Fragment entries typically end in something like Seg1-Frag1, Seg1-Frag2, and so on.

These are all the F4F fragments of the video. You could download them all and combine them together, but that’s not the best way to do this.

There is a script called AdobeHDS.php, which can automate the download process for you if you provide it with the F4M Manifest for the stream. You can download the script from https://github.com/K-S-V/Scripts

This manifest file is easy to obtain, b/c it is delivered via a plain GET request that is issued before the video starts playing. To find the URL:

  1. Open Firefox Console (Ctrl+Shift+K) or Tools -> Web Developer -> Web Console
  2. Make sure that “Net” filter is selected
  3. Clear the Console
  4. Open the video page and let the video load
  5. In the Filter text box type “f4m” and you should now see a few F4M requests. You want the first one, which will probably be called “manifest.f4m“. Mine looked like this:
GET http://capi.9c9media.com/destinations/ctvnews_web/platforms/desktop/contents/540901/contentpackages/546418/stacks/1130329/manifest.f4m

Now just run the script with the manifest URL and you should get the re-combined flv file:

$ php AdobeHDS.php --delete --manifest "http://capi.9c9media.com/destinations/ctvnews_web/platforms/desktop/contents/540901/contentpackages/546418/stacks/1130329/manifest.f4m"
 KSV Adobe HDS Downloader

Processing manifest info.... 
Quality Selection: 
 Available: 2048 1856 1536 1280 896 640 480 299
 Selected : 2048 
Fragments Total: 55, First: 1, Start: 1, Parallel: 8 
Downloading 55/55 fragments 
Found 55 fragments 

You should now have an FLV file waiting for you in the script directory.

For Mac Users

Posting some info from a comment by Eric L. Pheterson below:

To add a few more baby steps (for Mac users) :

  • When you view the AdobeHDS.php file at Sourceforge, copy/paste it into a file, and name it AdobeHDS.php
  • PHP should be installed already on your Mac
  • A dependency of AdobeHDS is not installed, so in Terminal run :
brew install homebrew/php/php55-mcrypt
  • After installing mcrypt, you must open a new terminal window or tab to use it
  • If you don’t have brew installed, in Terminal run :
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
  • After installing brew, run
brew update