view mxtool/ @ 8439:39c7142e7aef

added support for projects that extend a package defined in another project when canonicalizing projects
author Doug Simon <>
date Fri, 22 Mar 2013 11:48:42 +0100
parents d1d486c03e8a
children b6b9ab1fde62
# ----------------------------------------------------------------------------------------------------
# Copyright (c) 2007, 2012, Oracle and/or its affiliates. All rights reserved.
# This code is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License version 2 only, as
# published by the Free Software Foundation.
# This code is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
# version 2 for more details (a copy is included in the LICENSE file that
# accompanied this code).
# You should have received a copy of the GNU General Public License version
# 2 along with this work; if not, write to the Free Software Foundation,
# Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
# Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
# or visit if you need additional information or have any
# questions.
# ----------------------------------------------------------------------------------------------------

mx is a command line tool inspired by mvn and hg. It includes a mechanism
for managing the dependencies between a set of projects (like Maven)
as well as making it simple to run commands
(like hg is the interface to the Mercurial commands).

The organizing principle of mx is a project suite. A suite is a directory
containing one or more projects. It's not coincidental that this closely
matches the layout of one or more projects in a Mercurial repository.
The configuration information for a suite lives in an 'mx' sub-directory
at the top level of the suite.

When launched, mx treats the current working directory as a suite.
This is the primary suite. All other suites are called included suites.

The configuration files (i.e. in the 'mx' sub-directory) of a suite are:

  projects
      Defines the projects and libraries in the suite and the
      dependencies between them.

      Suite specific extensions to the commands available to mx.

  includes
      Other suites to be loaded. This is recursive. Each
      line in an includes file is a path to a suite directory.

  env
      A set of environment variable definitions. These override any
      existing environment variables. Common properties set here
      include JAVA_HOME and IGNORED_PROJECTS.

The includes and env files are typically not put under version control
as they usually contain local file-system paths.

The projects file is like the pom.xml file from Maven except that
it is a properties file (not XML). Each non-comment line
in the file specifies an attribute of a project or library. The main
difference between a project and a library is that the former contains
source code built by the mx tool whereas the latter is an external
dependency. The format of the projects file is:

Library specification format:

    library@<name>@<prop>=<value>

Built-in library properties (* = required):

   *path
        The file system path for the library to appear on a class path.

    urls
        A comma separated list of URLs from which the library (jar) can
        be downloaded and saved in the location specified by 'path'.

    optional
        If "true" then this library will be omitted from a class path
        if it doesn't exist on the file system and no URLs are specified.

    sourcePath
        The file system path for a jar file containing the library sources.

    sourceUrls
        A comma separated list of URLs from which the library source jar can
        be downloaded and saved in the location specified by 'sourcePath'.

Project specification format:

    project@<name>@<prop>=<value>

The name of a project also denotes the directory it is in.

Built-in project properties (* = required):

    subDir
        The sub-directory of the suite in which the project directory is
        contained. If not specified, the project directory is directly
        under the suite directory.

   *sourceDirs
        A comma separated list of source directory names (relative to
        the project directory).

   *dependencies
        A comma separated list of the libraries and projects the project
        depends upon (transitive dependencies should be omitted).

    checkstyle
        The project whose Checkstyle configuration
        (i.e. <project>/.checkstyle_checks.xml) is used.

    native
        "true" if the project is native.

    javaCompliance
        The minimum JDK version (format: x.y) to which the project's
        sources comply (required for non-native projects).

Other properties can be specified for projects and libraries for use
by extension commands.

Property values can use environment variables with Bash syntax (e.g. ${HOME}).

import sys, os, errno, time, subprocess, shlex, types, urllib2, contextlib, StringIO, zipfile, signal, xml.sax.saxutils, tempfile
import shutil, fnmatch, re, xml.dom.minidom
from collections import Callable
from threading import Thread
from argparse import ArgumentParser, REMAINDER
from os.path import join, basename, dirname, exists, getmtime, isabs, expandvars, isdir, isfile

DEFAULT_JAVA_ARGS = '-ea -Xss2m -Xmx1g'

_projects = dict()
_libs = dict()
_dists = dict()
_suites = dict()
_mainSuite = None
_opts = None
_java = None

"""
A distribution is a jar or zip file containing the output from one or more Java projects.
"""
class Distribution:
    def __init__(self, suite, name, path, deps):
        self.suite = suite = name
        self.path = path.replace('/', os.sep)
        if not isabs(self.path):
            self.path = join(suite.dir, self.path)
        self.deps = deps
        self.update_listeners = set()

    def __str__(self):

    def add_update_listener(self, listener):
        self.update_listeners.add(listener)

    def notify_updated(self):
        for l in self.update_listeners:
            l(self)
"""
A dependency is a library or project specified in a suite.
"""
class Dependency:
    def __init__(self, suite, name): = name
        self.suite = suite

    def __str__(self):

    def __eq__(self, other):
        return ==

    def __ne__(self, other):
        return !=

    def __hash__(self):
        return hash(

    def isLibrary(self):
        return isinstance(self, Library)

class Project(Dependency):
    def __init__(self, suite, name, srcDirs, deps, javaCompliance, d):
        Dependency.__init__(self, suite, name)
        self.srcDirs = srcDirs
        self.deps = deps
        self.checkstyleProj = name
        self.javaCompliance = JavaCompliance(javaCompliance) if javaCompliance is not None else None
        self.native = False
        self.dir = d
        # Create directories for projects that don't yet exist
        if not exists(d):
            os.makedirs(d)
        for s in self.source_dirs():
            if not exists(s):
                os.makedirs(s)

    def all_deps(self, deps, includeLibs, includeSelf=True, includeAnnotationProcessors=False):
        """
        Add the transitive set of dependencies for this project, including
        libraries if 'includeLibs' is true, to the 'deps' list.
        """
        childDeps = list(self.deps)
        if includeAnnotationProcessors and hasattr(self, 'annotationProcessors') and len(self.annotationProcessors) > 0:
            childDeps = self.annotationProcessors + childDeps
        if self in deps:
            return deps
        for name in childDeps:
            assert name !=
            dep = _libs.get(name, None)
            if dep is not None:
                if includeLibs and not dep in deps:
                    deps.append(dep)
            else:
                dep = _projects.get(name, None)
                if dep is None:
                    if name in _opts.ignored_projects:
                        abort('project named ' + name + ' required by ' + + ' is ignored')
                    else:
                        abort('dependency named ' + name + ' required by ' + + ' is not found')
                if not dep in deps:
                    dep.all_deps(deps, includeLibs=includeLibs, includeAnnotationProcessors=includeAnnotationProcessors)
        if not self in deps and includeSelf:
            deps.append(self)
        return deps

    def _compute_max_dep_distances(self, name, distances, dist):
        currentDist = distances.get(name)
        if currentDist is None or currentDist < dist:
            distances[name] = dist
            p = project(name, False)
            if p is not None:
                for dep in p.deps:
                    self._compute_max_dep_distances(dep, distances, dist + 1)

    def canonical_deps(self):
        """
        Get the dependencies of this project that are not recursive (i.e. cannot be reached
        via other dependencies).
        """
        distances = dict()
        result = set()
        self._compute_max_dep_distances(, distances, 0)
        for n, d in distances.iteritems():
            assert d > 0 or n ==
            if d == 1:
                result.add(n)

        if len(result) == len(self.deps) and frozenset(self.deps) == result:
            return self.deps
        return result

    def max_depth(self):
        """
        Get the maximum canonical distance between this project and its most distant dependency.
        """
        distances = dict()
        self._compute_max_dep_distances(, distances, 0)
        return max(distances.values())

    def source_dirs(self):
        """
        Get the directories in which the sources of this project are found.
        """
        return [join(self.dir, s) for s in self.srcDirs]

    def source_gen_dir(self):
        """
        Get the directory in which source files generated by the annotation processor are found/placed.
        """
        if self.native:
            return None
        return join(self.dir, 'src_gen')

    def output_dir(self):
        """
        Get the directory in which the class files of this project are found/placed.
        """
        if self.native:
            return None
        return join(self.dir, 'bin')

    def jasmin_output_dir(self):
        """
        Get the directory in which the Jasmin assembled class files of this project are found/placed.
        """
        if self.native:
            return None
        return join(self.dir, 'jasmin_classes')

    def append_to_classpath(self, cp, resolve):
        if not self.native:
            cp.append(self.output_dir())

    def find_classes_with_matching_source_line(self, pkgRoot, function, includeInnerClasses=False):
        """
        Scan the sources of this project for Java source files containing a line for which
        'function' returns true. The fully qualified class name of each existing class
        corresponding to a matched source file is returned in a list.
        """
        classes = []
        pkgDecl = re.compile(r"^package\s+([a-zA-Z_][\w\.]*)\s*;$")
        for srcDir in self.source_dirs():
            outputDir = self.output_dir()
            for root, _, files in os.walk(srcDir):
                for name in files:
                    if name.endswith('.java') and name != '':
                        matchFound = False
                        with open(join(root, name)) as f:
                            pkg = None
                            for line in f:
                                if line.startswith("package "):
                                    match = pkgDecl.match(line)
                                    if match:
                                        pkg =
                                if function(line.strip()):
                                    matchFound = True
                                if pkg and matchFound:
                                    break
                        if matchFound:
                            basename = name[:-len('.java')]
                            assert pkg is not None
                            if pkgRoot is None or pkg.startswith(pkgRoot):
                                pkgOutputDir = join(outputDir, pkg.replace('.', os.path.sep))
                                for e in os.listdir(pkgOutputDir):
                                    if includeInnerClasses:
                                        if e.endswith('.class') and (e.startswith(basename) or e.startswith(basename + '$')):
                                            classes.append(pkg + '.' + e[:-len('.class')])
                                    elif e == basename + '.class':
                                        classes.append(pkg + '.' + basename)
        return classes

    def _init_packages_and_imports(self):
        if not hasattr(self, '_defined_java_packages'):
            packages = set()
            extendedPackages = set()
            depPackages = set()
            for d in self.all_deps([], includeLibs=False, includeSelf=False):
                depPackages.update(d.defined_java_packages())
            imports = set()
            importRe = re.compile(r'import\s+(?:static\s+)?([^;]+);')
            for sourceDir in self.source_dirs():
                for root, _, files in os.walk(sourceDir):
                    javaSources = [name for name in files if name.endswith('.java')]
                    if len(javaSources) != 0:
                        pkg = root[len(sourceDir) + 1:].replace(os.sep, '.')
                        if not pkg in depPackages:
                            packages.add(pkg)
                        else:
                            # A project extends a package already defined by one of its dependencies
                            extendedPackages.add(pkg)
                        for n in javaSources:
                            with open(join(root, n)) as fp:
                                content =
                                imports.update(importRe.findall(content))
            self._defined_java_packages = frozenset(packages)
            self._extended_java_packages = frozenset(extendedPackages)

            importedPackages = set()
            for imp in imports:
                name = imp
                while not name in depPackages and len(name) > 0:
                    lastDot = name.rfind('.')
                    if lastDot == -1:
                        name = None
                        break
                    name = name[0:lastDot]
                if name is not None:
                    importedPackages.add(name)
            self._imported_java_packages = frozenset(importedPackages)
    def defined_java_packages(self):
        """Get the immutable set of Java packages defined by the Java sources of this project"""
        return self._defined_java_packages
    def extended_java_packages(self):
        """Get the immutable set of Java packages extended by the Java sources of this project"""
        return self._extended_java_packages

    def imported_java_packages(self):
        """Get the immutable set of Java packages defined by other Java projects that are
           imported by the Java sources of this project."""
        return self._imported_java_packages
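The distance computation behind canonical_deps can be sketched standalone (graph and names invented for illustration): a dependency is canonical only if its maximum distance from the project is exactly 1.

```python
def max_dep_distances(graph, root):
    # Compute, for every reachable node, the maximum distance from `root`
    # along dependency edges (mirrors _compute_max_dep_distances).
    distances = {}
    def walk(name, dist):
        current = distances.get(name)
        if current is None or current < dist:
            distances[name] = dist
            for dep in graph.get(name, []):
                walk(dep, dist + 1)
    walk(root, 0)
    return distances

# 'app' names 'util' directly, but 'util' is also reachable via 'core',
# so its maximum distance is 2 and it drops out of the canonical set.
graph = {'app': ['core', 'util'], 'core': ['util'], 'util': []}
distances = max_dep_distances(graph, 'app')
canonical = {n for n, d in distances.items() if d == 1}
print(canonical)
```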

class Library(Dependency):
    def __init__(self, suite, name, path, mustExist, urls, sourcePath, sourceUrls):
        Dependency.__init__(self, suite, name)
        self.path = path.replace('/', os.sep)
        self.urls = urls
        self.mustExist = mustExist
        self.sourcePath = sourcePath
        self.sourceUrls = sourceUrls
        for url in urls:
            if url.endswith('/') != self.path.endswith(os.sep):
                abort('Path for dependency directory must have a URL ending with "/": path=' + self.path + ' url=' + url)

    def get_path(self, resolve):
        path = self.path
        if not isabs(path):
            path = join(self.suite.dir, path)
        if resolve and self.mustExist and not exists(path):
            assert len(self.urls) != 0, 'cannot find required library ' + + ' ' + path
            print('Downloading ' + + ' from ' + str(self.urls))
            download(path, self.urls)
        return path

    def get_source_path(self, resolve):
        path = self.sourcePath
        if path is None:
            return None
        if not isabs(path):
            path = join(self.suite.dir, path)
        if resolve and len(self.sourceUrls) != 0 and not exists(path):
            print('Downloading sources for ' + + ' from ' + str(self.sourceUrls))
            download(path, self.sourceUrls)
        return path

    def append_to_classpath(self, cp, resolve):
        path = self.get_path(resolve)
        if exists(path) or not resolve:
            cp.append(path)

class Suite:
    def __init__(self, d, primary):
        self.dir = d
        self.projects = []
        self.libs = []
        self.dists = []
        self.includes = []
        self.commands = None
        self.primary = primary
        mxDir = join(d, 'mx')
        self._load_env(mxDir)
        self._load_commands(mxDir)
        self._load_includes(mxDir)

    def _load_projects(self, mxDir):
        libsMap = dict()
        projsMap = dict()
        distsMap = dict()
        projectsFile = join(mxDir, 'projects')
        if not exists(projectsFile):
            return
        with open(projectsFile) as f:
            for line in f:
                line = line.strip()
                if len(line) != 0 and line[0] != '#':
                    key, value = line.split('=', 1)

                    parts = key.split('@')

                    if len(parts) != 3:
                        abort('Property name does not have 3 parts separated by "@": ' + key)
                    kind, name, attr = parts
                    if kind == 'project':
                        m = projsMap
                    elif kind == 'library':
                        m = libsMap
                    elif kind == 'distribution':
                        m = distsMap
                    else:
                        abort('Property name does not start with "project@", "library@" or "distribution@": ' + key)

                    attrs = m.get(name)
                    if attrs is None:
                        attrs = dict()
                        m[name] = attrs
                    value = expandvars_in_property(value)
                    attrs[attr] = value

        def pop_list(attrs, name):
            v = attrs.pop(name, None)
            if v is None or len(v.strip()) == 0:
                return []
            return [n.strip() for n in v.split(',')]

        for name, attrs in projsMap.iteritems():
            srcDirs = pop_list(attrs, 'sourceDirs')
            deps = pop_list(attrs, 'dependencies')
            ap = pop_list(attrs, 'annotationProcessors')
            #deps += ap
            javaCompliance = attrs.pop('javaCompliance', None)
            subDir = attrs.pop('subDir', None)
            if subDir is None:
                d = join(self.dir, name)
            else:
                d = join(self.dir, subDir, name)
            p = Project(self, name, srcDirs, deps, javaCompliance, d)
            p.checkstyleProj = attrs.pop('checkstyle', name)
            p.native = attrs.pop('native', '') == 'true'
            if not p.native and p.javaCompliance is None:
                abort('javaCompliance property required for non-native project ' + name)
            if len(ap) > 0:
                p.annotationProcessors = ap
            p.__dict__.update(attrs)
            self.projects.append(p)

        for name, attrs in libsMap.iteritems():
            path = attrs.pop('path')
            mustExist = attrs.pop('optional', 'false') != 'true'
            urls = pop_list(attrs, 'urls')
            sourcePath = attrs.pop('sourcePath', None)
            sourceUrls = pop_list(attrs, 'sourceUrls')
            l = Library(self, name, path, mustExist, urls, sourcePath, sourceUrls)
            l.__dict__.update(attrs)
            self.libs.append(l)

        for name, attrs in distsMap.iteritems():
            path = attrs.pop('path')
            deps = pop_list(attrs, 'dependencies')
            d = Distribution(self, name, path, deps)
            d.__dict__.update(attrs)
            self.dists.append(d)

    def _load_commands(self, mxDir):
        commands = join(mxDir, '')
        if exists(commands):
            # temporarily extend the Python path
            sys.path.insert(0, mxDir)
            mod = __import__('commands')
            sys.modules[join(mxDir, 'commands')] = sys.modules.pop('commands')

            # revert the Python path
            del sys.path[0]

            if not hasattr(mod, 'mx_init'):
                abort(commands + ' must define an mx_init(env) function')
            if hasattr(mod, 'mx_post_parse_cmd_line'):
                self.mx_post_parse_cmd_line = mod.mx_post_parse_cmd_line

            self.commands = mod

    def _load_includes(self, mxDir):
        includes = join(mxDir, 'includes')
        if exists(includes):
            with open(includes) as f:
                for line in f:
                    include = expandvars_in_property(line.strip())
                    _loadSuite(include, False)

    def _load_env(self, mxDir):
        e = join(mxDir, 'env')
        if exists(e):
            with open(e) as f:
                for line in f:
                    line = line.strip()
                    if len(line) != 0 and line[0] != '#':
                        key, value = line.split('=', 1)
                        os.environ[key.strip()] = expandvars_in_property(value.strip())

    def _post_init(self, opts):
        mxDir = join(self.dir, 'mx')
        self._load_projects(mxDir)
        for p in self.projects:
            existing = _projects.get(
            if existing is not None:
                abort('cannot override project ' + + ' in ' + p.dir + ' with project of the same name in ' + existing.dir)
            if not in _opts.ignored_projects:
                _projects[] = p
        for l in self.libs:
            existing = _libs.get(
            if existing is not None:
                abort('cannot redefine library ' +
            _libs[] = l
        for d in self.dists:
            existing = _dists.get(
            if existing is not None:
                abort('cannot redefine distribution ' +
            _dists[] = d
        if hasattr(self, 'mx_post_parse_cmd_line'):
            self.mx_post_parse_cmd_line(opts)

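The env-file handling in _load_env can be sketched in isolation (variable names invented; expandvars_in_property is assumed to behave like os.path.expandvars for ${VAR} references):

```python
# Sketch of the env-file handling: each non-comment line is KEY=VALUE,
# values may use ${VAR} syntax, and the result overrides os.environ.
import os

os.environ['MX_DEMO_HOME'] = '/opt/demo'
envFile = "# local settings\nEXTRA_PATH=${MX_DEMO_HOME}/lib\n"
for line in envFile.splitlines():
    line = line.strip()
    if len(line) != 0 and line[0] != '#':
        key, value = line.split('=', 1)
        os.environ[key.strip()] = os.path.expandvars(value.strip())
print(os.environ['EXTRA_PATH'])
```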
class XMLElement(xml.dom.minidom.Element):
    def writexml(self, writer, indent="", addindent="", newl=""):
        writer.write(indent + "<" + self.tagName)

        attrs = self._get_attributes()
        a_names = attrs.keys()

        for a_name in a_names:
            writer.write(" %s=\"" % a_name)
            xml.dom.minidom._write_data(writer, attrs[a_name].value)
            writer.write("\"")
        if self.childNodes:
            if not self.ownerDocument.padTextNodeWithoutSiblings and len(self.childNodes) == 1 and isinstance(self.childNodes[0], xml.dom.minidom.Text):
                # if the only child of an Element node is a Text node, then the
                # text is printed without any indentation or new line padding
                writer.write(">")
                for node in self.childNodes:
                    node.writexml(writer)
                writer.write("</%s>%s" % (self.tagName, newl))
            else:
                writer.write(">%s" % newl)
                for node in self.childNodes:
                    node.writexml(writer, indent + addindent, addindent, newl)
                writer.write("%s</%s>%s" % (indent, self.tagName, newl))
        else:
            writer.write("/>%s" % newl)

class XMLDoc(xml.dom.minidom.Document):

    def __init__(self):
        xml.dom.minidom.Document.__init__(self)
        self.current = self
        self.padTextNodeWithoutSiblings = False

    def createElement(self, tagName):
        # overwritten to create XMLElement
        e = XMLElement(tagName)
        e.ownerDocument = self
        return e

    def comment(self, txt):
        self.current.appendChild(self.createComment(txt))

    def open(self, tag, attributes={}, data=None):
        element = self.createElement(tag)
        for key, value in attributes.items():
            element.setAttribute(key, value)
        self.current.appendChild(element)
        self.current = element
        if data is not None:
            element.appendChild(self.createTextNode(data))
        return self

    def close(self, tag):
        assert self.current != self
        assert tag == self.current.tagName, str(tag) + ' != ' + self.current.tagName
        self.current = self.current.parentNode
        return self

    def element(self, tag, attributes={}, data=None):
        return, attributes, data).close(tag)

    def xml(self, indent='', newl='', escape=False):
        assert self.current == self
        result = self.toprettyxml(indent, newl, encoding="UTF-8")
        if escape:
            entities = { '"':  "&quot;", "'":  "&apos;", '\n': '&#10;' }
            result = xml.sax.saxutils.escape(result, entities)
        return result
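XMLDoc's open/close/element helpers build on the stock xml.dom.minidom API; the underlying document construction they wrap looks like this (element and attribute names invented):

```python
# Build a tiny document with plain minidom: create elements, attach
# attributes, append children, then serialize.
import xml.dom.minidom

doc = xml.dom.minidom.Document()
root = doc.createElement('classpath')
doc.appendChild(root)
entry = doc.createElement('classpathentry')
entry.setAttribute('kind', 'src')
root.appendChild(entry)
print(doc.toxml())
```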

def get_os():
    """
    Get a canonical form of sys.platform.
    """
    if sys.platform.startswith('darwin'):
        return 'darwin'
    elif sys.platform.startswith('linux'):
        return 'linux'
    elif sys.platform.startswith('sunos'):
        return 'solaris'
    elif sys.platform.startswith('win32') or sys.platform.startswith('cygwin'):
        return 'windows'
    else:
        abort('Unknown operating system ' + sys.platform)

def _loadSuite(d, primary=False):
    mxDir = join(d, 'mx')
    if not exists(mxDir) or not isdir(mxDir):
        return None
    if not _suites.has_key(d):
        suite = Suite(d, primary)
        _suites[d] = suite
        return suite

def suites():
    """
    Get the list of all loaded suites.
    """
    return _suites.values()

def projects():
    """
    Get the list of all loaded projects.
    """
    return _projects.values()

def distribution(name, fatalIfMissing=True):
    """
    Get the distribution for a given name. This will abort if the named distribution does
    not exist and 'fatalIfMissing' is true.
    """
    d = _dists.get(name)
    if d is None and fatalIfMissing:
        abort('distribution named ' + name + ' not found')
    return d

def project(name, fatalIfMissing=True):
    """
    Get the project for a given name. This will abort if the named project does
    not exist and 'fatalIfMissing' is true.
    """
    p = _projects.get(name)
    if p is None and fatalIfMissing:
        if name in _opts.ignored_projects:
            abort('project named ' + name + ' is ignored')
        abort('project named ' + name + ' not found')
    return p

def library(name, fatalIfMissing=True):
    """
    Gets the library for a given name. This will abort if the named library does
    not exist and 'fatalIfMissing' is true.
    """
    l = _libs.get(name)
    if l is None and fatalIfMissing:
        abort('library named ' + name + ' not found')
    return l

def _as_classpath(deps, resolve):
    cp = []
    if _opts.cp_prefix is not None:
        cp = [_opts.cp_prefix]
    for d in deps:
        d.append_to_classpath(cp, resolve)
    if _opts.cp_suffix is not None:
        cp += [_opts.cp_suffix]
    return os.pathsep.join(cp)

def classpath(names=None, resolve=True, includeSelf=True, includeBootClasspath=False):
    """
    Get the class path for a list of given projects, resolving each entry in the
    path (e.g. downloading a missing library) if 'resolve' is true.
    """
    if names is None:
        result = _as_classpath(sorted_deps(includeLibs=True), resolve)
    else:
        deps = []
        if isinstance(names, types.StringTypes):
            project(names).all_deps(deps, True, includeSelf)
        else:
            for n in names:
                project(n).all_deps(deps, True, includeSelf)
        result = _as_classpath(deps, resolve)
    if includeBootClasspath:
        result = os.pathsep.join([java().bootclasspath(), result])
    return result

def classpath_walk(names=None, resolve=True, includeSelf=True, includeBootClasspath=False):
    """
    Walks the resources available in a given classpath, yielding a tuple for each resource
    where the first member of the tuple is a directory path or ZipFile object for a
    classpath entry and the second member is the qualified path of the resource relative
    to the classpath entry.
    """
    cp = classpath(names, resolve, includeSelf, includeBootClasspath)
    for entry in cp.split(os.pathsep):
        if not exists(entry):
            continue
        if isdir(entry):
            for root, dirs, files in os.walk(entry):
                for d in dirs:
                    entryPath = join(root[len(entry) + 1:], d)
                    yield entry, entryPath
                for f in files:
                    entryPath = join(root[len(entry) + 1:], f)
                    yield entry, entryPath
        elif entry.endswith('.jar') or entry.endswith('.zip'):
            with zipfile.ZipFile(entry, 'r') as zf:
                for zi in zf.infolist():
                    entryPath = zi.filename
                    yield zf, entryPath
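The jar/zip branch above relies on ZipFile.infolist(); a self-contained sketch using an in-memory archive (entry names invented):

```python
# Enumerate the entries of a zip the way classpath_walk does, but with an
# in-memory archive so the sketch needs no files on disk.
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as zf:
    zf.writestr('com/example/Foo.class', b'')
    zf.writestr('META-INF/MANIFEST.MF', b'Manifest-Version: 1.0\n')
with zipfile.ZipFile(buf, 'r') as zf:
    entries = [zi.filename for zi in zf.infolist()]
print(entries)
```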

def sorted_deps(projectNames=None, includeLibs=False, includeAnnotationProcessors=False):
    """
    Gets projects and libraries sorted such that dependencies
    are before the projects that depend on them. Unless 'includeLibs' is
    true, libraries are omitted from the result.
    """
    deps = []
    if projectNames is None:
        projects = _projects.values()
    else:
        projects = [project(name) for name in projectNames]

    for p in projects:
        p.all_deps(deps, includeLibs=includeLibs, includeAnnotationProcessors=includeAnnotationProcessors)
    return deps
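The ordering contract (dependencies before dependents) amounts to a postorder walk over the dependency graph; a minimal standalone sketch with an invented graph:

```python
def sort_deps_postorder(graph):
    # Return names so that each node's dependencies precede it
    # (postorder depth-first walk, as all_deps effectively performs).
    result = []
    def visit(name):
        if name in result:
            return
        for dep in graph.get(name, []):
            visit(dep)
        result.append(name)
    for name in graph:
        visit(name)
    return result

graph = {'app': ['core', 'util'], 'core': ['util'], 'util': []}
order = sort_deps_postorder(graph)
print(order)
```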

class ArgParser(ArgumentParser):

    # Override parent to append the list of available commands
    def format_help(self):
        return ArgumentParser.format_help(self) + _format_commands()

    def __init__(self):
        self.java_initialized = False
        ArgumentParser.__init__(self, prog='mx')

        self.add_argument('-v', action='store_true', dest='verbose', help='enable verbose output')
        self.add_argument('-V', action='store_true', dest='very_verbose', help='enable very verbose output')
        self.add_argument('--dbg', type=int, dest='java_dbg_port', help='make Java processes wait on <port> for a debugger', metavar='<port>')
        self.add_argument('-d', action='store_const', const=8000, dest='java_dbg_port', help='alias for "--dbg 8000"')
        self.add_argument('--cp-pfx', dest='cp_prefix', help='class path prefix', metavar='<arg>')
        self.add_argument('--cp-sfx', dest='cp_suffix', help='class path suffix', metavar='<arg>')
        self.add_argument('--J', dest='java_args', help='Java VM arguments (e.g. --J @-dsa)', metavar='@<args>', default=DEFAULT_JAVA_ARGS)
        self.add_argument('--Jp', action='append', dest='java_args_pfx', help='prefix Java VM arguments (e.g. --Jp @-dsa)', metavar='@<args>', default=[])
        self.add_argument('--Ja', action='append', dest='java_args_sfx', help='suffix Java VM arguments (e.g. --Ja @-dsa)', metavar='@<args>', default=[])
        self.add_argument('--user-home', help="user's home directory", metavar='<path>', default=os.path.expanduser('~'))
        self.add_argument('--java-home', help='JDK installation directory (must be JDK 6 or later)', metavar='<path>')
        self.add_argument('--ignore-project', action='append', dest='ignored_projects', help='name of project to ignore', metavar='<name>', default=[])
        if get_os() != 'windows':
            # Time outs are (currently) implemented with Unix specific functionality
            self.add_argument('--timeout', help='Timeout (in seconds) for command', type=int, default=0, metavar='<secs>')
            self.add_argument('--ptimeout', help='Timeout (in seconds) for subprocesses', type=int, default=0, metavar='<secs>')

    def _parse_cmd_line(self, args=None):
        if args is None:
            args = sys.argv[1:]

        self.add_argument('commandAndArgs', nargs=REMAINDER, metavar='command args...')

        opts = self.parse_args()

        # Give the timeout options a default value to avoid the need for hasattr() tests
        opts.__dict__.setdefault('timeout', 0)
        opts.__dict__.setdefault('ptimeout', 0)

        if opts.very_verbose:
            opts.verbose = True

        if opts.java_home is None:
            opts.java_home = os.environ.get('JAVA_HOME')

        if opts.java_home is None or opts.java_home == '':
            abort('Could not find Java home. Use --java-home option or ensure JAVA_HOME environment variable is set.')

        if opts.user_home is None or opts.user_home == '':
            abort('Could not find user home. Use --user-home option or ensure HOME environment variable is set.')

        os.environ['JAVA_HOME'] = opts.java_home
        os.environ['HOME'] = opts.user_home

        opts.ignored_projects = opts.ignored_projects + os.environ.get('IGNORED_PROJECTS', '').split(',')

        commandAndArgs = opts.__dict__.pop('commandAndArgs')
        return opts, commandAndArgs
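The commandAndArgs argument uses argparse's REMAINDER, which stops option parsing at the first positional token so a command's own flags pass through untouched; a small demonstration (command name invented):

```python
# REMAINDER collects the command name and everything after it verbatim;
# only the options before the first positional token belong to mx itself.
from argparse import ArgumentParser, REMAINDER

parser = ArgumentParser(prog='mx')
parser.add_argument('-v', action='store_true', dest='verbose')
parser.add_argument('commandAndArgs', nargs=REMAINDER, metavar='command args...')
opts = parser.parse_args(['-v', 'build', '--no-native'])
print(opts.verbose, opts.commandAndArgs)
```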

def _format_commands():
    msg = '\navailable commands:\n\n'
    for cmd in sorted(commands.iterkeys()):
        c, _ = commands[cmd][:2]
        doc = c.__doc__
        if doc is None:
            doc = ''
        msg += ' {0:<20} {1}\n'.format(cmd, doc.split('\n', 1)[0])
    return msg + '\n'

def java():
    """
    Get a JavaConfig object containing Java commands launch details.
    """
    assert _java is not None
    return _java

def run_java(args, nonZeroIsFatal=True, out=None, err=None, cwd=None):
    return run(java().format_cmd(args), nonZeroIsFatal=nonZeroIsFatal, out=out, err=err, cwd=cwd)

def _kill_process_group(pid):
    pgid = os.getpgid(pid)
    try:
        os.killpg(pgid, signal.SIGKILL)
        return True
    except:
        log('Error killing subprocess ' + str(pgid) + ': ' + str(sys.exc_info()[1]))
        return False

def _waitWithTimeout(process, args, timeout):
    def _waitpid(pid):
        while True:
            try:
                return os.waitpid(pid, os.WNOHANG)
            except OSError, e:
                if e.errno == errno.EINTR:
                    continue
                raise

    def _returncode(status):
        if os.WIFSIGNALED(status):
            return -os.WTERMSIG(status)
        elif os.WIFEXITED(status):
            return os.WEXITSTATUS(status)
        else:
            # Should never happen
            raise RuntimeError("Unknown child exit status!")

    end = time.time() + timeout
    delay = 0.0005
    while True:
        (pid, status) = _waitpid(process.pid)
        if pid == process.pid:
            return _returncode(status)
        remaining = end - time.time()
        if remaining <= 0:
            abort('Process timed out after {0} seconds: {1}'.format(timeout, ' '.join(args)))
        delay = min(delay * 2, remaining, .05)
        time.sleep(delay)
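The timeout loop above polls with exponential backoff, starting at 0.5 ms and capped at 50 ms. A standalone sketch of the same schedule, decoupled from `os.waitpid` (`poll_with_timeout` is an illustrative name, not part of mx):

```python
import time

def poll_with_timeout(poll, timeout):
    # Call `poll` repeatedly until it returns a non-None result or `timeout`
    # seconds elapse, doubling the sleep each round (0.5 ms start, 50 ms cap),
    # mirroring the delay schedule in _waitWithTimeout above.
    end = time.time() + timeout
    delay = 0.0005
    while True:
        result = poll()
        if result is not None:
            return result
        remaining = end - time.time()
        if remaining <= 0:
            return None  # timed out
        delay = min(delay * 2, remaining, .05)
        time.sleep(delay)
```

The backoff keeps latency low for short-lived subprocesses while avoiding a busy loop for long-running ones.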

# Makes the current subprocess accessible to the abort() function
# This is a tuple of the Popen object and args.
_currentSubprocess = None

def waitOn(p):
    if get_os() == 'windows':
        # on windows use a poll loop, otherwise signal does not get handled
        retcode = None
        while retcode == None:
            retcode = p.poll()
    else:
        retcode = p.wait()
    return retcode

def run(args, nonZeroIsFatal=True, out=None, err=None, cwd=None, timeout=None):
    """
    Run a command in a subprocess, wait for it to complete and return the exit status of the process.
    If the exit status is non-zero and `nonZeroIsFatal` is true, then mx is exited with
    the same exit status.
    Each line of the standard output and error streams of the subprocess are redirected to
    out and err if they are callable objects.
    """
    assert isinstance(args, types.ListType), "'args' must be a list: " + str(args)
    for arg in args:
        assert isinstance(arg, types.StringTypes), 'argument is not a string: ' + str(arg)

    if _opts.verbose:
        if _opts.very_verbose:
            log('Environment variables:')
            for key in sorted(os.environ.keys()):
                log('    ' + key + '=' + os.environ[key])
        log(' '.join(args))

    if timeout is None and _opts.ptimeout != 0:
        timeout = _opts.ptimeout

    global _currentSubprocess

    try:
        # On Unix, the new subprocess should be in a separate group so that a timeout alarm
        # can use os.killpg() to kill the whole subprocess group
        preexec_fn = None
        creationflags = 0
        if get_os() == 'windows':
            creationflags = subprocess.CREATE_NEW_PROCESS_GROUP
        else:
            preexec_fn = os.setsid

        if not callable(out) and not callable(err) and timeout is None:
            p = subprocess.Popen(args, cwd=cwd, preexec_fn=preexec_fn, creationflags=creationflags)
            _currentSubprocess = (p, args)
            retcode = waitOn(p)
        else:
            def redirect(stream, f):
                for line in iter(stream.readline, ''):
                    f(line)
                stream.close()
            stdout = out if not callable(out) else subprocess.PIPE
            stderr = err if not callable(err) else subprocess.PIPE
            p = subprocess.Popen(args, cwd=cwd, stdout=stdout, stderr=stderr, preexec_fn=preexec_fn, creationflags=creationflags)
            _currentSubprocess = (p, args)
            if callable(out):
                t = Thread(target=redirect, args=(p.stdout, out))
                t.daemon = True # thread dies with the program
                t.start()
            if callable(err):
                t = Thread(target=redirect, args=(p.stderr, err))
                t.daemon = True # thread dies with the program
                t.start()
            if timeout is None or timeout == 0:
                retcode = waitOn(p)
            else:
                if get_os() == 'windows':
                    abort('Use of timeout not (yet) supported on Windows')
                retcode = _waitWithTimeout(p, args, timeout)
    except OSError as e:
        log('Error executing \'' + ' '.join(args) + '\': ' + str(e))
        if _opts.verbose:
            raise e
        abort(e.errno)
    except KeyboardInterrupt:
        abort(1)
    finally:
        _currentSubprocess = None

    if retcode and nonZeroIsFatal:
        if _opts.verbose:
            if _opts.very_verbose:
                raise subprocess.CalledProcessError(retcode, ' '.join(args))
            else:
                log('[exit code: ' + str(retcode) + ']')
        abort(retcode)

    return retcode

def exe_suffix(name):
    """
    Gets the platform specific suffix for an executable
    """
    if get_os() == 'windows':
        return name + '.exe'
    return name

def lib_suffix(name):
    """
    Gets the platform specific suffix for a library
    """
    osname = get_os()
    if osname == 'windows':
        return name + '.dll'
    if osname == 'linux' or osname == 'solaris':
        return name + '.so'
    if osname == 'darwin':
        return name + '.dylib'
    return name

class JavaCompliance:
    """
    A JavaCompliance simplifies comparing Java compliance values extracted from a JDK version string.
    """
    def __init__(self, ver):
        m = re.match(r'1\.(\d+).*', ver)
        assert m is not None, 'not a recognized version string: ' + ver
        self.value = int(m.group(1))

    def __str__ (self):
        return '1.' + str(self.value)

    def __cmp__ (self, other):
        if isinstance(other, types.StringType):
            other = JavaCompliance(other)

        return cmp(self.value, other.value)
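The compliance level is just the digit(s) after the '1.' prefix of a JDK version string. A standalone sketch of the extraction JavaCompliance performs (`compliance_value` is an illustrative name, not mx API):

```python
import re

def compliance_value(ver):
    # Pull the compliance level out of a '1.x...' JDK version string using
    # the same regular expression as JavaCompliance.__init__ above.
    m = re.match(r'1\.(\d+).*', ver)
    assert m is not None, 'not a recognized version string: ' + ver
    return int(m.group(1))
```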
class JavaVersion:
    """
    A Java version as defined in JSR-56
    """
    def __init__(self, versionString):
        validChar = r'[\x21-\x25\x27-\x29\x2c\x2f-\x5e\x60-\x7f]'
        separator = r'[.\-_]'
        m = re.match(validChar + '+(' + separator + validChar + '+)*', versionString)
        assert m is not None, 'not a recognized version string: ' + versionString
        self.versionString = versionString
        self.parts = [int(f) if f.isdigit() else f for f in re.split(separator, versionString)]

    def __str__(self):
        return self.versionString

    def __cmp__(self, other):
        return cmp(self.parts, other.parts)
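Splitting the version string on the JSR-56 separators and converting numeric fields to ints means ordinary sequence comparison orders versions field by field. A standalone sketch of that rule (`version_parts` is an illustrative name):

```python
import re

def version_parts(versionString):
    # Split a JSR-56 version string on '.', '-' or '_' and convert numeric
    # fields to ints, as JavaVersion does above; comparing the resulting
    # lists orders versions field by field, so e.g. update 25 > update 4.
    return [int(f) if f.isdigit() else f for f in re.split(r'[.\-_]', versionString)]
```

Without the int conversion, '25' would compare less than '4' as a string; the field-wise numeric comparison is what makes '1.7.0_25' newer than '1.7.0_4'.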

class JavaConfig:
    """
    A JavaConfig object encapsulates info on how Java commands are run.
    """
    def __init__(self, opts):
        self.jdk = opts.java_home
        self.debug_port = opts.java_dbg_port
        self.jar = exe_suffix(join(self.jdk, 'bin', 'jar'))
        self.java = exe_suffix(join(self.jdk, 'bin', 'java'))
        self.javac = exe_suffix(join(self.jdk, 'bin', 'javac'))
        self.javap = exe_suffix(join(self.jdk, 'bin', 'javap'))
        self.javadoc = exe_suffix(join(self.jdk, 'bin', 'javadoc'))
        self._bootclasspath = None

        if not exists(self.java):
            abort('Java launcher derived from JAVA_HOME does not exist: ' + self.java)

        def delAtAndSplit(s):
            return shlex.split(s.lstrip('@'))

        self.java_args = delAtAndSplit(_opts.java_args)
        self.java_args_pfx = sum(map(delAtAndSplit, _opts.java_args_pfx), [])
        self.java_args_sfx = sum(map(delAtAndSplit, _opts.java_args_sfx), [])

        # Prepend the -d64 VM option only if the java command supports it
        try:
            output = subprocess.check_output([self.java, '-d64', '-version'], stderr=subprocess.STDOUT)
            self.java_args = ['-d64'] + self.java_args
        except subprocess.CalledProcessError as e:
            try:
                output = subprocess.check_output([self.java, '-version'], stderr=subprocess.STDOUT)
            except subprocess.CalledProcessError as e:
                print e.output
                abort(e.returncode)

        output = output.split()
        assert output[1] == 'version'
        self.version = JavaVersion(output[2].strip('"'))
        self.javaCompliance = JavaCompliance(self.version.versionString)

        if self.debug_port is not None:
            self.java_args += ['-Xdebug', '-Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=' + str(self.debug_port)]

    def format_cmd(self, args):
        return [self.java] + self.java_args_pfx + self.java_args + self.java_args_sfx + args

    def bootclasspath(self):
        if self._bootclasspath is None:
            tmpDir = tempfile.mkdtemp()
            try:
                src = join(tmpDir, 'bootclasspath.java')
                with open(src, 'w') as fp:
                    print >> fp, """
public class bootclasspath {
    public static void main(String[] args) {
        String s = System.getProperty("sun.boot.class.path");
        if (s != null) {
            System.out.println(s);
        }
    }
}"""
                subprocess.check_call([self.javac, '-d', tmpDir, src])
                self._bootclasspath = subprocess.check_output([self.java, '-cp', tmpDir, 'bootclasspath'])
            finally:
                shutil.rmtree(tmpDir)
        return self._bootclasspath

def check_get_env(key):
    """
    Gets an environment variable, aborting with a useful message if it is not set.
    """
    value = get_env(key)
    if value is None:
        abort('Required environment variable ' + key + ' must be set')
    return value

def get_env(key, default=None):
    """
    Gets an environment variable.
    """
    value = os.environ.get(key, default)
    return value

def log(msg=None):
    """
    Write a message to the console.
    All script output goes through this method thus allowing a subclass
    to redirect it.
    """
    if msg is None:
        print
    else:
        print msg

def expand_project_in_class_path_arg(cpArg):
    cp = []
    for part in cpArg.split(os.pathsep):
        if part.startswith('@'):
            cp += classpath(part[1:]).split(os.pathsep)
        else:
            cp.append(part)
    return os.pathsep.join(cp)
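The expansion above substitutes each '@project' entry with that project's classpath. A standalone sketch with the lookup injected as a parameter (`expand_at_entries` and `resolve` are illustrative names; `resolve` stands in for mx's `classpath()`):

```python
import os

def expand_at_entries(cpArg, resolve):
    # Replace each '@name' entry in an os.pathsep-separated classpath string
    # with the entries returned by `resolve(name)`; other entries pass
    # through unchanged, mirroring expand_project_in_class_path_arg above.
    cp = []
    for part in cpArg.split(os.pathsep):
        if part.startswith('@'):
            cp += resolve(part[1:]).split(os.pathsep)
        else:
            cp.append(part)
    return os.pathsep.join(cp)
```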

def expand_project_in_args(args):
    for i in range(len(args)):
        if args[i] == '-cp' or args[i] == '-classpath':
            if i + 1 < len(args):
                args[i + 1] = expand_project_in_class_path_arg(args[i + 1])

def gmake_cmd():
    for a in ['make', 'gmake', 'gnumake']:
        try:
            output = subprocess.check_output([a, '--version'])
            if 'GNU' in output:
                return a
        except:
            pass
    abort('Could not find a GNU make executable on the current path.')

def expandvars_in_property(value):
    result = expandvars(value)
    if '$' in result or '%' in result:
        abort('Property contains an undefined environment variable: ' + value)
    return result

def abort(codeOrMessage):
    """
    Aborts the program with a SystemExit exception.
    If 'codeOrMessage' is a plain integer, it specifies the system exit status;
    if it is None, the exit status is zero; if it has another type (such as a string),
    the object's value is printed and the exit status is one.
    """

    #import traceback
    currentSubprocess = _currentSubprocess
    if currentSubprocess is not None:
        p, _ = currentSubprocess
        if get_os() == 'windows':
            p.kill()
        else:
            _kill_process_group(p.pid)

    raise SystemExit(codeOrMessage)

def download(path, urls, verbose=False):
    """
    Attempts to download content for each URL in a list, stopping after the first successful download.
    If the content cannot be retrieved from any URL, the program is aborted. The downloaded content
    is written to the file indicated by 'path'.
    """
    d = dirname(path)
    if d != '' and not exists(d):
        os.makedirs(d)

    # Try it with the Java tool first since it can show a progress counter
    myDir = dirname(__file__)

    if not path.endswith(os.sep):
        javaSource = join(myDir, 'URLConnectionDownload.java')
        javaClass = join(myDir, 'URLConnectionDownload.class')
        if not exists(javaClass) or getmtime(javaClass) < getmtime(javaSource):
            subprocess.check_call([java().javac, '-d', myDir, javaSource])
        if run([java().java, '-cp', myDir, 'URLConnectionDownload', path] + urls, nonZeroIsFatal=False) == 0:
            return

    def url_open(url):
        userAgent = 'Mozilla/5.0 (compatible)'
        headers = { 'User-Agent' : userAgent }
        req = urllib2.Request(url, headers=headers)
        return urllib2.urlopen(req)

    for url in urls:
        try:
            if (verbose):
                log('Downloading ' + url + ' to ' + path)
            if url.startswith('zip:') or url.startswith('jar:'):
                i = url.find('!/')
                if i == -1:
                    abort('Zip or jar URL does not contain "!/": ' + url)
                url, _, entry = url[len('zip:'):].partition('!/')
                with contextlib.closing(url_open(url)) as f:
                    zipdata = StringIO.StringIO(f.read())

                zf = zipfile.ZipFile(zipdata, 'r')
                data = zf.read(entry)
                with open(path, 'wb') as f:
                    f.write(data)
            else:
                with contextlib.closing(url_open(url)) as f:
                    data = f.read()
                if path.endswith(os.sep):
                    # Scrape directory listing for relative URLs
                    hrefs = re.findall(r' href="([^"]*)"', data)
                    if len(hrefs) != 0:
                        for href in hrefs:
                            if not '/' in href:
                                download(join(path, href), [url + href], verbose)
                    else:
                        log('no local hrefs scraped from ' + url)
                else:
                    with open(path, 'wb') as f:
                        f.write(data)
            return
        except IOError as e:
            log('Error reading from ' + url + ': ' + str(e))
        except zipfile.BadZipfile as e:
            log('Error in zip file downloaded from ' + url + ': ' + str(e))

    abort('Could not download to ' + path + ' from any of the following URLs:\n\n    ' +
              '\n    '.join(urls) + '\n\nPlease use a web browser to do the download manually')

def update_file(path, content):
    """
    Updates a file with some given content if the content differs from what's in
    the file already. The return value indicates if the file was updated.
    """
    existed = exists(path)
    try:
        old = None
        if existed:
            with open(path, 'rb') as f:
                old = f.read()

        if old == content:
            return False

        with open(path, 'wb') as f:
            f.write(content)

        log(('modified ' if existed else 'created ') + path)
        return True
    except IOError as e:
        abort('Error while writing to ' + path + ': ' + str(e))

# Builtin commands

def build(args, parser=None):
    """compile the Java and C sources, linking the latter

    Compile all the Java source code using the appropriate compilers
    and linkers for the various source code types."""

    suppliedParser = parser is not None
    if not suppliedParser:
        parser = ArgumentParser(prog='mx build')

    javaCompliance = java().javaCompliance

    defaultEcjPath = join(_mainSuite.dir, 'mx', 'ecj.jar')

    parser.add_argument('-f', action='store_true', dest='force', help='force build (disables timestamp checking)')
    parser.add_argument('-c', action='store_true', dest='clean', help='removes existing build output')
    parser.add_argument('--source', dest='compliance', help='Java compliance level', default=str(javaCompliance))
    parser.add_argument('--Wapi', action='store_true', dest='warnAPI', help='show warnings about using internal APIs')
    parser.add_argument('--projects', action='store', help='comma separated projects to build (omit to build all projects)')
    parser.add_argument('--only', action='store', help='comma separated projects to build, without checking their dependencies (omit to build all projects)')
    parser.add_argument('--no-java', action='store_false', dest='java', help='do not build Java projects')
    parser.add_argument('--no-native', action='store_false', dest='native', help='do not build native projects')
    parser.add_argument('--jdt', help='Eclipse installation or path to ecj.jar for using the Eclipse batch compiler (default: ' + defaultEcjPath + ')', default=defaultEcjPath, metavar='<path>')
    parser.add_argument('--jdt-warning-as-error', action='store_true', help='convert all Eclipse batch compiler warnings to errors')

    if suppliedParser:
        parser.add_argument('remainder', nargs=REMAINDER, metavar='...')

    args = parser.parse_args(args)
    jdtJar = None
    if args.jdt is not None:
        if args.jdt.endswith('.jar'):
            jdtJar = args.jdt
            if not exists(jdtJar) and os.path.abspath(jdtJar) == os.path.abspath(defaultEcjPath):
                # Silently ignore JDT if the default location is used but no ecj.jar exists there
                jdtJar = None
        elif isdir(args.jdt):
            plugins = join(args.jdt, 'plugins')
            choices = [f for f in os.listdir(plugins) if fnmatch.fnmatch(f, 'org.eclipse.jdt.core_*.jar')]
            if len(choices) != 0:
                jdtJar = join(plugins, sorted(choices, reverse=True)[0])

    built = set()

    projects = None
    if args.projects is not None:
        projects = args.projects.split(',')

    if args.only is not None:
        sortedProjects = [project(name) for name in args.only.split(',')]
    else:
        sortedProjects = sorted_deps(projects, includeAnnotationProcessors=True)
    for p in sortedProjects:
        if p.native:
            if args.native:
                log('Calling GNU make {0}...'.format(p.dir))

                if args.clean:
                    run([gmake_cmd(), 'clean'], cwd=p.dir)

                run([gmake_cmd()], cwd=p.dir)
                built.add(p.name)
            continue
        else:
            if not args.java:
                continue
            if exists(join(p.dir, 'plugin.xml')): # eclipse plugin project
                continue

        # skip building this Java project if its Java compliance level is "higher" than the configured JDK
        if javaCompliance < p.javaCompliance:
            log('Excluding {0} from build (Java compliance level {1} required)'.format(p.name, p.javaCompliance))
            continue

        outputDir = p.output_dir()
        if exists(outputDir):
            if args.clean:
                log('Cleaning {0}...'.format(outputDir))
                shutil.rmtree(outputDir)
                os.mkdir(outputDir)
        else:
            os.mkdir(outputDir)

        cp = classpath(, includeSelf=True)
        sourceDirs = p.source_dirs()
        mustBuild = args.force
        if not mustBuild:
            for dep in p.all_deps([], False):
                if dep.name in built:
                    mustBuild = True

        jasminAvailable = None
        javafilelist = []
        for sourceDir in sourceDirs:
            for root, _, files in os.walk(sourceDir):
                javafiles = [join(root, name) for name in files if name.endswith('.java') and name != '']
                javafilelist += javafiles

                # Copy all non Java resources or assemble Jasmin files
                nonjavafilelist = [join(root, name) for name in files if not name.endswith('.java')]
                for src in nonjavafilelist:
                    if src.endswith('.jasm'):
                        className = None
                        with open(src) as f:
                            for line in f:
                                if line.startswith('.class '):
                                    className = line.split()[-1]

                        if className is not None:
                            jasminOutputDir = p.jasmin_output_dir()
                            classFile = join(jasminOutputDir, className.replace('/', os.sep) + '.class')
                            if exists(dirname(classFile)) and (not exists(classFile) or os.path.getmtime(classFile) < os.path.getmtime(src)):
                                if jasminAvailable is None:
                                    try:
                                        with open(os.devnull) as devnull:
                                            subprocess.call('jasmin', stdout=devnull, stderr=subprocess.STDOUT)
                                        jasminAvailable = True
                                    except OSError:
                                        jasminAvailable = False

                                if jasminAvailable:
                                    log('Assembling Jasmin file ' + src)
                                    run(['jasmin', '-d', jasminOutputDir, src])
                                else:
                                    log('The jasmin executable could not be found - skipping ' + src)
                                    # Touch the class file so this check is not retried on every build
                                    with file(classFile, 'a'):
                                        os.utime(classFile, None)
                        else:
                            log('could not find .class directive in Jasmin source: ' + src)
                    else:
                        dst = join(outputDir, src[len(sourceDir) + 1:])
                        if not exists(dirname(dst)):
                            os.makedirs(dirname(dst))
                        if exists(dirname(dst)) and (not exists(dst) or os.path.getmtime(dst) < os.path.getmtime(src)):
                            shutil.copyfile(src, dst)

                if not mustBuild:
                    for javafile in javafiles:
                        classfile = outputDir + javafile[len(sourceDir):-len('java')] + 'class'
                        if not exists(classfile) or os.path.getmtime(javafile) > os.path.getmtime(classfile):
                            mustBuild = True

        if not mustBuild:
            log('[all class files for {0} are up to date - skipping]'.format(p.name))
            continue

        if len(javafilelist) == 0:
            log('[no Java sources for {0} - skipping]'.format(p.name))
            continue

        built.add(p.name)

        argfileName = join(p.dir, 'javafilelist.txt')
        argfile = open(argfileName, 'wb')
        argfile.write('\n'.join(javafilelist))
        argfile.close()

        javacArgs = []
        if java().debug_port is not None:
            javacArgs += ['-J-Xdebug', '-J-Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=' + str(java().debug_port)]

        if hasattr(p, 'annotationProcessors') and len(p.annotationProcessors) > 0:
            processorPath = classpath(p.annotationProcessors, resolve=True)
            genDir = p.source_gen_dir()
            if exists(genDir):
                shutil.rmtree(genDir)
            os.mkdir(genDir)
            javacArgs += ['-processorpath', join(processorPath), '-s', genDir]
        else:
            javacArgs += ['-proc:none']

        toBeDeleted = [argfileName]
        try:
            if jdtJar is None:
                log('Compiling Java sources for {0} with javac...'.format(p.name))
                javacCmd = [java().javac, '-g', '-J-Xmx1g', '-source', args.compliance, '-classpath', cp, '-d', outputDir] + javacArgs + ['@' + argfileName]
                if not args.warnAPI:
                    javacCmd.append('-XDignore.symbol.file')
                run(javacCmd)
            else:
                log('Compiling Java sources for {0} with JDT...'.format(p.name))
                jdtArgs = [java().java, '-Xmx1g', '-jar', jdtJar,
                         '-' + args.compliance,
                         '-cp', cp, '-g', '-enableJavadoc',
                         '-d', outputDir] + javacArgs
                jdtProperties = join(p.dir, '.settings', 'org.eclipse.jdt.core.prefs')
                rootJdtProperties = join(p.suite.dir, 'mx', 'eclipse-settings', 'org.eclipse.jdt.core.prefs')
                if not exists(jdtProperties) or os.path.getmtime(jdtProperties) < os.path.getmtime(rootJdtProperties):
                    # Try to fix a missing properties file by running eclipseinit
                    eclipseinit([], buildProcessorJars=False)
                if not exists(jdtProperties):
                    log('JDT properties file {0} not found'.format(jdtProperties))
                else:
                    # convert all warnings to errors
                    if args.jdt_warning_as_error:
                        jdtPropertiesTmp = jdtProperties + '.tmp'
                        with open(jdtProperties) as fp:
                            content = fp.read().replace('=warning', '=error')
                        with open(jdtPropertiesTmp, 'w') as fp:
                            fp.write(content)
                        toBeDeleted.append(jdtPropertiesTmp)
                        jdtArgs += ['-properties', jdtPropertiesTmp]
                    else:
                        jdtArgs += ['-properties', jdtProperties]
                jdtArgs.append('@' + argfileName)
                run(jdtArgs)
        finally:
            for n in toBeDeleted:
                os.remove(n)

    for dist in _dists.values():
        archive(['@' + dist.name])

    if suppliedParser:
        return args
    return None

def eclipseformat(args):
    """run the Eclipse Code Formatter on the Java sources

    The exit code 1 denotes that at least one file was modified."""

    parser = ArgumentParser(prog='mx eclipseformat')
    parser.add_argument('-e', '--eclipse-exe', help='location of the Eclipse executable')
    parser.add_argument('-C', '--no-backup', action='store_false', dest='backup', help='do not save backup of modified files')
    parser.add_argument('--projects', action='store', help='comma separated projects to process (omit to process all projects)')
    args = parser.parse_args(args)
    if args.eclipse_exe is None:
        args.eclipse_exe = os.environ.get('ECLIPSE_EXE')
    if args.eclipse_exe is None:
        abort('Could not find Eclipse executable. Use -e option or ensure ECLIPSE_EXE environment variable is set.')

    eclipseinit([], buildProcessorJars=False)

    # build list of projects to be processed
    projects = sorted_deps()
    if args.projects is not None:
        projects = [project(name) for name in args.projects.split(',')]

    class Batch:
        def __init__(self, settingsFile):
            self.path = settingsFile
            self.javafiles = list()

        def settings(self):
            with open(self.path) as fp:
                return fp.read()
    class FileInfo:
        def __init__(self, path):
            self.path = path
            with open(path) as fp:
                self.content =
            self.times = (os.path.getatime(path), os.path.getmtime(path))

        def update(self):
            with open(self.path) as fp:
                content = fp.read()
                if self.content != content:
                    self.content = content
                    return True
            os.utime(self.path, self.times)
            return False
    modified = list()
    batches = dict() # all sources with the same formatting settings are formatted together
    for p in projects:
        if p.native:
            continue
        sourceDirs = p.source_dirs()
        batch = Batch(join(p.dir, '.settings', 'org.eclipse.jdt.core.prefs'))

        if not exists(batch.path):
            log('[no Eclipse Code Formatter preferences at {0} - skipping]'.format(batch.path))
            continue

        for sourceDir in sourceDirs:
            for root, _, files in os.walk(sourceDir):
                for f in [join(root, name) for name in files if name.endswith('.java')]:
                    batch.javafiles.append(FileInfo(f))
        if len(batch.javafiles) == 0:
            log('[no Java sources in {0} - skipping]'.format(p.name))
            continue

        res = batches.setdefault(batch.settings(), batch)
        if res is not batch:
            res.javafiles = res.javafiles + batch.javafiles

    for batch in batches.itervalues():
        run([args.eclipse_exe, '-nosplash', '-application', 'org.eclipse.jdt.core.JavaCodeFormatter', '-config', batch.path] + [f.path for f in batch.javafiles])
        for fi in batch.javafiles:
            if fi.update():
                modified.append(fi)

    log('{0} files were modified'.format(len(modified)))
    if len(modified) != 0:
        if args.backup:
            backup = os.path.abspath('')
            arcbase = _mainSuite.dir
            zf = zipfile.ZipFile(backup, 'w', zipfile.ZIP_DEFLATED)
            for fi in modified:
                arcname = os.path.relpath(fi.path, arcbase).replace(os.sep, '/')
                zf.writestr(arcname, fi.content)
            log('Wrote backup of {0} modified files to {1}'.format(len(modified), backup))
        return 1
    return 0

def processorjars():
    projects = set()
    for p in sorted_deps():
        if _isAnnotationProcessorDependency(p):
            projects.add(p)

    if len(projects) <= 0:
        return

    pnames = [p.name for p in projects]
    build(['--projects', ",".join(pnames)])

def archive(args):
    """create jar files for projects and distributions"""
    parser = ArgumentParser(prog='mx archive')
    parser.add_argument('names', nargs=REMAINDER, metavar='[<project>|@<distribution>]...')
    args = parser.parse_args(args)
    for name in args.names:
        if name.startswith('@'):
            dname = name[1:]
            d = distribution(dname)
            fd, tmp = tempfile.mkstemp(suffix='', prefix=basename(d.path) + '.', dir=dirname(d.path))
            zf = zipfile.ZipFile(tmp, 'w')
            for p in sorted_deps(d.deps):
                outputDir = p.output_dir()
                for root, _, files in os.walk(outputDir):
                    for f in files:
                        relpath = root[len(outputDir) + 1:]
                        arcname = join(relpath, f).replace(os.sep, '/')
                        zf.write(join(root, f), arcname)
            zf.close()
            os.close(fd)
            # Atomic on Unix
            shutil.move(tmp, d.path)
            #print time.time(), 'move:', tmp, '->', d.path
        else:
            p = project(name)
            outputDir = p.output_dir()
            fd, tmp = tempfile.mkstemp(suffix='', prefix=p.name + '.', dir=p.dir)
            zf = zipfile.ZipFile(tmp, 'w')
            for root, _, files in os.walk(outputDir):
                for f in files:
                    relpath = root[len(outputDir) + 1:]
                    arcname = join(relpath, f).replace(os.sep, '/')
                    zf.write(join(root, f), arcname)
            zf.close()
            os.close(fd)
            # Atomic on Unix
            shutil.move(tmp, join(p.dir, p.name + '.jar'))

def canonicalizeprojects(args):
    """process all project files to canonicalize the dependencies

    The exit code of this command reflects how many files were updated."""

    changedFiles = 0
    for s in suites():
        projectsFile = join(s.dir, 'mx', 'projects')
        if not exists(projectsFile):
            continue
        with open(projectsFile) as f:
            out = StringIO.StringIO()
            pattern = re.compile('project@([^@]+)@dependencies=.*')
            lineNo = 1
            for line in f:
                line = line.strip()
                m = pattern.match(line)
                if m is None:
                    out.write(line + '\n')
                else:
                    p = project(m.group(1))
                    for pkg in p.defined_java_packages():
                        if not pkg.startswith(p.name):
                            abort('package in {0} does not have prefix matching project name: {1}'.format(p, pkg))
                    ignoredDeps = set([name for name in p.deps if project(name, False) is not None])
                    for pkg in p.imported_java_packages():
                        for name in p.deps:
                            dep = project(name, False)
                            if dep is None:
                                ignoredDeps.discard(name)
                            else:
                                if pkg in dep.defined_java_packages():
                                    ignoredDeps.discard(name)
                                if pkg in dep.extended_java_packages():
                                    ignoredDeps.discard(name)
                    if len(ignoredDeps) != 0:
                        candidates = set()
                        # Compute dependencies based on projects required by p
                        for d in sorted_deps():
                            if not d.defined_java_packages().isdisjoint(p.imported_java_packages()):
                                candidates.add(d)
                        # Remove non-canonical candidates
                        for c in list(candidates):
                            candidates.difference_update(c.all_deps([], False, False))
                        candidates = [d.name for d in candidates]
                        abort('{0}:{1}: {2} does not use any packages defined in these projects: {3}\nComputed project dependencies: {4}'.format(
                            projectsFile, lineNo, p, ', '.join(ignoredDeps), ','.join(candidates)))
                    out.write('project@' + m.group(1) + '@dependencies=' + ','.join(p.canonical_deps()) + '\n')
                lineNo = lineNo + 1
            content = out.getvalue()
        if update_file(projectsFile, content):
            changedFiles += 1
    return changedFiles

def checkstyle(args):
    """run Checkstyle on the Java sources

   Run Checkstyle over the Java sources. Any errors or warnings
   produced by Checkstyle result in a non-zero exit code."""

    for p in sorted_deps():
        if p.native:
            continue
        sourceDirs = p.source_dirs()
        dotCheckstyle = join(p.dir, '.checkstyle')

        if not exists(dotCheckstyle):
            continue

        for sourceDir in sourceDirs:
            javafilelist = []
            for root, _, files in os.walk(sourceDir):
                javafilelist += [join(root, name) for name in files if name.endswith('.java') and name != '']
            if len(javafilelist) == 0:
                log('[no Java sources in {0} - skipping]'.format(sourceDir))
                continue

            timestampFile = join(p.suite.dir, 'mx', 'checkstyle-timestamps', sourceDir[len(p.suite.dir) + 1:].replace(os.sep, '_') + '.timestamp')
            if not exists(dirname(timestampFile)):
                os.makedirs(dirname(timestampFile))
            mustCheck = False
            if exists(timestampFile):
                timestamp = os.path.getmtime(timestampFile)
                for f in javafilelist:
                    if os.path.getmtime(f) > timestamp:
                        mustCheck = True
                        break
            else:
                mustCheck = True

            if not mustCheck:
                log('[all Java sources in {0} already checked - skipping]'.format(sourceDir))
                continue

            if exists(timestampFile):
                os.utime(timestampFile, None)
            else:
                file(timestampFile, 'a')

            dotCheckstyleXML = xml.dom.minidom.parse(dotCheckstyle)
            localCheckConfig = dotCheckstyleXML.getElementsByTagName('local-check-config')[0]
            configLocation = localCheckConfig.getAttribute('location')
            configType = localCheckConfig.getAttribute('type')
            if configType == 'project':
                # Eclipse plugin "Project Relative Configuration" format:
                #  '/<project_name>/<suffix>'
                if configLocation.startswith('/'):
                    name, _, suffix = configLocation.lstrip('/').partition('/')
                    config = join(project(name).dir, suffix)
                else:
                    config = join(p.dir, configLocation)
            else:
                log('[unknown Checkstyle configuration type "' + configType + '" in {0} - skipping]'.format(sourceDir))
                continue

            exclude = join(p.dir, '.checkstyle.exclude')

            if exists(exclude):
                with open(exclude) as f:
                    # Convert patterns to OS separators
                    patterns = [name.rstrip().replace('/', os.sep) for name in f.readlines()]
                def match(name):
                    for p in patterns:
                        if p in name:
                            if _opts.verbose:
                                log('excluding: ' + name)
                            return True
                    return False

                javafilelist = [name for name in javafilelist if not match(name)]

            auditfileName = join(p.dir, 'checkstyleOutput.txt')
            log('Running Checkstyle on {0} using {1}...'.format(sourceDir, config))

            try:

                # Checkstyle is unable to read the filenames to process from a file, and the
                # CreateProcess function on Windows limits the length of a command line to
                # 32,768 characters (see the CreateProcess documentation on MSDN)
                # so calling Checkstyle must be done in batches.
                while len(javafilelist) != 0:
                    i = 0
                    size = 0
                    while i < len(javafilelist):
                        s = len(javafilelist[i]) + 1
                        if (size + s < 30000):
                            size += s
                            i += 1
                        else:
                            break

                    batch = javafilelist[:i]
                    javafilelist = javafilelist[i:]
                    try:
                        run_java(['-Xmx1g', '-jar', library('CHECKSTYLE').get_path(True), '-c', config, '-o', auditfileName] + batch)
                    finally:
                        if exists(auditfileName):
                            with open(auditfileName) as f:
                                warnings = [line.strip() for line in f if 'warning:' in line]
                                if len(warnings) != 0:
                                    map(log, warnings)
                                    return 1
            finally:
                if exists(auditfileName):
                    os.unlink(auditfileName)
    return 0
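
# Illustrative sketch (not part of mx): the loop in checkstyle() above packs file
# names greedily into batches so that each Checkstyle invocation stays below a fixed
# command-line length budget, counting one separator character per name. The helper
# name _batch_files is hypothetical and exists only to demonstrate that logic in
# isolation; like the original loop, it assumes each name is shorter than the limit.
def _batch_files(filenames, limit=30000):
    batches = []
    while len(filenames) != 0:
        i = 0
        size = 0
        while i < len(filenames):
            s = len(filenames[i]) + 1
            if size + s < limit:
                # this file still fits in the current batch
                size += s
                i += 1
            else:
                break
        batches.append(filenames[:i])
        filenames = filenames[i:]
    return batches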

def clean(args, parser=None):
    """remove all class files, images, and executables

    Removes all files created by a build, including Java class files, executables, and
    generated images."""

    suppliedParser = parser is not None

    parser = parser if suppliedParser else ArgumentParser(prog='mx clean')
    parser.add_argument('--no-native', action='store_false', dest='native', help='do not clean native projects')
    parser.add_argument('--no-java', action='store_false', dest='java', help='do not clean Java projects')

    args = parser.parse_args(args)

    for p in projects():
        if p.native:
            if args.native:
                run([gmake_cmd(), '-C', p.dir, 'clean'])
        else:
            if
                genDir = p.source_gen_dir()
                if genDir != '' and exists(genDir):
                    log('Clearing {0}...'.format(genDir))
                    for f in os.listdir(genDir):
                        shutil.rmtree(join(genDir, f))

                outputDir = p.output_dir()
                if outputDir != '' and exists(outputDir):
                    log('Removing {0}...'.format(outputDir))
                    shutil.rmtree(outputDir)

    if suppliedParser:
        return args

def about(args):
    """show the 'man page' for mx"""
    print __doc__

def help_(args):
    """show help for a given command

With no arguments, print a list of commands and short help for each command.

Given a command name, print help for that command."""
    if len(args) == 0:
        _argParser.print_help()
        return

    name = args[0]
    if not commands.has_key(name):
        hits = [c for c in commands.iterkeys() if c.startswith(name)]
        if len(hits) == 1:
            name = hits[0]
        elif len(hits) == 0:
            abort('mx: unknown command \'{0}\'\n{1}use "mx help" for more options'.format(name, _format_commands()))
        else:
            abort('mx: command \'{0}\' is ambiguous\n    {1}'.format(name, ' '.join(hits)))

    value = commands[name]
    (func, usage) = value[:2]
    doc = func.__doc__
    if len(value) > 2:
        docArgs = value[2:]
        fmtArgs = []
        for d in docArgs:
            if isinstance(d, Callable):
                fmtArgs += [d()]
            else:
                fmtArgs += [str(d)]
        doc = doc.format(*fmtArgs)
    print 'mx {0} {1}\n\n{2}\n'.format(name, usage, doc)

def projectgraph(args, suite=None):
    """create dot graph for project structure ("mx projectgraph | dot -Tpdf -oprojects.pdf")"""

    print 'digraph projects {'
    print 'rankdir=BT;'
    print 'node [shape=rect];'
    for p in projects():
        for dep in p.canonical_deps():
            print '"' + + '"->"' + dep + '"'
    print '}'

def _source_locator_memento(deps):
    slm = XMLDoc()'sourceLookupDirector')'sourceContainers', {'duplicates' : 'false'})

    # Every Java program depends on the JRE
    memento = XMLDoc().element('classpathContainer', {'path' : 'org.eclipse.jdt.launching.JRE_CONTAINER'}).xml()
    slm.element('classpathContainer', {'memento' : memento, 'typeId':'org.eclipse.jdt.launching.sourceContainer.classpathContainer'})

    for dep in deps:
        if dep.isLibrary():
            if hasattr(dep, 'eclipse.container'):
                memento = XMLDoc().element('classpathContainer', {'path' : getattr(dep, 'eclipse.container')}).xml()
                slm.element('classpathContainer', {'memento' : memento, 'typeId':'org.eclipse.jdt.launching.sourceContainer.classpathContainer'})
        else:
            memento = XMLDoc().element('javaProject', {'name' :}).xml()
            slm.element('container', {'memento' : memento, 'typeId':'org.eclipse.jdt.launching.sourceContainer.javaProject'})

    slm.close('sourceContainers')
    slm.close('sourceLookupDirector')
    return slm

def make_eclipse_attach(hostname, port, name=None, deps=[]):
    """
    Creates an Eclipse launch configuration file for attaching to a Java process.
    """
    slm = _source_locator_memento(deps)
    launch = XMLDoc()'launchConfiguration', {'type' : 'org.eclipse.jdt.launching.remoteJavaApplication'})
    launch.element('stringAttribute', {'key' : 'org.eclipse.debug.core.source_locator_id', 'value' : 'org.eclipse.jdt.launching.sourceLocator.JavaSourceLookupDirector'})
    launch.element('stringAttribute', {'key' : 'org.eclipse.debug.core.source_locator_memento', 'value' : '%s'})
    launch.element('booleanAttribute', {'key' : 'org.eclipse.jdt.launching.ALLOW_TERMINATE', 'value' : 'true'})'mapAttribute', {'key' : 'org.eclipse.jdt.launching.CONNECT_MAP'})
    launch.element('mapEntry', {'key' : 'hostname', 'value' : hostname})
    launch.element('mapEntry', {'key' : 'port', 'value' : port})
    launch.close('mapAttribute')
    launch.element('stringAttribute', {'key' : 'org.eclipse.jdt.launching.PROJECT_ATTR', 'value' : ''})
    launch.element('stringAttribute', {'key' : 'org.eclipse.jdt.launching.VM_CONNECTOR_ID', 'value' : 'org.eclipse.jdt.launching.socketAttachConnector'})
    launch = launch.xml(newl='\n') % slm.xml(escape=True)

    if name is None:
        name = 'attach-' + hostname + '-' + port
    eclipseLaunches = join('mx', 'eclipse-launches')
    if not exists(eclipseLaunches):
        os.makedirs(eclipseLaunches)
    return update_file(join(eclipseLaunches, name + '.launch'), launch)

def make_eclipse_launch(javaArgs, jre, name=None, deps=[]):
    """
    Creates an Eclipse launch configuration file for running/debugging a Java command.
    """
    mainClass = None
    vmArgs = []
    appArgs = []
    cp = None
    argsCopy = list(reversed(javaArgs))
    while len(argsCopy) != 0:
        a = argsCopy.pop()
        if a == '-jar':
            mainClass = '-jar'
            appArgs = list(reversed(argsCopy))
            break
        elif a == '-cp' or a == '-classpath':
            assert len(argsCopy) != 0
            cp = argsCopy.pop()
        elif a.startswith('-'):
            vmArgs += [a]
        else:
            mainClass = a
            appArgs = list(reversed(argsCopy))
            break

    if mainClass is None:
        log('Cannot create Eclipse launch configuration without main class or jar file: java ' + ' '.join(javaArgs))
        return False

    if name is None:
        if mainClass == '-jar':
            name = basename(appArgs[0])
            if len(appArgs) > 1 and not appArgs[1].startswith('-'):
                name = name + '_' + appArgs[1]
        else:
            name = mainClass
        name = time.strftime('%Y-%m-%d-%H%M%S_' + name)

    if cp is not None:
        for e in cp.split(os.pathsep):
            for s in suites():
                deps += [p for p in s.projects if e == p.output_dir()]
                deps += [l for l in s.libs if e == l.get_path(False)]

    slm = _source_locator_memento(deps)

    launch = XMLDoc()'launchConfiguration', {'type' : 'org.eclipse.jdt.launching.localJavaApplication'})
    launch.element('stringAttribute', {'key' : 'org.eclipse.debug.core.source_locator_id', 'value' : 'org.eclipse.jdt.launching.sourceLocator.JavaSourceLookupDirector'})
    launch.element('stringAttribute', {'key' : 'org.eclipse.debug.core.source_locator_memento', 'value' : '%s'})
    launch.element('stringAttribute', {'key' : 'org.eclipse.jdt.launching.JRE_CONTAINER', 'value' : 'org.eclipse.jdt.launching.JRE_CONTAINER/org.eclipse.jdt.internal.debug.ui.launcher.StandardVMType/' + jre})
    launch.element('stringAttribute', {'key' : 'org.eclipse.jdt.launching.MAIN_TYPE', 'value' : mainClass})
    launch.element('stringAttribute', {'key' : 'org.eclipse.jdt.launching.PROGRAM_ARGUMENTS', 'value' : ' '.join(appArgs)})
    launch.element('stringAttribute', {'key' : 'org.eclipse.jdt.launching.PROJECT_ATTR', 'value' : ''})
    launch.element('stringAttribute', {'key' : 'org.eclipse.jdt.launching.VM_ARGUMENTS', 'value' : ' '.join(vmArgs)})
    launch = launch.xml(newl='\n') % slm.xml(escape=True)

    eclipseLaunches = join('mx', 'eclipse-launches')
    if not exists(eclipseLaunches):
        os.makedirs(eclipseLaunches)
    return update_file(join(eclipseLaunches, name + '.launch'), launch)
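
# Illustrative sketch (not part of mx): make_eclipse_launch() above scans a java
# command line from left to right to split it into VM arguments, a main class
# (or '-jar'), the class path, and the application arguments. The helper name
# _split_java_command is hypothetical and only demonstrates that scan in isolation.
def _split_java_command(javaArgs):
    mainClass = None
    vmArgs = []
    appArgs = []
    cp = None
    argsCopy = list(reversed(javaArgs))
    while len(argsCopy) != 0:
        a = argsCopy.pop()
        if a == '-jar':
            # everything after -jar belongs to the application
            mainClass = '-jar'
            appArgs = list(reversed(argsCopy))
            break
        elif a == '-cp' or a == '-classpath':
            cp = argsCopy.pop()
        elif a.startswith('-'):
            vmArgs.append(a)
        else:
            # first non-option token is the main class; the rest are app arguments
            mainClass = a
            appArgs = list(reversed(argsCopy))
            break
    return mainClass, vmArgs, cp, appArgs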

def eclipseinit(args, suite=None, buildProcessorJars=True):
    """(re)generate Eclipse project configurations"""

    if suite is None:
        suite = _mainSuite
    if buildProcessorJars:
        _processorjars()

    projToDist = dict()
    for dist in _dists.values():
        distDeps = sorted_deps(dist.deps)
        for p in distDeps:
            projToDist[] = (dist, [ for dep in distDeps])

    for p in projects():
        if p.native:
            continue

        if not exists(p.dir):
            os.makedirs(p.dir)

        out = XMLDoc()'classpath')

        for src in p.srcDirs:
            srcDir = join(p.dir, src)
            if not exists(srcDir):
                os.mkdir(srcDir)
            out.element('classpathentry', {'kind' : 'src', 'path' : src})

        if hasattr(p, 'annotationProcessors') and len(p.annotationProcessors) > 0:
            genDir = p.source_gen_dir()
            if not exists(genDir):
                os.mkdir(genDir)
            out.element('classpathentry', {'kind' : 'src', 'path' : 'src_gen'})

        # Every Java program depends on the JRE
        out.element('classpathentry', {'kind' : 'con', 'path' : 'org.eclipse.jdt.launching.JRE_CONTAINER'})

        if exists(join(p.dir, 'plugin.xml')): # eclipse plugin project
            out.element('classpathentry', {'kind' : 'con', 'path' : 'org.eclipse.pde.core.requiredPlugins'})

        for dep in p.all_deps([], True):
            if dep == p:
                continue

            if dep.isLibrary():
                if hasattr(dep, 'eclipse.container'):
                    out.element('classpathentry', {'exported' : 'true', 'kind' : 'con', 'path' : getattr(dep, 'eclipse.container')})
                elif hasattr(dep, 'eclipse.project'):
                    out.element('classpathentry', {'combineaccessrules' : 'false', 'exported' : 'true', 'kind' : 'src', 'path' : '/' + getattr(dep, 'eclipse.project')})
                else:
                    path = dep.path
                    if dep.mustExist:
                        if not isabs(path):
                            # Relative paths for "lib" class path entries have various semantics depending on the Eclipse
                            # version being used (e.g. see so it's
                            # safest to simply use absolute paths.
                            path = join(suite.dir, path)

                        attributes = {'exported' : 'true', 'kind' : 'lib', 'path' : path}

                        sourcePath = dep.get_source_path(resolve=True)
                        if sourcePath is not None:
                            attributes['sourcepath'] = sourcePath
                        out.element('classpathentry', attributes)
            else:
                out.element('classpathentry', {'combineaccessrules' : 'false', 'exported' : 'true', 'kind' : 'src', 'path' : '/' +})

        out.element('classpathentry', {'kind' : 'output', 'path' : getattr(p, 'eclipse.output', 'bin')})
        update_file(join(p.dir, '.classpath'), out.xml(indent='\t', newl='\n'))

        csConfig = join(project(p.checkstyleProj).dir, '.checkstyle_checks.xml')
        if exists(csConfig):
            out = XMLDoc()

            dotCheckstyle = join(p.dir, ".checkstyle")
            checkstyleConfigPath = '/' + p.checkstyleProj + '/.checkstyle_checks.xml'
  'fileset-config', {'file-format-version' : '1.2.0', 'simple-config' : 'true'})
  'local-check-config', {'name' : 'Checks', 'location' : checkstyleConfigPath, 'type' : 'project', 'description' : ''})
            out.element('additional-data', {'name' : 'protect-config-file', 'value' : 'false'})
            out.close('local-check-config')
  'fileset', {'name' : 'all', 'enabled' : 'true', 'check-config-name' : 'Checks', 'local' : 'true'})
            out.element('file-match-pattern', {'match-pattern' : '.', 'include-pattern' : 'true'})
            out.close('fileset')
  'filter', {'name' : 'all', 'enabled' : 'true', 'check-config-name' : 'Checks', 'local' : 'true'})
            out.element('filter-data', {'value' : 'java'})
            out.close('filter')

            exclude = join(p.dir, '.checkstyle.exclude')
            if exists(exclude):
      'filter', {'name' : 'FilesFromPackage', 'enabled' : 'true'})
                with open(exclude) as f:
                    for line in f:
                        if not line.startswith('#'):
                            line = line.strip()
                            exclDir = join(p.dir, line)
                            assert isdir(exclDir), 'excluded source directory listed in ' + exclude + ' does not exist or is not a directory: ' + exclDir
                            out.element('filter-data', {'value' : line})
                out.close('filter')

            out.close('fileset-config')
            update_file(dotCheckstyle, out.xml(indent='  ', newl='\n'))

        out = XMLDoc()'projectDescription')
        out.element('name',
        out.element('comment', data='')
        out.element('projects', data='')'buildSpec')'buildCommand')
        out.element('name', data='org.eclipse.jdt.core.javabuilder')
        out.element('arguments', data='')
        out.close('buildCommand')
        if exists(csConfig):
  'buildCommand')
            out.element('name', data='net.sf.eclipsecs.core.CheckstyleBuilder')
            out.element('arguments', data='')
            out.close('buildCommand')
        if exists(join(p.dir, 'plugin.xml')): # eclipse plugin project
            for buildCommand in ['org.eclipse.pde.ManifestBuilder', 'org.eclipse.pde.SchemaBuilder']:
      'buildCommand')
                out.element('name', data=buildCommand)
                out.element('arguments', data='')
                out.close('buildCommand')

        if _isAnnotationProcessorDependency(p):
            _genEclipseBuilder(out, p, 'Jar.launch', 'archive ' +, refresh=False, async=False)
            _genEclipseBuilder(out, p, 'Refresh.launch', '', refresh = True, async = True)
        if projToDist.has_key(
            dist, distDeps = projToDist[]
            _genEclipseBuilder(out, p, 'Create' + + 'Dist.launch', 'archive @' +, refresh=False, async=True)
        out.close('buildSpec')'natures')
        out.element('nature', data='org.eclipse.jdt.core.javanature')
        if exists(csConfig):
            out.element('nature', data='net.sf.eclipsecs.core.CheckstyleNature')
        if exists(join(p.dir, 'plugin.xml')): # eclipse plugin project
            out.element('nature', data='org.eclipse.pde.PluginNature')
        out.close('natures')
        out.close('projectDescription')
        update_file(join(p.dir, '.project'), out.xml(indent='\t', newl='\n'))

        settingsDir = join(p.dir, ".settings")
        if not exists(settingsDir):
            os.mkdir(settingsDir)

        eclipseSettingsDir = join(suite.dir, 'mx', 'eclipse-settings')
        if exists(eclipseSettingsDir):
            for name in os.listdir(eclipseSettingsDir):
                if name == "org.eclipse.jdt.apt.core.prefs" and not (hasattr(p, 'annotationProcessors') and len(p.annotationProcessors) > 0):
                    continue
                path = join(eclipseSettingsDir, name)
                if isfile(path):
                    with open(join(eclipseSettingsDir, name)) as f:
                        content =
                    content = content.replace('${javaCompliance}', str(p.javaCompliance))
                    if hasattr(p, 'annotationProcessors') and len(p.annotationProcessors) > 0:
                        content = content.replace('org.eclipse.jdt.core.compiler.processAnnotations=disabled', 'org.eclipse.jdt.core.compiler.processAnnotations=enabled')
                    update_file(join(settingsDir, name), content)
        if hasattr(p, 'annotationProcessors') and len(p.annotationProcessors) > 0:
            out = XMLDoc()
  'factorypath')
            out.element('factorypathentry', {'kind' : 'PLUGIN', 'id' : '', 'enabled' : 'true', 'runInBatchMode' : 'false'})
            for ap in p.annotationProcessors:
                apProject = project(ap)
                for dep in apProject.all_deps([], True):
                    if dep.isLibrary():
                        if not hasattr(dep, 'eclipse.container') and not hasattr(dep, 'eclipse.project'):
                            if dep.mustExist:
                                path = dep.get_path(resolve=True)
                                if not isabs(path):
                                    # Relative paths for "lib" class path entries have various semantics depending on the Eclipse
                                    # version being used (e.g. see so it's
                                    # safest to simply use absolute paths.
                                    path = join(suite.dir, path)
                                out.element('factorypathentry', {'kind' : 'EXTJAR', 'id' : path, 'enabled' : 'true', 'runInBatchMode' : 'false'})
                    else:
                        out.element('factorypathentry', {'kind' : 'WKSPJAR', 'id' : '/' + + '/' + + '.jar', 'enabled' : 'true', 'runInBatchMode' : 'false'})
            out.close('factorypath')
            update_file(join(p.dir, '.factorypath'), out.xml(indent='\t', newl='\n'))

    make_eclipse_attach('localhost', '8000', deps=projects())

def _isAnnotationProcessorDependency(p):
    """
    Determines if a given project is part of an annotation processor.
    """
    processors = set()
    for otherProject in projects():
        if hasattr(otherProject, 'annotationProcessors') and len(otherProject.annotationProcessors) > 0:
            for processorName in otherProject.annotationProcessors:
                processors.add(project(processorName, fatalIfMissing=True))
    if p in processors:
        return True
    for otherProject in processors:
        deps = otherProject.all_deps([], True)
        if p in deps:
            return True
    return False

def _genEclipseBuilder(dotProjectDoc, p, name, mxCommand, refresh=True, async=False, logToConsole=False):
    launchOut = XMLDoc()
    consoleOn = 'true' if logToConsole else 'false''launchConfiguration', {'type' : 'org.eclipse.ui.externaltools.ProgramBuilderLaunchConfigurationType'})'mapAttribute', {'key' : 'org.eclipse.debug.core.environmentVariables'})
    launchOut.element('mapEntry', {'key' : 'JAVA_HOME', 'value' : java().jdk})
    launchOut.close('mapAttribute')
    if refresh:
        launchOut.element('stringAttribute',  {'key' : 'org.eclipse.debug.core.ATTR_REFRESH_SCOPE',            'value': '${project}'})
    launchOut.element('booleanAttribute', {'key' : 'org.eclipse.debug.ui.ATTR_CONSOLE_OUTPUT_ON',          'value': consoleOn})
    launchOut.element('booleanAttribute', {'key' : 'org.eclipse.debug.core.capture_output',                'value': consoleOn})
    if async:
        launchOut.element('booleanAttribute', {'key' : 'org.eclipse.debug.ui.ATTR_LAUNCH_IN_BACKGROUND',       'value': 'true'})
    baseDir = dirname(dirname(os.path.abspath(__file__)))
    cmd = ''
    if get_os() == 'windows':
        cmd = 'mx.cmd'
    launchOut.element('stringAttribute',  {'key' : 'org.eclipse.ui.externaltools.ATTR_LOCATION',           'value': join(baseDir, cmd) })
    launchOut.element('stringAttribute',  {'key' : 'org.eclipse.ui.externaltools.ATTR_RUN_BUILD_KINDS',    'value': 'auto,full,incremental'})
    launchOut.element('stringAttribute',  {'key' : 'org.eclipse.ui.externaltools.ATTR_TOOL_ARGUMENTS',     'value': mxCommand})
    launchOut.element('booleanAttribute', {'key' : 'org.eclipse.ui.externaltools.ATTR_TRIGGERS_CONFIGURED','value': 'true'})
    launchOut.element('stringAttribute',  {'key' : 'org.eclipse.ui.externaltools.ATTR_WORKING_DIRECTORY',  'value': baseDir})
    externalToolDir = join(p.dir, '.externalToolBuilders')
    if not exists(externalToolDir):
        os.makedirs(externalToolDir)
    update_file(join(externalToolDir, name), launchOut.xml(indent='\t', newl='\n'))
'buildCommand')
    dotProjectDoc.element('name', data='org.eclipse.ui.externaltools.ExternalToolBuilder')
    dotProjectDoc.element('triggers', data='auto,full,incremental,')
'arguments')'dictionary')
    dotProjectDoc.element('key', data = 'LaunchConfigHandle')
    dotProjectDoc.element('value', data = '<project>/.externalToolBuilders/' + name)
    dotProjectDoc.close('dictionary')'dictionary')
    dotProjectDoc.element('key', data = 'incclean')
    dotProjectDoc.element('value', data = 'true')
    dotProjectDoc.close('dictionary')
    dotProjectDoc.close('arguments')
    dotProjectDoc.close('buildCommand')

def netbeansinit(args, suite=None):
    """(re)generate NetBeans project configurations"""

    if suite is None:
        suite = _mainSuite

    def println(out, obj):
        out.write(str(obj) + '\n')

    updated = False
    for p in projects():
        if p.native:
            continue

        if exists(join(p.dir, 'plugin.xml')): # eclipse plugin project
            continue

        if not exists(join(p.dir, 'nbproject')):
            os.makedirs(join(p.dir, 'nbproject'))

        out = XMLDoc()'project', {'name' :, 'default' : 'default', 'basedir' : '.'})
        out.element('description', data='Builds, tests, and runs the project ' + + '.')
        out.element('import', {'file' : 'nbproject/build-impl.xml'})'target', {'name' : '-post-compile'})'exec', {'executable' : sys.executable})
        out.element('env', {'key' : 'JAVA_HOME', 'value' : java().jdk})
        out.element('arg', {'value' : os.path.abspath(__file__)})
        out.element('arg', {'value' : 'archive'})
        out.element('arg', {'value' : '@GRAAL'})
        out.close('exec')
        out.close('target')
        out.close('project')
        updated = update_file(join(p.dir, 'build.xml'), out.xml(indent='\t', newl='\n')) or updated

        out = XMLDoc()'project', {'xmlns' : ''})
        out.element('type', data='')'configuration')'data', {'xmlns' : ''})
        out.element('explicit-platform', {'explicit-source-supported' : 'true'})'source-roots')
        out.element('root', {'id' : 'src.dir'})
        if hasattr(p, 'annotationProcessors') and len(p.annotationProcessors) > 0:
            out.element('root', {'id' : 'src.ap-source-output.dir'})
        out.close('source-roots')
        out.close('data')

        firstDep = True
        for dep in p.all_deps([], True):
            if dep == p:
                continue

            if not dep.isLibrary():
                n ='.', '_')
                if firstDep:
          'references', {'xmlns' : ''})
                    firstDep = False

      'reference')
                out.element('foreign-project', data=n)
                out.element('artifact-type', data='jar')
                out.element('script', data='build.xml')
                out.element('target', data='jar')
                out.element('clean-target', data='clean')
                out.element('id', data='jar')
                out.close('reference')

        if not firstDep:
            out.close('references')

        out.close('configuration')
        out.close('project')
        updated = update_file(join(p.dir, 'nbproject', 'project.xml'), out.xml(indent='    ', newl='\n')) or updated

        out = StringIO.StringIO()
        jdkPlatform = 'JDK_' + str(java().version)

        annotationProcessorEnabled = "false"
        annotationProcessorReferences = ""
        annotationProcessorSrcFolder = ""
        if hasattr(p, 'annotationProcessors') and len(p.annotationProcessors) > 0:
            annotationProcessorEnabled = "true"
            annotationProcessorSrcFolder = "src.ap-source-output.dir=${build.generated.sources.dir}/ap-source-output"

        content = """
annotation.processing.enabled=""" + annotationProcessorEnabled + """""" + annotationProcessorEnabled + """
application.title=""" + + """
# This directory is removed when the project is cleaned:
# Only compile against the classpath explicitly listed here:
# Uncomment to specify the preferred debugger connection transport:
# This directory is removed when the project is cleaned:
dist.jar=${dist.dir}/""" + + """.jar
# Space-separated list of extra javac options
platforms.""" + jdkPlatform + """.home=""" + java().jdk + """""" + jdkPlatform + """
# Space-separated list of JVM arguments used when running the project
# (you may also define separate properties like instead of -Dname=value
# or to set system properties for unit tests):
""" + annotationProcessorSrcFolder + """
source.encoding=UTF-8""".replace(':', os.pathsep).replace('/', os.sep)
        print >> out, content

        mainSrc = True
        for src in p.srcDirs:
            srcDir = join(p.dir, src)
            if not exists(srcDir):
                os.mkdir(srcDir)
            ref = 'file.reference.' + + '-' + src
            print >> out, ref + '=' + src
            if mainSrc:
                print >> out, 'src.dir=${' + ref + '}'
                mainSrc = False
            else:
                print >> out, 'src.' + src + '.dir=${' + ref + '}'

        javacClasspath = []
        deps = p.all_deps([], True)
        annotationProcessorOnlyDeps = []
        if hasattr(p, 'annotationProcessors') and len(p.annotationProcessors) > 0:
            for ap in p.annotationProcessors:
                apProject = project(ap)
                if not apProject in deps:
                    deps.append(apProject)
                    annotationProcessorOnlyDeps.append(apProject)
        annotationProcessorReferences = []
        for dep in deps:
            if dep == p:
                continue

            if dep.isLibrary():
                if not dep.mustExist:
                    continue
                path = dep.get_path(resolve=True)
                if os.sep == '\\':
                    path = path.replace('\\', '\\\\')
                ref = 'file.reference.' + + '-bin'
                print >> out, ref + '=' + path

            else:
                n ='.', '_')
                relDepPath = os.path.relpath(dep.dir, p.dir).replace(os.sep, '/')
                ref = 'reference.' + n + '.jar'
                print >> out, 'project.' + n + '=' + relDepPath
                print >> out, ref + '=${project.' + n + '}/dist/' + + '.jar'

            if not dep in annotationProcessorOnlyDeps:
                javacClasspath.append('${' + ref + '}')
            else:
                annotationProcessorReferences.append('${' + ref + '}')

        print >> out, 'javac.classpath=\\\n    ' + (os.pathsep + '\\\n    ').join(javacClasspath)
        print >> out, 'javac.test.processorpath=${javac.test.classpath}\\\n    ' + (os.pathsep + '\\\n    ').join(annotationProcessorReferences)
        print >> out, 'javac.processorpath=${javac.classpath}\\\n    ' + (os.pathsep + '\\\n    ').join(annotationProcessorReferences)
        updated = update_file(join(p.dir, 'nbproject', ''), out.getvalue()) or updated

    if updated:
        log('If using NetBeans:')
        log('  1. Ensure that a platform named "JDK_' + str(java().version) + '" is defined (Tools -> Java Platforms)')
        log('  2. Open/create a Project Group for the directory containing the projects (File -> Project Group -> New Group... -> Folder of Projects)')

def ideclean(args, suite=None):
    """remove all Eclipse and NetBeans project configurations"""
    def rm(path):
        if exists(path):
            os.remove(path)

    for p in projects():
        if p.native:
            continue

        shutil.rmtree(join(p.dir, '.settings'), ignore_errors=True)
        shutil.rmtree(join(p.dir, '.externalToolBuilders'), ignore_errors=True)
        shutil.rmtree(join(p.dir, 'nbproject'), ignore_errors=True)
        rm(join(p.dir, '.classpath'))
        rm(join(p.dir, '.project'))
        rm(join(p.dir, 'build.xml'))
        rm(join(p.dir, 'eclipse-build.xml'))
        try:
            rm(join(p.dir, + '.jar'))
        except OSError:
            log("Error removing {0}".format( + '.jar'))

def ideinit(args, suite=None):
    """(re)generate Eclipse and NetBeans project configurations"""
    eclipseinit(args, suite)
    netbeansinit(args, suite)

def fsckprojects(args):
    """find directories corresponding to deleted Java projects and delete them"""
    for suite in suites():
        projectDirs = [p.dir for p in suite.projects]
        for root, dirnames, files in os.walk(suite.dir):
            currentDir = join(suite.dir, root)
            if currentDir in projectDirs:
                # don't traverse subdirs of an existing project
                dirnames[:] = []
            else:
                projectConfigFiles = frozenset(['.classpath', 'nbproject'])
                indicators = projectConfigFiles.intersection(files)
                if len(indicators) != 0:
                    if not sys.stdout.isatty() or raw_input(currentDir + ' looks like a removed project -- delete it? [yn]: ') == 'y':
                        shutil.rmtree(currentDir)
                        log('Deleted ' + currentDir)

def javadoc(args, parser=None, docDir='javadoc', includeDeps=True):
    """generate javadoc for some/all Java projects"""

    parser = ArgumentParser(prog='mx javadoc') if parser is None else parser
    parser.add_argument('-d', '--base', action='store', help='base directory for output')
    parser.add_argument('--unified', action='store_true', help='put javadoc in a single directory instead of one per project')
    parser.add_argument('--force', action='store_true', help='(re)generate javadoc even if package-list file exists')
    parser.add_argument('--projects', action='store', help='comma separated projects to process (omit to process all projects)')
    parser.add_argument('--Wapi', action='store_true', dest='warnAPI', help='show warnings about using internal APIs')
    parser.add_argument('--argfile', action='store', help='name of file containing extra javadoc options')
    parser.add_argument('--arg', action='append', dest='extra_args', help='extra Javadoc arguments (e.g. --arg @-use)', metavar='@<arg>', default=[])
    parser.add_argument('-m', '--memory', action='store', help='-Xmx value to pass to underlying JVM')
    parser.add_argument('--packages', action='store', help='comma separated packages to process (omit to process all packages)')
    parser.add_argument('--exclude-packages', action='store', help='comma separated packages to exclude')

    args = parser.parse_args(args)

    # build list of projects to be processed
    candidates = sorted_deps()
    if args.projects is not None:
        candidates = [project(name) for name in args.projects.split(',')]

    # optionally restrict packages within a project
    packages = []
    if args.packages is not None:
        packages = [name for name in args.packages.split(',')]

    exclude_packages = []
    if args.exclude_packages is not None:
        exclude_packages = [name for name in args.exclude_packages.split(',')]

    def outDir(p):
        if args.base is None:
            return join(p.dir, docDir)
        return join(args.base,, docDir)

    def check_package_list(p):
        return not exists(join(outDir(p), 'package-list'))

    def assess_candidate(p, projects):
        if p in projects:
            return False
        if args.force or args.unified or check_package_list(p):
            projects.append(p)
            return True
        return False

    projects = []
    for p in candidates:
        if not p.native:
            if includeDeps:
                deps = p.all_deps([], includeLibs=False, includeSelf=False)
                for d in deps:
                    assess_candidate(d, projects)
            if not assess_candidate(p, projects):
                log('[package-list file exists - skipping {0}]'.format(

    def find_packages(sourceDirs, pkgs=None):
        if pkgs is None:
            pkgs = set()
        for sourceDir in sourceDirs:
            for root, _, files in os.walk(sourceDir):
                if len([name for name in files if name.endswith('.java')]) != 0:
                    pkg = root[len(sourceDir) + 1:].replace(os.sep,'.')
                    if len(packages) == 0 or pkg in packages:
                        if len(exclude_packages) == 0 or pkg not in exclude_packages:
                            pkgs.add(pkg)
        return pkgs

    extraArgs = [a.lstrip('@') for a in args.extra_args]
    if args.argfile is not None:
        extraArgs += ['@' + args.argfile]
    memory = '2g'
    if args.memory is not None:
        memory = args.memory
    memory = '-J-Xmx' + memory

    if not args.unified:
        for p in projects:
            # The project must be built to ensure javadoc can find class files for all referenced classes
            build(['--no-native', '--projects',])
            pkgs = find_packages(p.source_dirs(), set())
            deps = p.all_deps([], includeLibs=False, includeSelf=False)
            links = ['-link', '' + str(p.javaCompliance.value) + '/docs/api/']
            out = outDir(p)
            for d in deps:
                depOut = outDir(d)
                links.append('-link')
                links.append(os.path.relpath(depOut, out))
            cp = classpath(, includeSelf=True)
            sp = os.pathsep.join(p.source_dirs())
            overviewFile = join(p.dir, 'overview.html')
            delOverviewFile = False
            if not exists(overviewFile):
                with open(overviewFile, 'w') as fp:
                    print >> fp, '<html><body>Documentation for the <code>' + + '</code> project.</body></html>'
                delOverviewFile = True
            nowarnAPI = []
            if not args.warnAPI:
                nowarnAPI.append('-XDignore.symbol.file')
            log('Generating {2} for {0} in {1}'.format(, out, docDir))
            run([java().javadoc, memory,
                 '-windowtitle', + ' javadoc',
                 '-classpath', cp,
                 '-d', out,
                 '-overview', overviewFile,
                 '-sourcepath', sp] +
                 links +
                 extraArgs +
                 nowarnAPI +
                 list(pkgs))
            log('Generated {2} for {0} in {1}'.format(, out, docDir))
            if delOverviewFile:
                os.remove(overviewFile)
    else:
        # The projects must be built to ensure javadoc can find class files for all referenced classes
        build(['--no-native', '--projects', ','.join([ for p in projects])])
        pkgs = set()
        sp = []
        names = []
        for p in projects:
            find_packages(p.source_dirs(), pkgs)
            sp += p.source_dirs()
            names.append(

        links = ['-link', '' + str(_java.javaCompliance.value) + '/docs/api/']
        out = join(_mainSuite.dir, docDir)
        if args.base is not None:
            out = join(args.base, docDir)
        cp = classpath()
        sp = os.pathsep.join(sp)
        nowarnAPI = []
        if not args.warnAPI:
            nowarnAPI.append('-XDignore.symbol.file')
        log('Generating {2} for {0} in {1}'.format(', '.join(names), out, docDir))
        run([java().javadoc, memory,
             '-classpath', cp,
             '-d', out,
             '-sourcepath', sp] +
             links +
             extraArgs +
             nowarnAPI +
             list(pkgs))
        log('Generated {2} for {0} in {1}'.format(', '.join(names), out, docDir))

class Chunk:
    def __init__(self, content, ldelim, rdelim=None):
        lindex = content.find(ldelim)
        if rdelim is not None:
            rindex = content.find(rdelim)
        else:
            rindex = lindex + len(ldelim)
        self.ldelim = ldelim
        self.rdelim = rdelim
        if lindex != -1 and rindex != -1 and rindex > lindex:
            self.text = content[lindex + len(ldelim):rindex]
        else:
            self.text = None

    def replace(self, content, repl):
        lindex = content.find(self.ldelim)
        if self.rdelim is not None:
            rindex = content.find(self.rdelim)
            rdelimLen = len(self.rdelim)
        else:
            rindex = lindex + len(self.ldelim)
            rdelimLen = 0
        old = content[lindex:rindex + rdelimLen]
        return content.replace(old, repl)
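
# A hypothetical usage sketch (not part of mx, shown for illustration only)
# of how Chunk extracts and replaces a delimited region of text:
#
#   c = Chunk('<b>text</b>', '<b>', '</b>')
#   c.text                           # 'text'
#   c.replace('<b>text</b>', 'X')    # 'X'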
# Post-process an overview-summary.html file to move the
# complete overview to the top of the page
def _fix_overview_summary(path, topLink):
    """
    Processes an "overview-summary.html" generated by javadoc to put the complete
    summary text above the Packages table.
    """

    # This uses scraping and so will break if the relevant content produced by javadoc changes in any way!
    with open(path) as fp:
        content =

    chunk1 = Chunk(content, """<div class="header">
<div class="subTitle">
<div class="block">""", """</div>
<p>See: <a href="#overview_description">Description</a></p>
</div>""")

    chunk2 = Chunk(content, """<div class="footer"><a name="overview_description">
<!--   -->
<div class="subTitle">
<div class="block">""", """</div>
<!-- ======= START OF BOTTOM NAVBAR ====== -->""")

    assert chunk1.text, 'Could not find header section in ' + path
    assert chunk2.text, 'Could not find footer section in ' + path

    content = chunk1.replace(content, '<div class="header"><div class="subTitle"><div class="block">' + topLink + chunk2.text +'</div></div></div>')
    content = chunk2.replace(content, '')

    with open(path, 'w') as fp:
        fp.write(content)

# Post-process a package-summary.html file to move the
# complete package description to the top of the page
def _fix_package_summary(path):
    """
    Processes a "package-summary.html" generated by javadoc to put the complete
    package description above the summary tables.
    """

    # This uses scraping and so will break if the relevant content produced by javadoc changes in any way!
    with open(path) as fp:
        content =

    chunk1 = Chunk(content, """<div class="header">
<h1 title="Package" class="title">Package""", """<p>See:&nbsp;<a href="#package_description">Description</a></p>
</div>""")

    chunk2 = Chunk(content, """<a name="package_description">
<!--   -->
</a>""", """</div>
<!-- ======= START OF BOTTOM NAVBAR ====== -->""")

    if chunk1.text:
        if chunk2.text:
            repl = re.sub(r'<h2 title=(.*) Description</h2>', r'<h1 title=\1</h1>', chunk2.text, 1)
            content = chunk1.replace(content, '<div class="header">' + repl +'</div></div>')
            content = chunk2.replace(content, '')
            with open(path, 'w') as fp:
                fp.write(content)
        else:
            log('warning: Could not find package description detail section in ' + path)
    else:
        # no package description given
        pass

def site(args):
    """creates a website containing javadoc and the project dependency graph"""

    parser = ArgumentParser(prog='site')
    parser.add_argument('-d', '--base', action='store', help='directory for generated site', required=True, metavar='<dir>')
    parser.add_argument('--name', action='store', help='name of overall documentation', required=True, metavar='<name>')
    parser.add_argument('--overview', action='store', help='path to the overview content for overall documentation', required=True, metavar='<path>')
    parser.add_argument('--projects', action='store', help='comma separated projects to process (omit to process all projects)')
    parser.add_argument('--jd', action='append', help='extra Javadoc arguments (e.g. --jd @-use)', metavar='@<arg>', default=[])
    parser.add_argument('--exclude-packages', action='store', help='comma separated packages to exclude', metavar='<pkgs>')
    parser.add_argument('--dot-output-base', action='store', help='base file name (relative to <dir>/all) for project dependency graph .svg and .jpg files generated by dot (omit to disable dot generation)', metavar='<path>')
    parser.add_argument('--title', action='store', help='value used for -windowtitle and -doctitle javadoc args for overall documentation (default: "<name>")', metavar='<title>')
    args = parser.parse_args(args)

    args.base = os.path.abspath(args.base)
    tmpbase = tempfile.mkdtemp(prefix=basename(args.base) + '.', dir=dirname(args.base))
    unified = join(tmpbase, 'all')

    exclude_packages_arg = []
    if args.exclude_packages is not None:
        exclude_packages_arg = ['--exclude-packages', args.exclude_packages]

    projects = sorted_deps()
    projects_arg = []
    if args.projects is not None:
        projects_arg = ['--projects', args.projects]
        projects = [project(name) for name in args.projects.split(',')]

    extra_javadoc_args = []
    for a in args.jd:
        extra_javadoc_args.append('@' + a)

    try:
        # Create javadoc for each project
        javadoc(['--base', tmpbase] + exclude_packages_arg + projects_arg + extra_javadoc_args)

        # Create unified javadoc for all projects
        with open(args.overview) as fp:
            content =
            idx = content.rfind('</body>')
            if idx != -1:
                args.overview = join(tmpbase, 'overview_with_projects.html')
                with open(args.overview, 'w') as fp2:
                    print >> fp2, content[0:idx]
                    print >> fp2, """<div class="contentContainer">
<table class="overviewSummary" border="0" cellpadding="3" cellspacing="0" summary="Projects table">
<caption><span>Projects</span><span class="tabEnd">&nbsp;</span></caption>
<tr><th class="colFirst" scope="col">Project</th><th class="colLast" scope="col">&nbsp;</th></tr>
<tbody>"""
                    color = 'row'
                    for p in projects:
                        print >> fp2, '<tr class="{1}Color"><td class="colFirst"><a href="../{0}/javadoc/index.html", target = "_top">{0}</a></td><td class="colLast">&nbsp;</td></tr>'.format(, color)
                        color = 'row' if color == 'alt' else 'alt'
                    print >> fp2, '</tbody></table></div>'
                    print >> fp2, content[idx:]
        title = args.title if args.title is not None else
        javadoc(['--base', tmpbase,
                 '--arg', '@-windowtitle', '--arg', '@' + title,
                 '--arg', '@-doctitle', '--arg', '@' + title,
                 '--arg', '@-overview', '--arg', '@' + args.overview] + exclude_packages_arg + projects_arg + extra_javadoc_args)
        os.rename(join(tmpbase, 'javadoc'), unified)

        # Generate dependency graph with Graphviz
        if args.dot_output_base is not None:
            dotErr = None
            try:
                if 'version' not in subprocess.check_output(['dot', '-V'], stderr=subprocess.STDOUT):
                    dotErr = 'dot -V does not print a string containing "version"'
            except subprocess.CalledProcessError as e:
                dotErr = 'error calling "dot -V": {}'.format(e)
            except OSError as e:
                dotErr = 'error calling "dot -V": {}'.format(e)
            if dotErr is not None:
                abort('cannot generate dependency graph: ' + dotErr)

            dot = join(tmpbase, 'all', str(args.dot_output_base) + '.dot')
            svg = join(tmpbase, 'all', str(args.dot_output_base) + '.svg')
            jpg = join(tmpbase, 'all', str(args.dot_output_base) + '.jpg')
            html = join(tmpbase, 'all', str(args.dot_output_base) + '.html')
            with open(dot, 'w') as fp:
                dim = len(projects)
                print >> fp, 'digraph projects {'
                print >> fp, 'rankdir=BT;'
                print >> fp, 'size = "' + str(dim) + ',' + str(dim) + '";'
                print >> fp, 'node [shape=rect, fontcolor="blue"];'
                #print >> fp, 'edge [color="green"];'
                for p in projects:
                    print >> fp, '"' + + '" [URL = "../' + + '/javadoc/index.html", target = "_top"]'
                    for dep in p.canonical_deps():
                        if dep in [ for proj in projects]:
                            print >> fp, '"' + + '" -> "' + dep + '"'
                depths = dict()
                for p in projects:
                    d = p.max_depth()
                    depths.setdefault(d, list()).append(
                print >> fp, '}'
            run(['dot', '-Tsvg', '-o' + svg, '-Tjpg', '-o' + jpg, dot])

            # Post-process generated SVG to remove title elements which most browsers
            # render as redundant (and annoying) tooltips.
            with open(svg, 'r') as fp:
                content =
            content = re.sub('<title>.*</title>', '', content)
            content = re.sub('xlink:title="[^"]*"', '', content)
            with open(svg, 'w') as fp:
                fp.write(content)

            # Create HTML that embeds the svg file in an <object> frame
            with open(html, 'w') as fp:
                print >> fp, '<html><body><object data="{}.svg" type="image/svg+xml"></object></body></html>'.format(args.dot_output_base)

        top = join(tmpbase, 'all', 'overview-summary.html')
        for root, _, files in os.walk(tmpbase):
            for f in files:
                if f == 'overview-summary.html':
                    path = join(root, f)
                    topLink = ''
                    if top != path:
                        link = os.path.relpath(join(tmpbase, 'all', 'index.html'), dirname(path))
                        topLink = '<p><a href="' + link + '", target="_top"><b>[return to the overall ' + + ' documentation]</b></a></p>'
                    _fix_overview_summary(path, topLink)
                elif f == 'package-summary.html':
                    path = join(root, f)
                    _fix_package_summary(path)

        if exists(args.base):
            shutil.rmtree(args.base)
        shutil.move(tmpbase, args.base)

        print 'Created website - root is ' + join(args.base, 'all', 'index.html')

    finally:
        if exists(tmpbase):
            shutil.rmtree(tmpbase)

def findclass(args):
    """find all classes matching a given substring"""

    for entry, filename in classpath_walk(includeBootClasspath=True):
        if filename.endswith('.class'):
            if isinstance(entry, zipfile.ZipFile):
                classname = filename.replace('/', '.')
            else:
                classname = filename.replace(os.sep, '.')
            classname = classname[:-len('.class')]
            for a in args:
                if a in classname:
                    log(classname)

def javap(args):
    """launch javap with a -classpath option denoting all available classes

    Run the JDK javap class file disassembler with the following prepended options:

        -private -verbose -classpath <path to project classes>"""

    javap = java().javap
    if not exists(javap):
        abort('The javap executable does not exist: ' + javap)
    else:
        run([javap, '-private', '-verbose', '-classpath', classpath()] + args)

def show_projects(args):
    """show all loaded projects"""
    for s in suites():
        projectsFile = join(s.dir, 'mx', 'projects')
        if exists(projectsFile):
            log(projectsFile)
            for p in s.projects:
                log('\t' +

def add_argument(*args, **kwargs):
    """
    Defines a single command-line argument.
    """
    assert _argParser is not None
    _argParser.add_argument(*args, **kwargs)

# Table of commands in alphabetical order.
# Keys are command names, values are lists: [<function>, <usage msg>, <format args to doc string of function>...]
# If any of the format args are instances of Callable, then they are called with an 'env' argument before being
# used in the call to str.format().
# Extensions should update this table directly
commands = {
    'about': [about, ''],
    'build': [build, '[options]'],
    'checkstyle': [checkstyle, ''],
    'canonicalizeprojects': [canonicalizeprojects, ''],
    'clean': [clean, ''],
    'eclipseinit': [eclipseinit, ''],
    'eclipseformat': [eclipseformat, ''],
    'findclass': [findclass, ''],
    'fsckprojects': [fsckprojects, ''],
    'help': [help_, '[command]'],
    'ideclean': [ideclean, ''],
    'ideinit': [ideinit, ''],
    'archive': [archive, '[options]'],
    'projectgraph': [projectgraph, ''],
    'javap': [javap, ''],
    'javadoc': [javadoc, '[options]'],
    'site': [site, '[options]'],
    'netbeansinit': [netbeansinit, ''],
    'projects': [show_projects, ''],
}

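# For example, an mx extension could register a new command like this
# (a hypothetical 'hello' command, shown for illustration only):
#
#   def hello(args):
#       """print a greeting"""
#       log('hello')
#
#   commands['hello'] = [hello, '']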
_argParser = ArgParser()

def _findPrimarySuite():
    # try current working directory first
    mxDir = join(os.getcwd(), 'mx')
    if exists(mxDir) and isdir(mxDir):
        return dirname(mxDir)

    # now search path of my executable
    me = sys.argv[0]
    parent = dirname(me)
    while parent:
        mxDir = join(parent, 'mx')
        if exists(mxDir) and isdir(mxDir):
            return parent
        parent = dirname(parent)
    return None

def main():
    primarySuiteDir = _findPrimarySuite()
    if primarySuiteDir:
        global _mainSuite
        _mainSuite = _loadSuite(primarySuiteDir, True)

    opts, commandAndArgs = _argParser._parse_cmd_line()

    global _opts, _java
    _opts = opts
    _java = JavaConfig(opts)

    for s in suites():
        s._post_init(opts)

    if len(commandAndArgs) == 0:
        _argParser.print_help()
        return

    command = commandAndArgs[0]
    command_args = commandAndArgs[1:]

    if command not in commands:
        hits = [c for c in commands.iterkeys() if c.startswith(command)]
        if len(hits) == 1:
            command = hits[0]
        elif len(hits) == 0:
            abort('mx: unknown command \'{0}\'\n{1}use "mx help" for more options'.format(command, _format_commands()))
        else:
            abort('mx: command \'{0}\' is ambiguous\n    {1}'.format(command, ' '.join(hits)))

    c, _ = commands[command][:2]
    def term_handler(signum, frame):
        abort(1)
    signal.signal(signal.SIGTERM, term_handler)
    try:
        if opts.timeout != 0:
            def alarm_handler(signum, frame):
                abort('Command timed out after ' + str(opts.timeout) + ' seconds: ' + ' '.join(commandAndArgs))
            signal.signal(signal.SIGALRM, alarm_handler)
            signal.alarm(opts.timeout)
        retcode = c(command_args)
        if retcode is not None and retcode != 0:
            abort(retcode)
    except KeyboardInterrupt:
        # no need to show the stack trace when the user presses CTRL-C
        abort(1)

if __name__ == '__main__':
    # rename this module as 'mx' so it is not imported twice by the modules it loads
    sys.modules['mx'] = sys.modules.pop('__main__')
    main()