Index: trunk/config.log
===================================================================
--- trunk/config.log (revision 101)
+++ trunk/config.log (revision 102)
@@ -1,906 +1,906 @@
This file contains any messages produced by compilers while
running configure, to aid debugging if configure makes a mistake.
It was created by npstat configure 2.2.0, which was
generated by GNU Autoconf 2.69. Invocation command line was
$ ./configure --with-pic
## --------- ##
## Platform. ##
## --------- ##
hostname = sam
uname -m = x86_64
uname -r = 3.9.4-200.fc18.x86_64
uname -s = Linux
uname -v = #1 SMP Fri May 24 20:10:49 UTC 2013
/usr/bin/uname -p = x86_64
/bin/uname -X = unknown
/bin/arch = x86_64
/usr/bin/arch -k = unknown
/usr/convex/getsysinfo = unknown
/usr/bin/hostinfo = unknown
/bin/machine = unknown
/usr/bin/oslevel = unknown
/bin/universe = unknown
PATH: /home/igv/local/bin
PATH: /home/igv/.local/bin
PATH: /home/igv/bin
PATH: /usr/local/bin
PATH: /usr/local/root/bin
PATH: /usr/local/bin
PATH: /usr/kerberos/bin
PATH: /bin
PATH: /usr/bin
PATH: /usr/X11R6/bin
PATH: /usr/local/dx/bin
PATH: /sbin
PATH: /usr/sbin
PATH: /usr/local/cmake/bin
PATH: /usr/local/kompozer
PATH: .
## ----------- ##
## Core tests. ##
## ----------- ##
configure:2394: checking for a BSD-compatible install
configure:2462: result: /bin/install -c
configure:2473: checking whether build environment is sane
configure:2528: result: yes
configure:2679: checking for a thread-safe mkdir -p
configure:2718: result: /bin/mkdir -p
configure:2725: checking for gawk
configure:2741: found /bin/gawk
configure:2752: result: gawk
configure:2763: checking whether make sets $(MAKE)
configure:2785: result: yes
configure:2928: checking for pkg-config
configure:2946: found /bin/pkg-config
configure:2958: result: /bin/pkg-config
configure:2983: checking pkg-config is at least version 0.9.0
configure:2986: result: yes
configure:2996: checking for DEPS
configure:3003: $PKG_CONFIG --exists --print-errors "fftw3 >= 3.1.2 geners >= 1.3.0"
configure:3006: $? = 0
configure:3020: $PKG_CONFIG --exists --print-errors "fftw3 >= 3.1.2 geners >= 1.3.0"
configure:3023: $? = 0
configure:3081: result: yes
configure:3144: checking for g++
configure:3160: found /bin/g++
configure:3171: result: g++
configure:3198: checking for C++ compiler version
configure:3207: g++ --version >&5
g++ (GCC) 4.7.2 20121109 (Red Hat 4.7.2-8)
Copyright (C) 2012 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
configure:3218: $? = 0
configure:3207: g++ -v >&5
Using built-in specs.
COLLECT_GCC=g++
COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-redhat-linux/4.7.2/lto-wrapper
Target: x86_64-redhat-linux
Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-bootstrap --enable-shared --enable-threads=posix --enable-checking=release --disable-build-with-cxx --disable-build-poststage1-with-cxx --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --with-linker-hash-style=gnu --enable-languages=c,c++,objc,obj-c++,java,fortran,ada,go,lto --enable-plugin --enable-initfini-array --enable-java-awt=gtk --disable-dssi --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-1.5.0.0/jre --enable-libgcj-multifile --enable-java-maintainer-mode --with-ecj-jar=/usr/share/java/eclipse-ecj.jar --disable-libjava-multilib --with-ppl --with-cloog --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux
Thread model: posix
gcc version 4.7.2 20121109 (Red Hat 4.7.2-8) (GCC)
configure:3218: $? = 0
configure:3207: g++ -V >&5
g++: error: unrecognized command line option '-V'
g++: fatal error: no input files
compilation terminated.
configure:3218: $? = 4
configure:3207: g++ -qversion >&5
g++: error: unrecognized command line option '-qversion'
g++: fatal error: no input files
compilation terminated.
configure:3218: $? = 4
configure:3238: checking whether the C++ compiler works
configure:3260: g++ -std=c++0x -g -O0 -Wall -W -Werror conftest.cpp >&5
configure:3264: $? = 0
configure:3312: result: yes
configure:3315: checking for C++ compiler default output file name
configure:3317: result: a.out
configure:3323: checking for suffix of executables
configure:3330: g++ -o conftest -std=c++0x -g -O0 -Wall -W -Werror conftest.cpp >&5
configure:3334: $? = 0
configure:3356: result:
configure:3378: checking whether we are cross compiling
configure:3386: g++ -o conftest -std=c++0x -g -O0 -Wall -W -Werror conftest.cpp >&5
configure:3390: $? = 0
configure:3397: ./conftest
configure:3401: $? = 0
configure:3416: result: no
configure:3421: checking for suffix of object files
configure:3443: g++ -c -std=c++0x -g -O0 -Wall -W -Werror conftest.cpp >&5
configure:3447: $? = 0
configure:3468: result: o
configure:3472: checking whether we are using the GNU C++ compiler
configure:3491: g++ -c -std=c++0x -g -O0 -Wall -W -Werror conftest.cpp >&5
configure:3491: $? = 0
configure:3500: result: yes
configure:3509: checking whether g++ accepts -g
configure:3529: g++ -c -g conftest.cpp >&5
configure:3529: $? = 0
configure:3570: result: yes
configure:3604: checking for style of include used by make
configure:3632: result: GNU
configure:3658: checking dependency style of g++
configure:3769: result: gcc3
configure:3838: checking for g77
configure:3854: found /home/igv/bin/g77
configure:3865: result: g77
configure:3891: checking for Fortran 77 compiler version
configure:3900: g77 --version >&5
GNU Fortran (GCC) 4.7.2 20121109 (Red Hat 4.7.2-8)
Copyright (C) 2012 Free Software Foundation, Inc.
GNU Fortran comes with NO WARRANTY, to the extent permitted by law.
You may redistribute copies of GNU Fortran
under the terms of the GNU General Public License.
For more information about these matters, see the file named COPYING
configure:3911: $? = 0
configure:3900: g77 -v >&5
Using built-in specs.
COLLECT_GCC=g77
COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-redhat-linux/4.7.2/lto-wrapper
Target: x86_64-redhat-linux
Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-bootstrap --enable-shared --enable-threads=posix --enable-checking=release --disable-build-with-cxx --disable-build-poststage1-with-cxx --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --with-linker-hash-style=gnu --enable-languages=c,c++,objc,obj-c++,java,fortran,ada,go,lto --enable-plugin --enable-initfini-array --enable-java-awt=gtk --disable-dssi --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-1.5.0.0/jre --enable-libgcj-multifile --enable-java-maintainer-mode --with-ecj-jar=/usr/share/java/eclipse-ecj.jar --disable-libjava-multilib --with-ppl --with-cloog --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux
Thread model: posix
gcc version 4.7.2 20121109 (Red Hat 4.7.2-8) (GCC)
configure:3911: $? = 0
configure:3900: g77 -V >&5
g77: error: unrecognized command line option '-V'
g77: fatal error: no input files
compilation terminated.
configure:3911: $? = 4
configure:3900: g77 -qversion >&5
g77: error: unrecognized command line option '-qversion'
g77: fatal error: no input files
compilation terminated.
configure:3911: $? = 4
configure:3920: checking whether we are using the GNU Fortran 77 compiler
configure:3933: g77 -c conftest.F >&5
configure:3933: $? = 0
configure:3942: result: yes
configure:3948: checking whether g77 accepts -g
configure:3959: g77 -c -g conftest.f >&5
configure:3959: $? = 0
configure:3967: result: yes
configure:4000: checking build system type
configure:4014: result: x86_64-unknown-linux-gnu
configure:4034: checking host system type
configure:4047: result: x86_64-unknown-linux-gnu
configure:4072: checking how to get verbose linking output from g77
configure:4082: g77 -c -g -O2 conftest.f >&5
configure:4082: $? = 0
configure:4100: g77 -o conftest -g -O2 -v conftest.f
Using built-in specs.
Target: x86_64-redhat-linux
Thread model: posix
gcc version 4.7.2 20121109 (Red Hat 4.7.2-8) (GCC)
- /usr/libexec/gcc/x86_64-redhat-linux/4.7.2/f951 conftest.f -ffixed-form -quiet -dumpbase conftest.f -mtune=generic -march=x86-64 -auxbase conftest -g -O2 -version -fintrinsic-modules-path /usr/lib/gcc/x86_64-redhat-linux/4.7.2/finclude -o /tmp/ccBPZb0n.s
+ /usr/libexec/gcc/x86_64-redhat-linux/4.7.2/f951 conftest.f -ffixed-form -quiet -dumpbase conftest.f -mtune=generic -march=x86-64 -auxbase conftest -g -O2 -version -fintrinsic-modules-path /usr/lib/gcc/x86_64-redhat-linux/4.7.2/finclude -o /tmp/ccwRsD40.s
GNU Fortran (GCC) version 4.7.2 20121109 (Red Hat 4.7.2-8) (x86_64-redhat-linux)
compiled by GNU C version 4.7.2 20121109 (Red Hat 4.7.2-8), GMP version 5.0.5, MPFR version 3.1.1, MPC version 0.9
GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072
GNU Fortran (GCC) version 4.7.2 20121109 (Red Hat 4.7.2-8) (x86_64-redhat-linux)
compiled by GNU C version 4.7.2 20121109 (Red Hat 4.7.2-8), GMP version 5.0.5, MPFR version 3.1.1, MPC version 0.9
GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072
- as -v --64 -o /tmp/ccuHszE8.o /tmp/ccBPZb0n.s
+ as -v --64 -o /tmp/ccn43tiQ.o /tmp/ccwRsD40.s
GNU assembler version 2.23.51.0.1 (x86_64-redhat-linux) using BFD version version 2.23.51.0.1-6.fc18 20120806
Reading specs from /usr/lib/gcc/x86_64-redhat-linux/4.7.2/libgfortran.spec
rename spec lib to liborig
- /usr/libexec/gcc/x86_64-redhat-linux/4.7.2/collect2 --build-id --no-add-needed --eh-frame-hdr --hash-style=gnu -m elf_x86_64 -dynamic-linker /lib64/ld-linux-x86-64.so.2 -o conftest /usr/lib/gcc/x86_64-redhat-linux/4.7.2/../../../../lib64/crt1.o /usr/lib/gcc/x86_64-redhat-linux/4.7.2/../../../../lib64/crti.o /usr/lib/gcc/x86_64-redhat-linux/4.7.2/crtbegin.o -L/usr/lib/gcc/x86_64-redhat-linux/4.7.2 -L/usr/lib/gcc/x86_64-redhat-linux/4.7.2/../../../../lib64 -L/lib/../lib64 -L/usr/lib/../lib64 -L/usr/lib/gcc/x86_64-redhat-linux/4.7.2/../../.. /tmp/ccuHszE8.o -lgfortran -lm -lgcc_s -lgcc -lquadmath -lm -lgcc_s -lgcc -lc -lgcc_s -lgcc /usr/lib/gcc/x86_64-redhat-linux/4.7.2/crtend.o /usr/lib/gcc/x86_64-redhat-linux/4.7.2/../../../../lib64/crtn.o
+ /usr/libexec/gcc/x86_64-redhat-linux/4.7.2/collect2 --build-id --no-add-needed --eh-frame-hdr --hash-style=gnu -m elf_x86_64 -dynamic-linker /lib64/ld-linux-x86-64.so.2 -o conftest /usr/lib/gcc/x86_64-redhat-linux/4.7.2/../../../../lib64/crt1.o /usr/lib/gcc/x86_64-redhat-linux/4.7.2/../../../../lib64/crti.o /usr/lib/gcc/x86_64-redhat-linux/4.7.2/crtbegin.o -L/usr/lib/gcc/x86_64-redhat-linux/4.7.2 -L/usr/lib/gcc/x86_64-redhat-linux/4.7.2/../../../../lib64 -L/lib/../lib64 -L/usr/lib/../lib64 -L/usr/lib/gcc/x86_64-redhat-linux/4.7.2/../../.. /tmp/ccn43tiQ.o -lgfortran -lm -lgcc_s -lgcc -lquadmath -lm -lgcc_s -lgcc -lc -lgcc_s -lgcc /usr/lib/gcc/x86_64-redhat-linux/4.7.2/crtend.o /usr/lib/gcc/x86_64-redhat-linux/4.7.2/../../../../lib64/crtn.o
configure:4183: result: -v
configure:4185: checking for Fortran 77 libraries of g77
configure:4208: g77 -o conftest -g -O2 -v conftest.f
Using built-in specs.
Target: x86_64-redhat-linux
Thread model: posix
gcc version 4.7.2 20121109 (Red Hat 4.7.2-8) (GCC)
- /usr/libexec/gcc/x86_64-redhat-linux/4.7.2/f951 conftest.f -ffixed-form -quiet -dumpbase conftest.f -mtune=generic -march=x86-64 -auxbase conftest -g -O2 -version -fintrinsic-modules-path /usr/lib/gcc/x86_64-redhat-linux/4.7.2/finclude -o /tmp/ccoQnJUs.s
+ /usr/libexec/gcc/x86_64-redhat-linux/4.7.2/f951 conftest.f -ffixed-form -quiet -dumpbase conftest.f -mtune=generic -march=x86-64 -auxbase conftest -g -O2 -version -fintrinsic-modules-path /usr/lib/gcc/x86_64-redhat-linux/4.7.2/finclude -o /tmp/cctCHUt6.s
GNU Fortran (GCC) version 4.7.2 20121109 (Red Hat 4.7.2-8) (x86_64-redhat-linux)
compiled by GNU C version 4.7.2 20121109 (Red Hat 4.7.2-8), GMP version 5.0.5, MPFR version 3.1.1, MPC version 0.9
GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072
GNU Fortran (GCC) version 4.7.2 20121109 (Red Hat 4.7.2-8) (x86_64-redhat-linux)
compiled by GNU C version 4.7.2 20121109 (Red Hat 4.7.2-8), GMP version 5.0.5, MPFR version 3.1.1, MPC version 0.9
GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072
- as -v --64 -o /tmp/ccpMSGJd.o /tmp/ccoQnJUs.s
+ as -v --64 -o /tmp/ccRGNUUV.o /tmp/cctCHUt6.s
GNU assembler version 2.23.51.0.1 (x86_64-redhat-linux) using BFD version version 2.23.51.0.1-6.fc18 20120806
Reading specs from /usr/lib/gcc/x86_64-redhat-linux/4.7.2/libgfortran.spec
rename spec lib to liborig
- /usr/libexec/gcc/x86_64-redhat-linux/4.7.2/collect2 --build-id --no-add-needed --eh-frame-hdr --hash-style=gnu -m elf_x86_64 -dynamic-linker /lib64/ld-linux-x86-64.so.2 -o conftest /usr/lib/gcc/x86_64-redhat-linux/4.7.2/../../../../lib64/crt1.o /usr/lib/gcc/x86_64-redhat-linux/4.7.2/../../../../lib64/crti.o /usr/lib/gcc/x86_64-redhat-linux/4.7.2/crtbegin.o -L/usr/lib/gcc/x86_64-redhat-linux/4.7.2 -L/usr/lib/gcc/x86_64-redhat-linux/4.7.2/../../../../lib64 -L/lib/../lib64 -L/usr/lib/../lib64 -L/usr/lib/gcc/x86_64-redhat-linux/4.7.2/../../.. /tmp/ccpMSGJd.o -lgfortran -lm -lgcc_s -lgcc -lquadmath -lm -lgcc_s -lgcc -lc -lgcc_s -lgcc /usr/lib/gcc/x86_64-redhat-linux/4.7.2/crtend.o /usr/lib/gcc/x86_64-redhat-linux/4.7.2/../../../../lib64/crtn.o
+ /usr/libexec/gcc/x86_64-redhat-linux/4.7.2/collect2 --build-id --no-add-needed --eh-frame-hdr --hash-style=gnu -m elf_x86_64 -dynamic-linker /lib64/ld-linux-x86-64.so.2 -o conftest /usr/lib/gcc/x86_64-redhat-linux/4.7.2/../../../../lib64/crt1.o /usr/lib/gcc/x86_64-redhat-linux/4.7.2/../../../../lib64/crti.o /usr/lib/gcc/x86_64-redhat-linux/4.7.2/crtbegin.o -L/usr/lib/gcc/x86_64-redhat-linux/4.7.2 -L/usr/lib/gcc/x86_64-redhat-linux/4.7.2/../../../../lib64 -L/lib/../lib64 -L/usr/lib/../lib64 -L/usr/lib/gcc/x86_64-redhat-linux/4.7.2/../../.. /tmp/ccRGNUUV.o -lgfortran -lm -lgcc_s -lgcc -lquadmath -lm -lgcc_s -lgcc -lc -lgcc_s -lgcc /usr/lib/gcc/x86_64-redhat-linux/4.7.2/crtend.o /usr/lib/gcc/x86_64-redhat-linux/4.7.2/../../../../lib64/crtn.o
configure:4404: result: -L/usr/lib/gcc/x86_64-redhat-linux/4.7.2 -L/usr/lib/gcc/x86_64-redhat-linux/4.7.2/../../../../lib64 -L/lib/../lib64 -L/usr/lib/../lib64 -L/usr/lib/gcc/x86_64-redhat-linux/4.7.2/../../.. -lgfortran -lm -lquadmath
configure:4466: checking how to print strings
configure:4493: result: printf
configure:4562: checking for gcc
configure:4578: found /bin/gcc
configure:4589: result: gcc
configure:4818: checking for C compiler version
configure:4827: gcc --version >&5
gcc (GCC) 4.7.2 20121109 (Red Hat 4.7.2-8)
Copyright (C) 2012 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
configure:4838: $? = 0
configure:4827: gcc -v >&5
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-redhat-linux/4.7.2/lto-wrapper
Target: x86_64-redhat-linux
Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-bootstrap --enable-shared --enable-threads=posix --enable-checking=release --disable-build-with-cxx --disable-build-poststage1-with-cxx --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --with-linker-hash-style=gnu --enable-languages=c,c++,objc,obj-c++,java,fortran,ada,go,lto --enable-plugin --enable-initfini-array --enable-java-awt=gtk --disable-dssi --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-1.5.0.0/jre --enable-libgcj-multifile --enable-java-maintainer-mode --with-ecj-jar=/usr/share/java/eclipse-ecj.jar --disable-libjava-multilib --with-ppl --with-cloog --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux
Thread model: posix
gcc version 4.7.2 20121109 (Red Hat 4.7.2-8) (GCC)
configure:4838: $? = 0
configure:4827: gcc -V >&5
gcc: error: unrecognized command line option '-V'
gcc: fatal error: no input files
compilation terminated.
configure:4838: $? = 4
configure:4827: gcc -qversion >&5
gcc: error: unrecognized command line option '-qversion'
gcc: fatal error: no input files
compilation terminated.
configure:4838: $? = 4
configure:4842: checking whether we are using the GNU C compiler
configure:4861: gcc -c conftest.c >&5
configure:4861: $? = 0
configure:4870: result: yes
configure:4879: checking whether gcc accepts -g
configure:4899: gcc -c -g conftest.c >&5
configure:4899: $? = 0
configure:4940: result: yes
configure:4957: checking for gcc option to accept ISO C89
configure:5020: gcc -c -g -O2 conftest.c >&5
configure:5020: $? = 0
configure:5033: result: none needed
configure:5055: checking dependency style of gcc
configure:5166: result: gcc3
configure:5181: checking for a sed that does not truncate output
configure:5245: result: /bin/sed
configure:5263: checking for grep that handles long lines and -e
configure:5321: result: /bin/grep
configure:5326: checking for egrep
configure:5388: result: /bin/grep -E
configure:5393: checking for fgrep
configure:5455: result: /bin/grep -F
configure:5490: checking for ld used by gcc
configure:5557: result: /bin/ld
configure:5564: checking if the linker (/bin/ld) is GNU ld
configure:5579: result: yes
configure:5591: checking for BSD- or MS-compatible name lister (nm)
configure:5640: result: /bin/nm -B
configure:5770: checking the name lister (/bin/nm -B) interface
configure:5777: gcc -c -g -O2 conftest.c >&5
configure:5780: /bin/nm -B "conftest.o"
configure:5783: output
0000000000000000 B some_variable
configure:5790: result: BSD nm
configure:5793: checking whether ln -s works
configure:5797: result: yes
configure:5805: checking the maximum length of command line arguments
configure:5935: result: 1572864
configure:5952: checking whether the shell understands some XSI constructs
configure:5962: result: yes
configure:5966: checking whether the shell understands "+="
configure:5972: result: yes
configure:6007: checking how to convert x86_64-unknown-linux-gnu file names to x86_64-unknown-linux-gnu format
configure:6047: result: func_convert_file_noop
configure:6054: checking how to convert x86_64-unknown-linux-gnu file names to toolchain format
configure:6074: result: func_convert_file_noop
configure:6081: checking for /bin/ld option to reload object files
configure:6088: result: -r
configure:6162: checking for objdump
configure:6178: found /bin/objdump
configure:6189: result: objdump
configure:6221: checking how to recognize dependent libraries
configure:6423: result: pass_all
configure:6508: checking for dlltool
configure:6538: result: no
configure:6568: checking how to associate runtime and link libraries
configure:6595: result: printf %s\n
configure:6656: checking for ar
configure:6672: found /bin/ar
configure:6683: result: ar
configure:6720: checking for archiver @FILE support
configure:6737: gcc -c -g -O2 conftest.c >&5
configure:6737: $? = 0
configure:6740: ar cru libconftest.a @conftest.lst >&5
configure:6743: $? = 0
configure:6748: ar cru libconftest.a @conftest.lst >&5
ar: conftest.o: No such file or directory
configure:6751: $? = 1
configure:6763: result: @
configure:6821: checking for strip
configure:6837: found /bin/strip
configure:6848: result: strip
configure:6920: checking for ranlib
configure:6936: found /bin/ranlib
configure:6947: result: ranlib
configure:7049: checking command to parse /bin/nm -B output from gcc object
configure:7169: gcc -c -g -O2 conftest.c >&5
configure:7172: $? = 0
configure:7176: /bin/nm -B conftest.o \| sed -n -e 's/^.*[ ]\([ABCDGIRSTW][ABCDGIRSTW]*\)[ ][ ]*\([_A-Za-z][_A-Za-z0-9]*\)$/\1 \2 \2/p' | sed '/ __gnu_lto/d' \> conftest.nm
configure:7179: $? = 0
configure:7245: gcc -o conftest -g -O2 conftest.c conftstm.o >&5
configure:7248: $? = 0
configure:7286: result: ok
configure:7323: checking for sysroot
configure:7353: result: no
configure:7430: gcc -c -g -O2 conftest.c >&5
configure:7433: $? = 0
configure:7609: checking for mt
configure:7639: result: no
configure:7659: checking if : is a manifest tool
configure:7665: : '-?'
configure:7673: result: no
configure:8315: checking how to run the C preprocessor
configure:8346: gcc -E conftest.c
configure:8346: $? = 0
configure:8360: gcc -E conftest.c
conftest.c:11:28: fatal error: ac_nonexistent.h: No such file or directory
compilation terminated.
configure:8360: $? = 1
configure: failed program was:
| /* confdefs.h */
| #define PACKAGE_NAME "npstat"
| #define PACKAGE_TARNAME "npstat"
| #define PACKAGE_VERSION "2.2.0"
| #define PACKAGE_STRING "npstat 2.2.0"
| #define PACKAGE_BUGREPORT ""
| #define PACKAGE_URL ""
| #define PACKAGE "npstat"
| #define VERSION "2.2.0"
| /* end confdefs.h. */
| #include <ac_nonexistent.h>
configure:8385: result: gcc -E
configure:8405: gcc -E conftest.c
configure:8405: $? = 0
configure:8419: gcc -E conftest.c
conftest.c:11:28: fatal error: ac_nonexistent.h: No such file or directory
compilation terminated.
configure:8419: $? = 1
configure: failed program was:
| /* confdefs.h */
| #define PACKAGE_NAME "npstat"
| #define PACKAGE_TARNAME "npstat"
| #define PACKAGE_VERSION "2.2.0"
| #define PACKAGE_STRING "npstat 2.2.0"
| #define PACKAGE_BUGREPORT ""
| #define PACKAGE_URL ""
| #define PACKAGE "npstat"
| #define VERSION "2.2.0"
| /* end confdefs.h. */
| #include <ac_nonexistent.h>
configure:8448: checking for ANSI C header files
configure:8468: gcc -c -g -O2 conftest.c >&5
configure:8468: $? = 0
configure:8541: gcc -o conftest -g -O2 conftest.c >&5
configure:8541: $? = 0
configure:8541: ./conftest
configure:8541: $? = 0
configure:8552: result: yes
configure:8565: checking for sys/types.h
configure:8565: gcc -c -g -O2 conftest.c >&5
configure:8565: $? = 0
configure:8565: result: yes
configure:8565: checking for sys/stat.h
configure:8565: gcc -c -g -O2 conftest.c >&5
configure:8565: $? = 0
configure:8565: result: yes
configure:8565: checking for stdlib.h
configure:8565: gcc -c -g -O2 conftest.c >&5
configure:8565: $? = 0
configure:8565: result: yes
configure:8565: checking for string.h
configure:8565: gcc -c -g -O2 conftest.c >&5
configure:8565: $? = 0
configure:8565: result: yes
configure:8565: checking for memory.h
configure:8565: gcc -c -g -O2 conftest.c >&5
configure:8565: $? = 0
configure:8565: result: yes
configure:8565: checking for strings.h
configure:8565: gcc -c -g -O2 conftest.c >&5
configure:8565: $? = 0
configure:8565: result: yes
configure:8565: checking for inttypes.h
configure:8565: gcc -c -g -O2 conftest.c >&5
configure:8565: $? = 0
configure:8565: result: yes
configure:8565: checking for stdint.h
configure:8565: gcc -c -g -O2 conftest.c >&5
configure:8565: $? = 0
configure:8565: result: yes
configure:8565: checking for unistd.h
configure:8565: gcc -c -g -O2 conftest.c >&5
configure:8565: $? = 0
configure:8565: result: yes
configure:8579: checking for dlfcn.h
configure:8579: gcc -c -g -O2 conftest.c >&5
configure:8579: $? = 0
configure:8579: result: yes
configure:8796: checking for objdir
configure:8811: result: .libs
configure:9082: checking if gcc supports -fno-rtti -fno-exceptions
configure:9100: gcc -c -g -O2 -fno-rtti -fno-exceptions conftest.c >&5
cc1: warning: command line option '-fno-rtti' is valid for C++/ObjC++ but not for C [enabled by default]
configure:9104: $? = 0
configure:9117: result: no
configure:9444: checking for gcc option to produce PIC
configure:9451: result: -fPIC -DPIC
configure:9459: checking if gcc PIC flag -fPIC -DPIC works
configure:9477: gcc -c -g -O2 -fPIC -DPIC -DPIC conftest.c >&5
configure:9481: $? = 0
configure:9494: result: yes
configure:9523: checking if gcc static flag -static works
configure:9551: result: no
configure:9566: checking if gcc supports -c -o file.o
configure:9587: gcc -c -g -O2 -o out/conftest2.o conftest.c >&5
configure:9591: $? = 0
configure:9613: result: yes
configure:9621: checking if gcc supports -c -o file.o
configure:9668: result: yes
configure:9701: checking whether the gcc linker (/bin/ld -m elf_x86_64) supports shared libraries
configure:10854: result: yes
configure:10891: checking whether -lc should be explicitly linked in
configure:10899: gcc -c -g -O2 conftest.c >&5
configure:10902: $? = 0
configure:10917: gcc -shared -fPIC -DPIC conftest.o -v -Wl,-soname -Wl,conftest -o conftest 2\>\&1 \| /bin/grep -lc \>/dev/null 2\>\&1
configure:10920: $? = 0
configure:10934: result: no
configure:11094: checking dynamic linker characteristics
configure:11605: gcc -o conftest -g -O2 -Wl,-rpath -Wl,/foo conftest.c >&5
configure:11605: $? = 0
configure:11831: result: GNU/Linux ld.so
configure:11938: checking how to hardcode library paths into programs
configure:11963: result: immediate
configure:12503: checking whether stripping libraries is possible
configure:12508: result: yes
configure:12543: checking if libtool supports shared libraries
configure:12545: result: yes
configure:12548: checking whether to build shared libraries
configure:12569: result: yes
configure:12572: checking whether to build static libraries
configure:12576: result: yes
configure:12599: checking how to run the C++ preprocessor
configure:12626: g++ -E conftest.cpp
configure:12626: $? = 0
configure:12640: g++ -E conftest.cpp
conftest.cpp:23:28: fatal error: ac_nonexistent.h: No such file or directory
compilation terminated.
configure:12640: $? = 1
configure: failed program was:
| /* confdefs.h */
| #define PACKAGE_NAME "npstat"
| #define PACKAGE_TARNAME "npstat"
| #define PACKAGE_VERSION "2.2.0"
| #define PACKAGE_STRING "npstat 2.2.0"
| #define PACKAGE_BUGREPORT ""
| #define PACKAGE_URL ""
| #define PACKAGE "npstat"
| #define VERSION "2.2.0"
| #define STDC_HEADERS 1
| #define HAVE_SYS_TYPES_H 1
| #define HAVE_SYS_STAT_H 1
| #define HAVE_STDLIB_H 1
| #define HAVE_STRING_H 1
| #define HAVE_MEMORY_H 1
| #define HAVE_STRINGS_H 1
| #define HAVE_INTTYPES_H 1
| #define HAVE_STDINT_H 1
| #define HAVE_UNISTD_H 1
| #define HAVE_DLFCN_H 1
| #define LT_OBJDIR ".libs/"
| /* end confdefs.h. */
| #include <ac_nonexistent.h>
configure:12665: result: g++ -E
configure:12685: g++ -E conftest.cpp
configure:12685: $? = 0
configure:12699: g++ -E conftest.cpp
conftest.cpp:23:28: fatal error: ac_nonexistent.h: No such file or directory
compilation terminated.
configure:12699: $? = 1
configure: failed program was:
| /* confdefs.h */
| #define PACKAGE_NAME "npstat"
| #define PACKAGE_TARNAME "npstat"
| #define PACKAGE_VERSION "2.2.0"
| #define PACKAGE_STRING "npstat 2.2.0"
| #define PACKAGE_BUGREPORT ""
| #define PACKAGE_URL ""
| #define PACKAGE "npstat"
| #define VERSION "2.2.0"
| #define STDC_HEADERS 1
| #define HAVE_SYS_TYPES_H 1
| #define HAVE_SYS_STAT_H 1
| #define HAVE_STDLIB_H 1
| #define HAVE_STRING_H 1
| #define HAVE_MEMORY_H 1
| #define HAVE_STRINGS_H 1
| #define HAVE_INTTYPES_H 1
| #define HAVE_STDINT_H 1
| #define HAVE_UNISTD_H 1
| #define HAVE_DLFCN_H 1
| #define LT_OBJDIR ".libs/"
| /* end confdefs.h. */
| #include <ac_nonexistent.h>
configure:12868: checking for ld used by g++
configure:12935: result: /bin/ld -m elf_x86_64
configure:12942: checking if the linker (/bin/ld -m elf_x86_64) is GNU ld
configure:12957: result: yes
configure:13012: checking whether the g++ linker (/bin/ld -m elf_x86_64) supports shared libraries
configure:14017: result: yes
configure:14053: g++ -c -std=c++0x -g -O0 -Wall -W -Werror conftest.cpp >&5
configure:14056: $? = 0
configure:14576: checking for g++ option to produce PIC
configure:14583: result: -fPIC -DPIC
configure:14591: checking if g++ PIC flag -fPIC -DPIC works
configure:14609: g++ -c -std=c++0x -g -O0 -Wall -W -Werror -fPIC -DPIC -DPIC conftest.cpp >&5
configure:14613: $? = 0
configure:14626: result: yes
configure:14649: checking if g++ static flag -static works
configure:14677: result: no
configure:14689: checking if g++ supports -c -o file.o
configure:14710: g++ -c -std=c++0x -g -O0 -Wall -W -Werror -o out/conftest2.o conftest.cpp >&5
configure:14714: $? = 0
configure:14736: result: yes
configure:14741: checking if g++ supports -c -o file.o
configure:14788: result: yes
configure:14818: checking whether the g++ linker (/bin/ld -m elf_x86_64) supports shared libraries
configure:14854: result: yes
configure:14995: checking dynamic linker characteristics
configure:15666: result: GNU/Linux ld.so
configure:15719: checking how to hardcode library paths into programs
configure:15744: result: immediate
configure:15892: checking if libtool supports shared libraries
configure:15894: result: yes
configure:15897: checking whether to build shared libraries
configure:15917: result: yes
configure:15920: checking whether to build static libraries
configure:15924: result: yes
configure:16245: checking for g77 option to produce PIC
configure:16252: result: -fPIC
configure:16260: checking if g77 PIC flag -fPIC works
configure:16278: g77 -c -g -O2 -fPIC conftest.f >&5
configure:16282: $? = 0
configure:16295: result: yes
configure:16318: checking if g77 static flag -static works
configure:16346: result: no
configure:16358: checking if g77 supports -c -o file.o
configure:16379: g77 -c -g -O2 -o out/conftest2.o conftest.f >&5
configure:16383: $? = 0
configure:16405: result: yes
configure:16410: checking if g77 supports -c -o file.o
configure:16457: result: yes
configure:16487: checking whether the g77 linker (/bin/ld -m elf_x86_64) supports shared libraries
configure:17590: result: yes
configure:17731: checking dynamic linker characteristics
configure:18396: result: GNU/Linux ld.so
configure:18449: checking how to hardcode library paths into programs
configure:18474: result: immediate
configure:18674: checking that generated files are newer than configure
configure:18680: result: done
configure:18707: creating ./config.status
## ---------------------- ##
## Running config.status. ##
## ---------------------- ##
This file was extended by npstat config.status 2.2.0, which was
generated by GNU Autoconf 2.69. Invocation command line was
CONFIG_FILES =
CONFIG_HEADERS =
CONFIG_LINKS =
CONFIG_COMMANDS =
$ ./config.status
on sam
config.status:1132: creating Makefile
config.status:1132: creating npstat/nm/Makefile
config.status:1132: creating npstat/rng/Makefile
config.status:1132: creating npstat/stat/Makefile
config.status:1132: creating npstat/wrap/Makefile
config.status:1132: creating npstat/interfaces/Makefile
config.status:1132: creating npstat/Makefile
config.status:1132: creating examples/C++/Makefile
config.status:1132: creating npstat.pc
config.status:1304: executing depfiles commands
config.status:1304: executing libtool commands
## ---------------- ##
## Cache variables. ##
## ---------------- ##
ac_cv_build=x86_64-unknown-linux-gnu
ac_cv_c_compiler_gnu=yes
ac_cv_cxx_compiler_gnu=yes
ac_cv_env_CCC_set=
ac_cv_env_CCC_value=
ac_cv_env_CC_set=
ac_cv_env_CC_value=
ac_cv_env_CFLAGS_set=
ac_cv_env_CFLAGS_value=
ac_cv_env_CPPFLAGS_set=
ac_cv_env_CPPFLAGS_value=
ac_cv_env_CPP_set=
ac_cv_env_CPP_value=
ac_cv_env_CXXCPP_set=
ac_cv_env_CXXCPP_value=
ac_cv_env_CXXFLAGS_set=set
ac_cv_env_CXXFLAGS_value='-std=c++0x -g -O0 -Wall -W -Werror'
ac_cv_env_CXX_set=
ac_cv_env_CXX_value=
ac_cv_env_DEPS_CFLAGS_set=
ac_cv_env_DEPS_CFLAGS_value=
ac_cv_env_DEPS_LIBS_set=
ac_cv_env_DEPS_LIBS_value=
ac_cv_env_F77_set=
ac_cv_env_F77_value=
ac_cv_env_FFLAGS_set=
ac_cv_env_FFLAGS_value=
ac_cv_env_LDFLAGS_set=
ac_cv_env_LDFLAGS_value=
ac_cv_env_LIBS_set=
ac_cv_env_LIBS_value=
ac_cv_env_PKG_CONFIG_LIBDIR_set=
ac_cv_env_PKG_CONFIG_LIBDIR_value=
ac_cv_env_PKG_CONFIG_PATH_set=set
ac_cv_env_PKG_CONFIG_PATH_value=/usr/local/lib/pkgconfig
ac_cv_env_PKG_CONFIG_set=
ac_cv_env_PKG_CONFIG_value=
ac_cv_env_build_alias_set=
ac_cv_env_build_alias_value=
ac_cv_env_host_alias_set=
ac_cv_env_host_alias_value=
ac_cv_env_target_alias_set=
ac_cv_env_target_alias_value=
ac_cv_f77_compiler_gnu=yes
ac_cv_f77_libs=' -L/usr/lib/gcc/x86_64-redhat-linux/4.7.2 -L/usr/lib/gcc/x86_64-redhat-linux/4.7.2/../../../../lib64 -L/lib/../lib64 -L/usr/lib/../lib64 -L/usr/lib/gcc/x86_64-redhat-linux/4.7.2/../../.. -lgfortran -lm -lquadmath'
ac_cv_header_dlfcn_h=yes
ac_cv_header_inttypes_h=yes
ac_cv_header_memory_h=yes
ac_cv_header_stdc=yes
ac_cv_header_stdint_h=yes
ac_cv_header_stdlib_h=yes
ac_cv_header_string_h=yes
ac_cv_header_strings_h=yes
ac_cv_header_sys_stat_h=yes
ac_cv_header_sys_types_h=yes
ac_cv_header_unistd_h=yes
ac_cv_host=x86_64-unknown-linux-gnu
ac_cv_objext=o
ac_cv_path_EGREP='/bin/grep -E'
ac_cv_path_FGREP='/bin/grep -F'
ac_cv_path_GREP=/bin/grep
ac_cv_path_SED=/bin/sed
ac_cv_path_ac_pt_PKG_CONFIG=/bin/pkg-config
ac_cv_path_install='/bin/install -c'
ac_cv_path_mkdir=/bin/mkdir
ac_cv_prog_AWK=gawk
ac_cv_prog_CPP='gcc -E'
ac_cv_prog_CXXCPP='g++ -E'
ac_cv_prog_ac_ct_AR=ar
ac_cv_prog_ac_ct_CC=gcc
ac_cv_prog_ac_ct_CXX=g++
ac_cv_prog_ac_ct_F77=g77
ac_cv_prog_ac_ct_OBJDUMP=objdump
ac_cv_prog_ac_ct_RANLIB=ranlib
ac_cv_prog_ac_ct_STRIP=strip
ac_cv_prog_cc_c89=
ac_cv_prog_cc_g=yes
ac_cv_prog_cxx_g=yes
ac_cv_prog_f77_g=yes
ac_cv_prog_f77_v=-v
ac_cv_prog_make_make_set=yes
am_cv_CC_dependencies_compiler_type=gcc3
am_cv_CXX_dependencies_compiler_type=gcc3
lt_cv_ar_at_file=@
lt_cv_archive_cmds_need_lc=no
lt_cv_deplibs_check_method=pass_all
lt_cv_file_magic_cmd='$MAGIC_CMD'
lt_cv_file_magic_test_file=
lt_cv_ld_reload_flag=-r
lt_cv_nm_interface='BSD nm'
lt_cv_objdir=.libs
lt_cv_path_LD=/bin/ld
lt_cv_path_LDCXX='/bin/ld -m elf_x86_64'
lt_cv_path_NM='/bin/nm -B'
lt_cv_path_mainfest_tool=no
lt_cv_prog_compiler_c_o=yes
lt_cv_prog_compiler_c_o_CXX=yes
lt_cv_prog_compiler_c_o_F77=yes
lt_cv_prog_compiler_pic='-fPIC -DPIC'
lt_cv_prog_compiler_pic_CXX='-fPIC -DPIC'
lt_cv_prog_compiler_pic_F77=-fPIC
lt_cv_prog_compiler_pic_works=yes
lt_cv_prog_compiler_pic_works_CXX=yes
lt_cv_prog_compiler_pic_works_F77=yes
lt_cv_prog_compiler_rtti_exceptions=no
lt_cv_prog_compiler_static_works=no
lt_cv_prog_compiler_static_works_CXX=no
lt_cv_prog_compiler_static_works_F77=no
lt_cv_prog_gnu_ld=yes
lt_cv_prog_gnu_ldcxx=yes
lt_cv_sharedlib_from_linklib_cmd='printf %s\n'
lt_cv_shlibpath_overrides_runpath=no
lt_cv_sys_global_symbol_pipe='sed -n -e '\''s/^.*[ ]\([ABCDGIRSTW][ABCDGIRSTW]*\)[ ][ ]*\([_A-Za-z][_A-Za-z0-9]*\)$/\1 \2 \2/p'\'' | sed '\''/ __gnu_lto/d'\'''
lt_cv_sys_global_symbol_to_c_name_address='sed -n -e '\''s/^: \([^ ]*\)[ ]*$/ {\"\1\", (void *) 0},/p'\'' -e '\''s/^[ABCDGIRSTW]* \([^ ]*\) \([^ ]*\)$/ {"\2", (void *) \&\2},/p'\'''
lt_cv_sys_global_symbol_to_c_name_address_lib_prefix='sed -n -e '\''s/^: \([^ ]*\)[ ]*$/ {\"\1\", (void *) 0},/p'\'' -e '\''s/^[ABCDGIRSTW]* \([^ ]*\) \(lib[^ ]*\)$/ {"\2", (void *) \&\2},/p'\'' -e '\''s/^[ABCDGIRSTW]* \([^ ]*\) \([^ ]*\)$/ {"lib\2", (void *) \&\2},/p'\'''
lt_cv_sys_global_symbol_to_cdecl='sed -n -e '\''s/^T .* \(.*\)$/extern int \1();/p'\'' -e '\''s/^[ABCDGIRSTW]* .* \(.*\)$/extern char \1;/p'\'''
lt_cv_sys_max_cmd_len=1572864
lt_cv_to_host_file_cmd=func_convert_file_noop
lt_cv_to_tool_file_cmd=func_convert_file_noop
pkg_cv_DEPS_CFLAGS='-I/usr/local/include '
pkg_cv_DEPS_LIBS='-L/usr/local/lib -lfftw3 -lm -lgeners '
## ----------------- ##
## Output variables. ##
## ----------------- ##
ACLOCAL='${SHELL} /home/igv/Hepforge/npstat/trunk/missing --run aclocal-1.12'
AMDEPBACKSLASH='\'
AMDEP_FALSE='#'
AMDEP_TRUE=''
AMTAR='$${TAR-tar}'
AR='ar'
AUTOCONF='${SHELL} /home/igv/Hepforge/npstat/trunk/missing --run autoconf'
AUTOHEADER='${SHELL} /home/igv/Hepforge/npstat/trunk/missing --run autoheader'
AUTOMAKE='${SHELL} /home/igv/Hepforge/npstat/trunk/missing --run automake-1.12'
AWK='gawk'
CC='gcc'
CCDEPMODE='depmode=gcc3'
CFLAGS='-g -O2'
CPP='gcc -E'
CPPFLAGS=''
CXX='g++'
CXXCPP='g++ -E'
CXXDEPMODE='depmode=gcc3'
CXXFLAGS='-std=c++0x -g -O0 -Wall -W -Werror'
CYGPATH_W='echo'
DEFS='-DPACKAGE_NAME=\"npstat\" -DPACKAGE_TARNAME=\"npstat\" -DPACKAGE_VERSION=\"2.2.0\" -DPACKAGE_STRING=\"npstat\ 2.2.0\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE_URL=\"\" -DPACKAGE=\"npstat\" -DVERSION=\"2.2.0\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -DHAVE_DLFCN_H=1 -DLT_OBJDIR=\".libs/\"'
DEPDIR='.deps'
DEPS_CFLAGS='-I/usr/local/include '
DEPS_LIBS='-L/usr/local/lib -lfftw3 -lm -lgeners '
DLLTOOL='false'
DSYMUTIL=''
DUMPBIN=''
ECHO_C=''
ECHO_N='-n'
ECHO_T=''
EGREP='/bin/grep -E'
EXEEXT=''
F77='g77'
FFLAGS='-g -O2'
FGREP='/bin/grep -F'
FLIBS=' -L/usr/lib/gcc/x86_64-redhat-linux/4.7.2 -L/usr/lib/gcc/x86_64-redhat-linux/4.7.2/../../../../lib64 -L/lib/../lib64 -L/usr/lib/../lib64 -L/usr/lib/gcc/x86_64-redhat-linux/4.7.2/../../.. -lgfortran -lm -lquadmath'
GREP='/bin/grep'
INSTALL_DATA='${INSTALL} -m 644'
INSTALL_PROGRAM='${INSTALL}'
INSTALL_SCRIPT='${INSTALL}'
INSTALL_STRIP_PROGRAM='$(install_sh) -c -s'
LD='/bin/ld -m elf_x86_64'
LDFLAGS=''
LIBOBJS=''
LIBS=''
LIBTOOL='$(SHELL) $(top_builddir)/libtool'
LIPO=''
LN_S='ln -s'
LTLIBOBJS=''
MAKEINFO='${SHELL} /home/igv/Hepforge/npstat/trunk/missing --run makeinfo'
MANIFEST_TOOL=':'
MKDIR_P='/bin/mkdir -p'
NM='/bin/nm -B'
NMEDIT=''
OBJDUMP='objdump'
OBJEXT='o'
OTOOL64=''
OTOOL=''
PACKAGE='npstat'
PACKAGE_BUGREPORT=''
PACKAGE_NAME='npstat'
PACKAGE_STRING='npstat 2.2.0'
PACKAGE_TARNAME='npstat'
PACKAGE_URL=''
PACKAGE_VERSION='2.2.0'
PATH_SEPARATOR=':'
PKG_CONFIG='/bin/pkg-config'
PKG_CONFIG_LIBDIR=''
PKG_CONFIG_PATH='/usr/local/lib/pkgconfig'
RANLIB='ranlib'
SED='/bin/sed'
SET_MAKE=''
SHELL='/bin/sh'
STRIP='strip'
VERSION='2.2.0'
ac_ct_AR='ar'
ac_ct_CC='gcc'
ac_ct_CXX='g++'
ac_ct_DUMPBIN=''
ac_ct_F77='g77'
am__EXEEXT_FALSE=''
am__EXEEXT_TRUE='#'
am__fastdepCC_FALSE='#'
am__fastdepCC_TRUE=''
am__fastdepCXX_FALSE='#'
am__fastdepCXX_TRUE=''
am__include='include'
am__isrc=''
am__leading_dot='.'
am__nodep='_no'
am__quote=''
am__tar='$${TAR-tar} chof - "$$tardir"'
am__untar='$${TAR-tar} xf -'
bindir='${exec_prefix}/bin'
build='x86_64-unknown-linux-gnu'
build_alias=''
build_cpu='x86_64'
build_os='linux-gnu'
build_vendor='unknown'
datadir='${datarootdir}'
datarootdir='${prefix}/share'
docdir='${datarootdir}/doc/${PACKAGE_TARNAME}'
dvidir='${docdir}'
exec_prefix='${prefix}'
host='x86_64-unknown-linux-gnu'
host_alias=''
host_cpu='x86_64'
host_os='linux-gnu'
host_vendor='unknown'
htmldir='${docdir}'
includedir='${prefix}/include'
infodir='${datarootdir}/info'
install_sh='${SHELL} /home/igv/Hepforge/npstat/trunk/install-sh'
libdir='${exec_prefix}/lib'
libexecdir='${exec_prefix}/libexec'
localedir='${datarootdir}/locale'
localstatedir='${prefix}/var'
mandir='${datarootdir}/man'
mkdir_p='$(MKDIR_P)'
oldincludedir='/usr/include'
pdfdir='${docdir}'
prefix='/usr/local'
program_transform_name='s,x,x,'
psdir='${docdir}'
sbindir='${exec_prefix}/sbin'
sharedstatedir='${prefix}/com'
sysconfdir='${prefix}/etc'
target_alias=''
## ----------- ##
## confdefs.h. ##
## ----------- ##
/* confdefs.h */
#define PACKAGE_NAME "npstat"
#define PACKAGE_TARNAME "npstat"
#define PACKAGE_VERSION "2.2.0"
#define PACKAGE_STRING "npstat 2.2.0"
#define PACKAGE_BUGREPORT ""
#define PACKAGE_URL ""
#define PACKAGE "npstat"
#define VERSION "2.2.0"
#define STDC_HEADERS 1
#define HAVE_SYS_TYPES_H 1
#define HAVE_SYS_STAT_H 1
#define HAVE_STDLIB_H 1
#define HAVE_STRING_H 1
#define HAVE_MEMORY_H 1
#define HAVE_STRINGS_H 1
#define HAVE_INTTYPES_H 1
#define HAVE_STDINT_H 1
#define HAVE_UNISTD_H 1
#define HAVE_DLFCN_H 1
#define LT_OBJDIR ".libs/"
configure: exit 0
Index: trunk/autom4te.cache/requests
===================================================================
--- trunk/autom4te.cache/requests (revision 101)
+++ trunk/autom4te.cache/requests (revision 102)
@@ -1,1470 +1,1470 @@
# This file was generated.
# It contains the lists of macros which have been traced.
# It can be safely removed.
@request = (
bless( [
'0',
1,
[
'/usr/share/autoconf'
],
[
'/usr/share/autoconf/autoconf/autoconf.m4f',
'/usr/share/aclocal/libtool.m4',
'/usr/share/aclocal/pkg.m4',
'/usr/share/aclocal-1.9/amversion.m4',
'/usr/share/aclocal-1.9/auxdir.m4',
'/usr/share/aclocal-1.9/cond.m4',
'/usr/share/aclocal-1.9/depend.m4',
'/usr/share/aclocal-1.9/depout.m4',
'/usr/share/aclocal-1.9/init.m4',
'/usr/share/aclocal-1.9/install-sh.m4',
'/usr/share/aclocal-1.9/lead-dot.m4',
'/usr/share/aclocal-1.9/make.m4',
'/usr/share/aclocal-1.9/missing.m4',
'/usr/share/aclocal-1.9/mkdirp.m4',
'/usr/share/aclocal-1.9/options.m4',
'/usr/share/aclocal-1.9/runlog.m4',
'/usr/share/aclocal-1.9/sanity.m4',
'/usr/share/aclocal-1.9/strip.m4',
'/usr/share/aclocal-1.9/tar.m4',
'configure.ac'
],
{
'AM_ENABLE_STATIC' => 1,
'AC_LIBTOOL_LANG_RC_CONFIG' => 1,
- 'AC_TYPE_OFF_T' => 1,
'AC_C_VOLATILE' => 1,
- 'AC_FUNC_CLOSEDIR_VOID' => 1,
+ 'AC_TYPE_OFF_T' => 1,
'_LT_AC_SHELL_INIT' => 1,
+ 'AC_FUNC_CLOSEDIR_VOID' => 1,
'AC_REPLACE_FNMATCH' => 1,
'AC_DEFUN' => 1,
- '_LT_AC_LANG_CXX_CONFIG' => 1,
'AC_PROG_LIBTOOL' => 1,
- 'AC_FUNC_STAT' => 1,
+ '_LT_AC_LANG_CXX_CONFIG' => 1,
'AM_PROG_MKDIR_P' => 1,
+ 'AC_FUNC_STAT' => 1,
'AC_FUNC_WAIT3' => 1,
- 'AC_STRUCT_TM' => 1,
- 'AC_FUNC_LSTAT' => 1,
'AM_AUTOMAKE_VERSION' => 1,
- 'AC_FUNC_STRTOD' => 1,
+ 'AC_FUNC_LSTAT' => 1,
+ 'AC_STRUCT_TM' => 1,
'AC_CHECK_HEADERS' => 1,
+ 'AC_FUNC_STRTOD' => 1,
'AM_MISSING_PROG' => 1,
'AC_FUNC_STRNLEN' => 1,
'AC_LIBTOOL_PROG_LD_HARDCODE_LIBPATH' => 1,
'AC_PROG_CXX' => 1,
'_LT_AC_LANG_C_CONFIG' => 1,
'AC_FUNC_LSTAT_FOLLOWS_SLASHED_SYMLINK' => 1,
'AM_PROG_INSTALL_STRIP' => 1,
'AC_PROG_AWK' => 1,
'_m4_warn' => 1,
'AC_LIBTOOL_OBJDIR' => 1,
'AC_HEADER_MAJOR' => 1,
'AM_SANITY_CHECK' => 1,
'AC_LIBTOOL_PROG_COMPILER_PIC' => 1,
'AC_LIBTOOL_LANG_GCJ_CONFIG' => 1,
- '_LT_AC_CHECK_DLFCN' => 1,
'AC_LIBTOOL_SYS_GLOBAL_SYMBOL_PIPE' => 1,
+ '_LT_AC_CHECK_DLFCN' => 1,
'_AM_PROG_TAR' => 1,
'AC_LIBTOOL_GCJ' => 1,
'AC_PROG_GCC_TRADITIONAL' => 1,
'AC_LIBSOURCE' => 1,
'AC_STRUCT_ST_BLOCKS' => 1,
- 'AC_LIBTOOL_CONFIG' => 1,
'_LT_AC_LANG_F77' => 1,
- 'AC_CONFIG_AUX_DIR' => 1,
+ 'AC_LIBTOOL_CONFIG' => 1,
'AC_PROG_MAKE_SET' => 1,
+ 'AC_CONFIG_AUX_DIR' => 1,
'sinclude' => 1,
'AM_DISABLE_SHARED' => 1,
- 'AM_PROG_LIBTOOL' => 1,
'_LT_AC_LANG_CXX' => 1,
- 'AM_PROG_LD' => 1,
+ 'AM_PROG_LIBTOOL' => 1,
'_LT_AC_FILE_LTDLL_C' => 1,
+ 'AM_PROG_LD' => 1,
'AC_FUNC_STRERROR_R' => 1,
- 'AC_FUNC_FORK' => 1,
'AC_DECL_SYS_SIGLIST' => 1,
- 'AC_FUNC_VPRINTF' => 1,
+ 'AC_FUNC_FORK' => 1,
'AU_DEFUN' => 1,
+ 'AC_FUNC_VPRINTF' => 1,
'AC_PROG_NM' => 1,
'AC_LIBTOOL_DLOPEN' => 1,
'AC_PROG_LD' => 1,
'AC_PROG_LD_GNU' => 1,
'AC_ENABLE_FAST_INSTALL' => 1,
'AC_INIT' => 1,
'AC_STRUCT_TIMEZONE' => 1,
'AC_SUBST' => 1,
'AC_FUNC_ALLOCA' => 1,
'_AM_SET_OPTION' => 1,
'AC_CANONICAL_HOST' => 1,
'_LT_LINKER_BOILERPLATE' => 1,
- 'AC_PROG_RANLIB' => 1,
- 'AC_LIBTOOL_LANG_CXX_CONFIG' => 1,
'AC_LIBTOOL_PROG_CC_C_O' => 1,
+ 'AC_LIBTOOL_LANG_CXX_CONFIG' => 1,
+ 'AC_PROG_RANLIB' => 1,
'AC_FUNC_SETPGRP' => 1,
'AC_CONFIG_SUBDIRS' => 1,
'AC_FUNC_MMAP' => 1,
'AC_TYPE_SIZE_T' => 1,
'AC_CHECK_TYPES' => 1,
'AM_OUTPUT_DEPENDENCY_COMMANDS' => 1,
'AC_CHECK_MEMBERS' => 1,
'AC_DEFUN_ONCE' => 1,
'AC_FUNC_UTIME_NULL' => 1,
'AC_FUNC_SELECT_ARGTYPES' => 1,
'_LT_AC_LANG_GCJ' => 1,
- 'AC_HEADER_STAT' => 1,
'AC_FUNC_STRFTIME' => 1,
+ 'AC_HEADER_STAT' => 1,
'AC_C_INLINE' => 1,
'AC_LIBTOOL_RC' => 1,
- 'AC_DISABLE_FAST_INSTALL' => 1,
'_LT_AC_PROG_ECHO_BACKSLASH' => 1,
+ 'AC_DISABLE_FAST_INSTALL' => 1,
'AC_CONFIG_FILES' => 1,
- 'include' => 1,
- '_LT_AC_SYS_LIBPATH_AIX' => 1,
'_LT_AC_TRY_DLOPEN_SELF' => 1,
+ '_LT_AC_SYS_LIBPATH_AIX' => 1,
+ 'include' => 1,
'LT_AC_PROG_SED' => 1,
'AM_ENABLE_SHARED' => 1,
'AM_GNU_GETTEXT' => 1,
'_LT_AC_LANG_GCJ_CONFIG' => 1,
- 'AC_FUNC_OBSTACK' => 1,
- 'AC_CHECK_LIB' => 1,
'AC_ENABLE_SHARED' => 1,
+ 'AC_CHECK_LIB' => 1,
+ 'AC_FUNC_OBSTACK' => 1,
'AC_FUNC_MALLOC' => 1,
'AC_FUNC_GETGROUPS' => 1,
'AC_FUNC_GETLOADAVG' => 1,
- 'AC_FUNC_FSEEKO' => 1,
- 'AC_LIBTOOL_SYS_HARD_LINK_LOCKS' => 1,
'AC_ENABLE_STATIC' => 1,
- 'AM_PROG_CC_C_O' => 1,
+ 'AC_LIBTOOL_SYS_HARD_LINK_LOCKS' => 1,
+ 'AC_FUNC_FSEEKO' => 1,
'_LT_AC_TAGVAR' => 1,
+ 'AM_PROG_CC_C_O' => 1,
'AC_LIBTOOL_LANG_F77_CONFIG' => 1,
- 'AC_FUNC_MKTIME' => 1,
'AM_CONDITIONAL' => 1,
+ 'AC_FUNC_MKTIME' => 1,
'AC_HEADER_SYS_WAIT' => 1,
- 'AC_PROG_LN_S' => 1,
'AC_FUNC_MEMCMP' => 1,
- 'm4_include' => 1,
+ 'AC_PROG_LN_S' => 1,
'AM_PROG_INSTALL_SH' => 1,
- 'AC_HEADER_DIRENT' => 1,
+ 'm4_include' => 1,
'AC_PROG_EGREP' => 1,
- '_AC_AM_CONFIG_HEADER_HOOK' => 1,
+ 'AC_HEADER_DIRENT' => 1,
'AC_PATH_MAGIC' => 1,
+ '_AC_AM_CONFIG_HEADER_HOOK' => 1,
'AM_MAKE_INCLUDE' => 1,
'_LT_AC_TAGCONFIG' => 1,
'm4_pattern_forbid' => 1,
'AC_CONFIG_LIBOBJ_DIR' => 1,
'AC_LIBTOOL_COMPILER_OPTION' => 1,
'AC_DISABLE_SHARED' => 1,
'_LT_COMPILER_BOILERPLATE' => 1,
'AC_LIBTOOL_SETUP' => 1,
'AC_LIBTOOL_WIN32_DLL' => 1,
'AC_PROG_LD_RELOAD_FLAG' => 1,
'AC_HEADER_TIME' => 1,
- 'AC_TYPE_MODE_T' => 1,
- 'AC_FUNC_GETMNTENT' => 1,
'AM_MISSING_HAS_RUN' => 1,
- 'm4_sinclude' => 1,
+ 'AC_FUNC_GETMNTENT' => 1,
+ 'AC_TYPE_MODE_T' => 1,
'AC_LIBTOOL_DLOPEN_SELF' => 1,
+ 'm4_sinclude' => 1,
'AC_PATH_X' => 1,
'AC_LIBTOOL_PROG_LD_SHLIBS' => 1,
'_PKG_SHORT_ERRORS_SUPPORTED' => 1,
'AC_HEADER_STDC' => 1,
'AC_LIBTOOL_LINKER_OPTION' => 1,
'PKG_CHECK_EXISTS' => 1,
- 'LT_AC_PROG_RC' => 1,
'AC_LIBTOOL_CXX' => 1,
+ 'LT_AC_PROG_RC' => 1,
'LT_AC_PROG_GCJ' => 1,
'AC_FUNC_ERROR_AT_LINE' => 1,
- 'AM_DEP_TRACK' => 1,
- '_LT_AC_PROG_CXXCPP' => 1,
'AM_DISABLE_STATIC' => 1,
- 'AC_FUNC_MBRTOWC' => 1,
+ '_LT_AC_PROG_CXXCPP' => 1,
+ 'AM_DEP_TRACK' => 1,
'_AC_PROG_LIBTOOL' => 1,
+ 'AC_FUNC_MBRTOWC' => 1,
'AC_TYPE_SIGNAL' => 1,
- 'AC_TYPE_UID_T' => 1,
'_AM_IF_OPTION' => 1,
+ 'AC_TYPE_UID_T' => 1,
'AC_PATH_TOOL_PREFIX' => 1,
- 'AC_LIBTOOL_F77' => 1,
'm4_pattern_allow' => 1,
+ 'AC_LIBTOOL_F77' => 1,
'AM_SET_LEADING_DOT' => 1,
'AC_DEFINE_TRACE_LITERAL' => 1,
'_AM_DEPENDENCIES' => 1,
'AC_LIBTOOL_LANG_C_CONFIG' => 1,
- 'AC_PROG_CC' => 1,
'_LT_AC_SYS_COMPILER' => 1,
+ 'AC_PROG_CC' => 1,
'AM_PROG_NM' => 1,
'PKG_CHECK_MODULES' => 1,
'AC_FUNC_STRCOLL' => 1,
'AC_PROG_YACC' => 1,
'AC_LIBLTDL_CONVENIENCE' => 1,
'AC_DEPLIBS_CHECK_METHOD' => 1,
- 'AM_SET_CURRENT_AUTOMAKE_VERSION' => 1,
- 'AC_LIBLTDL_INSTALLABLE' => 1,
'AC_FUNC_CHOWN' => 1,
+ 'AC_LIBLTDL_INSTALLABLE' => 1,
+ 'AM_SET_CURRENT_AUTOMAKE_VERSION' => 1,
'AC_LIBTOOL_SYS_DYNAMIC_LINKER' => 1,
'AC_FUNC_GETPGRP' => 1,
'AM_INIT_AUTOMAKE' => 1,
'AC_FUNC_REALLOC' => 1,
'AC_DISABLE_STATIC' => 1,
'AC_CONFIG_LINKS' => 1,
'AM_MAINTAINER_MODE' => 1,
'_LT_AC_LOCK' => 1,
'_LT_AC_LANG_RC_CONFIG' => 1,
'AC_PROG_CPP' => 1,
- 'AC_TYPE_PID_T' => 1,
- 'AC_PROG_LEX' => 1,
'AC_C_CONST' => 1,
+ 'AC_PROG_LEX' => 1,
+ 'AC_TYPE_PID_T' => 1,
'AC_LIBTOOL_POSTDEP_PREDEP' => 1,
'AC_FUNC_SETVBUF_REVERSED' => 1,
- 'AC_PROG_INSTALL' => 1,
'AM_AUX_DIR_EXPAND' => 1,
- 'AC_LIBTOOL_PROG_COMPILER_NO_RTTI' => 1,
+ 'AC_PROG_INSTALL' => 1,
'_LT_AC_LANG_F77_CONFIG' => 1,
+ 'AC_LIBTOOL_PROG_COMPILER_NO_RTTI' => 1,
'_AM_SET_OPTIONS' => 1,
- 'AM_RUN_LOG' => 1,
'_AM_OUTPUT_DEPENDENCY_COMMANDS' => 1,
+ 'AM_RUN_LOG' => 1,
'AC_LIBTOOL_PICMODE' => 1,
'AH_OUTPUT' => 1,
'AC_CHECK_LIBM' => 1,
'AC_LIBTOOL_SYS_LIB_STRIP' => 1,
'_AM_MANGLE_OPTION' => 1,
'AC_CANONICAL_SYSTEM' => 1,
- 'AC_CONFIG_HEADERS' => 1,
'AC_LIBTOOL_SYS_MAX_CMD_LEN' => 1,
+ 'AC_CONFIG_HEADERS' => 1,
'AM_SET_DEPDIR' => 1,
- '_LT_CC_BASENAME' => 1,
'PKG_PROG_PKG_CONFIG' => 1,
+ '_LT_CC_BASENAME' => 1,
'AC_CHECK_FUNCS' => 1
}
], 'Autom4te::Request' ),
bless( [
'1',
1,
[
'/usr/share/autoconf'
],
[
'/usr/share/autoconf/autoconf/autoconf.m4f',
'aclocal.m4',
'configure.ac'
],
{
- '_LT_AC_TAGCONFIG' => 1,
'AM_PROG_F77_C_O' => 1,
+ '_LT_AC_TAGCONFIG' => 1,
'm4_pattern_forbid' => 1,
'AC_CANONICAL_TARGET' => 1,
'AC_CONFIG_LIBOBJ_DIR' => 1,
- 'AC_C_VOLATILE' => 1,
'AC_TYPE_OFF_T' => 1,
+ 'AC_C_VOLATILE' => 1,
'AC_FUNC_CLOSEDIR_VOID' => 1,
'AC_REPLACE_FNMATCH' => 1,
'AC_PROG_LIBTOOL' => 1,
- 'AC_FUNC_STAT' => 1,
'AM_PROG_MKDIR_P' => 1,
- 'AC_FUNC_WAIT3' => 1,
+ 'AC_FUNC_STAT' => 1,
'AC_HEADER_TIME' => 1,
- 'AC_FUNC_LSTAT' => 1,
- 'AC_STRUCT_TM' => 1,
+ 'AC_FUNC_WAIT3' => 1,
'AM_AUTOMAKE_VERSION' => 1,
- 'AC_FUNC_GETMNTENT' => 1,
+ 'AC_STRUCT_TM' => 1,
+ 'AC_FUNC_LSTAT' => 1,
'AC_TYPE_MODE_T' => 1,
- 'AC_FUNC_STRTOD' => 1,
+ 'AC_FUNC_GETMNTENT' => 1,
'AC_CHECK_HEADERS' => 1,
+ 'AC_FUNC_STRTOD' => 1,
'LT_CONFIG_LTDL_DIR' => 1,
'AC_FUNC_STRNLEN' => 1,
'm4_sinclude' => 1,
'AC_PROG_CXX' => 1,
'AC_PATH_X' => 1,
'AM_NLS' => 1,
'AC_FUNC_LSTAT_FOLLOWS_SLASHED_SYMLINK' => 1,
'AC_PROG_AWK' => 1,
'_m4_warn' => 1,
'AC_HEADER_STDC' => 1,
'AC_HEADER_MAJOR' => 1,
'AM_PROG_CXX_C_O' => 1,
'_AM_MAKEFILE_INCLUDE' => 1,
'AM_PROG_MOC' => 1,
'LT_INIT' => 1,
'AC_FUNC_ERROR_AT_LINE' => 1,
'AC_PROG_GCC_TRADITIONAL' => 1,
'AC_LIBSOURCE' => 1,
'AC_FUNC_MBRTOWC' => 1,
'AC_STRUCT_ST_BLOCKS' => 1,
- 'AC_CANONICAL_BUILD' => 1,
- 'AC_TYPE_SIGNAL' => 1,
'AM_PROG_FC_C_O' => 1,
+ 'AC_TYPE_SIGNAL' => 1,
+ 'AC_CANONICAL_BUILD' => 1,
'AC_TYPE_UID_T' => 1,
- 'AC_PROG_MAKE_SET' => 1,
- 'AC_CONFIG_AUX_DIR' => 1,
'_AM_SUBST_NOTMAKE' => 1,
- 'm4_pattern_allow' => 1,
+ 'AC_CONFIG_AUX_DIR' => 1,
+ 'AC_PROG_MAKE_SET' => 1,
'sinclude' => 1,
+ 'm4_pattern_allow' => 1,
'AC_DEFINE_TRACE_LITERAL' => 1,
'AC_FUNC_STRERROR_R' => 1,
'AC_PROG_CC' => 1,
- 'AC_DECL_SYS_SIGLIST' => 1,
- 'AC_FUNC_FORK' => 1,
'_AM_COND_ELSE' => 1,
- 'AC_FUNC_STRCOLL' => 1,
+ 'AC_FUNC_FORK' => 1,
+ 'AC_DECL_SYS_SIGLIST' => 1,
'AC_FUNC_VPRINTF' => 1,
+ 'AC_FUNC_STRCOLL' => 1,
'AC_PROG_YACC' => 1,
'AC_SUBST_TRACE' => 1,
'AC_INIT' => 1,
'AC_STRUCT_TIMEZONE' => 1,
'_AM_COND_IF' => 1,
'AC_FUNC_CHOWN' => 1,
- 'AC_FUNC_ALLOCA' => 1,
'AC_SUBST' => 1,
- 'AC_CANONICAL_HOST' => 1,
- 'AC_FC_SRCEXT' => 1,
+ 'AC_FUNC_ALLOCA' => 1,
'AC_FUNC_GETPGRP' => 1,
+ 'AC_FC_SRCEXT' => 1,
+ 'AC_CANONICAL_HOST' => 1,
'AC_PROG_RANLIB' => 1,
- 'AC_FUNC_SETPGRP' => 1,
'AM_INIT_AUTOMAKE' => 1,
- 'AC_CONFIG_SUBDIRS' => 1,
+ 'AC_FUNC_SETPGRP' => 1,
'AM_PATH_GUILE' => 1,
+ 'AC_CONFIG_SUBDIRS' => 1,
'AC_FUNC_MMAP' => 1,
'AC_FUNC_REALLOC' => 1,
'AC_TYPE_SIZE_T' => 1,
- 'AC_CHECK_TYPES' => 1,
- 'AC_CONFIG_LINKS' => 1,
'AC_REQUIRE_AUX_FILE' => 1,
+ 'AC_CONFIG_LINKS' => 1,
+ 'AC_CHECK_TYPES' => 1,
'LT_SUPPORTED_TAG' => 1,
'AC_CHECK_MEMBERS' => 1,
'AM_MAINTAINER_MODE' => 1,
'AC_FUNC_UTIME_NULL' => 1,
- 'AC_FC_PP_DEFINE' => 1,
'AC_FUNC_SELECT_ARGTYPES' => 1,
+ 'AC_FC_PP_DEFINE' => 1,
'AM_GNU_GETTEXT_INTL_SUBDIR' => 1,
'AM_MAKEFILE_INCLUDE' => 1,
- 'AC_HEADER_STAT' => 1,
'AC_FUNC_STRFTIME' => 1,
- 'AC_PROG_CPP' => 1,
- 'AC_C_INLINE' => 1,
+ 'AC_HEADER_STAT' => 1,
'_AM_COND_ENDIF' => 1,
- 'AM_ENABLE_MULTILIB' => 1,
- 'AC_PROG_LEX' => 1,
- 'AC_C_CONST' => 1,
+ 'AC_C_INLINE' => 1,
+ 'AC_PROG_CPP' => 1,
'AC_TYPE_PID_T' => 1,
+ 'AC_C_CONST' => 1,
+ 'AC_PROG_LEX' => 1,
+ 'AM_ENABLE_MULTILIB' => 1,
'AM_SILENT_RULES' => 1,
'AC_CONFIG_FILES' => 1,
'include' => 1,
'AC_FUNC_SETVBUF_REVERSED' => 1,
'AC_PROG_INSTALL' => 1,
- 'AM_GNU_GETTEXT' => 1,
'AM_PROG_AR' => 1,
- 'AC_CHECK_LIB' => 1,
+ 'AM_GNU_GETTEXT' => 1,
'AC_FUNC_OBSTACK' => 1,
+ 'AC_CHECK_LIB' => 1,
'AC_FUNC_MALLOC' => 1,
'AC_FUNC_GETGROUPS' => 1,
- 'AC_FUNC_GETLOADAVG' => 1,
'AC_FC_FREEFORM' => 1,
- 'AC_FC_PP_SRCEXT' => 1,
+ 'AC_FUNC_GETLOADAVG' => 1,
'AH_OUTPUT' => 1,
+ 'AC_FC_PP_SRCEXT' => 1,
'AC_FUNC_FSEEKO' => 1,
'AM_PROG_CC_C_O' => 1,
- 'AC_FUNC_MKTIME' => 1,
- 'AC_CANONICAL_SYSTEM' => 1,
- 'AM_CONDITIONAL' => 1,
'AM_XGETTEXT_OPTION' => 1,
+ 'AM_CONDITIONAL' => 1,
+ 'AC_CANONICAL_SYSTEM' => 1,
+ 'AC_FUNC_MKTIME' => 1,
'AC_CONFIG_HEADERS' => 1,
- 'AC_HEADER_SYS_WAIT' => 1,
'AM_POT_TOOLS' => 1,
- 'AC_PROG_LN_S' => 1,
+ 'AC_HEADER_SYS_WAIT' => 1,
'AC_FUNC_MEMCMP' => 1,
+ 'AC_PROG_LN_S' => 1,
'm4_include' => 1,
'AC_HEADER_DIRENT' => 1,
'AC_CHECK_FUNCS' => 1
}
], 'Autom4te::Request' ),
bless( [
'2',
1,
[
'/usr/share/autoconf'
],
[
'/usr/share/autoconf/autoconf/autoconf.m4f',
'/usr/share/aclocal/argz.m4',
'/usr/share/aclocal/libtool.m4',
'/usr/share/aclocal/ltdl.m4',
'/usr/share/aclocal/ltoptions.m4',
'/usr/share/aclocal/ltsugar.m4',
'/usr/share/aclocal/ltversion.m4',
'/usr/share/aclocal/lt~obsolete.m4',
'/usr/share/aclocal/pkg.m4',
'/usr/share/aclocal-1.11/amversion.m4',
'/usr/share/aclocal-1.11/auxdir.m4',
'/usr/share/aclocal-1.11/cond.m4',
'/usr/share/aclocal-1.11/depend.m4',
'/usr/share/aclocal-1.11/depout.m4',
'/usr/share/aclocal-1.11/init.m4',
'/usr/share/aclocal-1.11/install-sh.m4',
'/usr/share/aclocal-1.11/lead-dot.m4',
'/usr/share/aclocal-1.11/make.m4',
'/usr/share/aclocal-1.11/missing.m4',
'/usr/share/aclocal-1.11/mkdirp.m4',
'/usr/share/aclocal-1.11/options.m4',
'/usr/share/aclocal-1.11/runlog.m4',
'/usr/share/aclocal-1.11/sanity.m4',
'/usr/share/aclocal-1.11/silent.m4',
'/usr/share/aclocal-1.11/strip.m4',
'/usr/share/aclocal-1.11/substnot.m4',
'/usr/share/aclocal-1.11/tar.m4',
'configure.ac'
],
{
'AM_ENABLE_STATIC' => 1,
'AC_LIBTOOL_LANG_RC_CONFIG' => 1,
'_LT_AC_SHELL_INIT' => 1,
'AC_DEFUN' => 1,
'AC_PROG_LIBTOOL' => 1,
'_LT_AC_LANG_CXX_CONFIG' => 1,
'AM_PROG_MKDIR_P' => 1,
'AM_AUTOMAKE_VERSION' => 1,
'AM_SUBST_NOTMAKE' => 1,
'AM_MISSING_PROG' => 1,
'AC_LIBTOOL_PROG_LD_HARDCODE_LIBPATH' => 1,
'_LT_AC_LANG_C_CONFIG' => 1,
'AM_PROG_INSTALL_STRIP' => 1,
'_m4_warn' => 1,
'AC_LIBTOOL_OBJDIR' => 1,
'gl_FUNC_ARGZ' => 1,
- 'AM_SANITY_CHECK' => 1,
'LTOBSOLETE_VERSION' => 1,
+ 'AM_SANITY_CHECK' => 1,
'AC_LIBTOOL_LANG_GCJ_CONFIG' => 1,
'AC_LIBTOOL_PROG_COMPILER_PIC' => 1,
'LT_LIB_M' => 1,
'_LT_AC_CHECK_DLFCN' => 1,
'AC_LIBTOOL_SYS_GLOBAL_SYMBOL_PIPE' => 1,
'LTSUGAR_VERSION' => 1,
'_LT_PROG_LTMAIN' => 1,
'LT_SYS_SYMBOL_USCORE' => 1,
'_AM_PROG_TAR' => 1,
'AC_LIBTOOL_GCJ' => 1,
'_LT_WITH_SYSROOT' => 1,
'LT_SYS_DLOPEN_DEPLIBS' => 1,
'LT_FUNC_DLSYM_USCORE' => 1,
- 'AC_LIBTOOL_CONFIG' => 1,
'_LT_AC_LANG_F77' => 1,
- 'AC_LTDL_DLLIB' => 1,
+ 'AC_LIBTOOL_CONFIG' => 1,
'_AM_SUBST_NOTMAKE' => 1,
+ 'AC_LTDL_DLLIB' => 1,
'_AM_AUTOCONF_VERSION' => 1,
'AM_DISABLE_SHARED' => 1,
'_LT_PROG_ECHO_BACKSLASH' => 1,
'_LTDL_SETUP' => 1,
- 'AM_PROG_LIBTOOL' => 1,
'_LT_AC_LANG_CXX' => 1,
- 'AM_PROG_LD' => 1,
- '_LT_AC_FILE_LTDLL_C' => 1,
+ 'AM_PROG_LIBTOOL' => 1,
'AC_LIB_LTDL' => 1,
+ '_LT_AC_FILE_LTDLL_C' => 1,
+ 'AM_PROG_LD' => 1,
'AU_DEFUN' => 1,
'AC_PROG_NM' => 1,
'AC_LIBTOOL_DLOPEN' => 1,
'AC_PROG_LD' => 1,
'AC_PROG_LD_GNU' => 1,
'AC_ENABLE_FAST_INSTALL' => 1,
'AC_LIBTOOL_FC' => 1,
'LTDL_CONVENIENCE' => 1,
'_AM_SET_OPTION' => 1,
'AC_LTDL_PREOPEN' => 1,
'_LT_LINKER_BOILERPLATE' => 1,
'_LT_PREPARE_SED_QUOTE_VARS' => 1,
'AC_LIBTOOL_LANG_CXX_CONFIG' => 1,
'AC_LIBTOOL_PROG_CC_C_O' => 1,
'gl_PREREQ_ARGZ' => 1,
'LT_SUPPORTED_TAG' => 1,
'AM_OUTPUT_DEPENDENCY_COMMANDS' => 1,
'LT_PROG_RC' => 1,
'LT_SYS_MODULE_EXT' => 1,
'AC_DEFUN_ONCE' => 1,
'_LT_AC_LANG_GCJ' => 1,
'AC_LTDL_OBJDIR' => 1,
'_LT_PATH_TOOL_PREFIX' => 1,
'AC_LIBTOOL_RC' => 1,
- '_LT_AC_PROG_ECHO_BACKSLASH' => 1,
- 'AC_DISABLE_FAST_INSTALL' => 1,
'AM_SILENT_RULES' => 1,
- 'include' => 1,
- '_LT_AC_TRY_DLOPEN_SELF' => 1,
+ 'AC_DISABLE_FAST_INSTALL' => 1,
+ '_LT_AC_PROG_ECHO_BACKSLASH' => 1,
'_LT_AC_SYS_LIBPATH_AIX' => 1,
+ '_LT_AC_TRY_DLOPEN_SELF' => 1,
+ 'include' => 1,
'LT_AC_PROG_SED' => 1,
'AM_ENABLE_SHARED' => 1,
'LTDL_INSTALLABLE' => 1,
'_LT_AC_LANG_GCJ_CONFIG' => 1,
'AC_ENABLE_SHARED' => 1,
- 'AC_LIBTOOL_SYS_HARD_LINK_LOCKS' => 1,
- 'AC_ENABLE_STATIC' => 1,
'_LT_REQUIRED_DARWIN_CHECKS' => 1,
+ 'AC_ENABLE_STATIC' => 1,
+ 'AC_LIBTOOL_SYS_HARD_LINK_LOCKS' => 1,
'_LT_AC_TAGVAR' => 1,
'AC_LIBTOOL_LANG_F77_CONFIG' => 1,
'AM_CONDITIONAL' => 1,
'LT_LIB_DLLOAD' => 1,
- 'LTVERSION_VERSION' => 1,
- 'LTDL_INIT' => 1,
- '_LT_PROG_F77' => 1,
'_LT_PROG_CXX' => 1,
- 'm4_include' => 1,
+ '_LT_PROG_F77' => 1,
+ 'LTDL_INIT' => 1,
+ 'LTVERSION_VERSION' => 1,
'AM_PROG_INSTALL_SH' => 1,
+ 'm4_include' => 1,
'AC_PROG_EGREP' => 1,
- 'AC_PATH_MAGIC' => 1,
'_AC_AM_CONFIG_HEADER_HOOK' => 1,
+ 'AC_PATH_MAGIC' => 1,
'AC_LTDL_SYSSEARCHPATH' => 1,
'AM_MAKE_INCLUDE' => 1,
'LT_CMD_MAX_LEN' => 1,
'_LT_AC_TAGCONFIG' => 1,
'm4_pattern_forbid' => 1,
'_LT_LINKER_OPTION' => 1,
'AC_LIBTOOL_COMPILER_OPTION' => 1,
'AC_DISABLE_SHARED' => 1,
'_LT_COMPILER_BOILERPLATE' => 1,
'AC_LIBTOOL_WIN32_DLL' => 1,
'AC_LIBTOOL_SETUP' => 1,
'AC_PROG_LD_RELOAD_FLAG' => 1,
'AC_LTDL_DLSYM_USCORE' => 1,
'AM_MISSING_HAS_RUN' => 1,
'LT_LANG' => 1,
'LT_SYS_DLSEARCH_PATH' => 1,
'LT_CONFIG_LTDL_DIR' => 1,
'AC_LIBTOOL_DLOPEN_SELF' => 1,
'LT_OUTPUT' => 1,
'AC_LIBTOOL_PROG_LD_SHLIBS' => 1,
'_PKG_SHORT_ERRORS_SUPPORTED' => 1,
'AC_WITH_LTDL' => 1,
'AC_LIBTOOL_LINKER_OPTION' => 1,
'PKG_CHECK_EXISTS' => 1,
'LT_AC_PROG_RC' => 1,
'AC_LIBTOOL_CXX' => 1,
'LT_INIT' => 1,
'LT_AC_PROG_GCJ' => 1,
'LT_SYS_DLOPEN_SELF' => 1,
'_LT_AC_PROG_CXXCPP' => 1,
'AM_DEP_TRACK' => 1,
'AM_DISABLE_STATIC' => 1,
'_AC_PROG_LIBTOOL' => 1,
'_AM_IF_OPTION' => 1,
'AC_PATH_TOOL_PREFIX' => 1,
- 'AC_LIBTOOL_F77' => 1,
'm4_pattern_allow' => 1,
+ 'AC_LIBTOOL_F77' => 1,
'AM_SET_LEADING_DOT' => 1,
- 'LT_AC_PROG_EGREP' => 1,
'_LT_PROG_FC' => 1,
+ 'LT_AC_PROG_EGREP' => 1,
'_AM_DEPENDENCIES' => 1,
'AC_LIBTOOL_LANG_C_CONFIG' => 1,
'LTOPTIONS_VERSION' => 1,
'_LT_AC_SYS_COMPILER' => 1,
'AM_PROG_NM' => 1,
'PKG_CHECK_MODULES' => 1,
'AC_LIBLTDL_CONVENIENCE' => 1,
'AC_DEPLIBS_CHECK_METHOD' => 1,
- 'AC_LIBLTDL_INSTALLABLE' => 1,
'AM_SET_CURRENT_AUTOMAKE_VERSION' => 1,
+ 'AC_LIBLTDL_INSTALLABLE' => 1,
'AC_LTDL_ENABLE_INSTALL' => 1,
- 'LT_PROG_GCJ' => 1,
'AC_LIBTOOL_SYS_DYNAMIC_LINKER' => 1,
+ 'LT_PROG_GCJ' => 1,
'AM_INIT_AUTOMAKE' => 1,
'AC_DISABLE_STATIC' => 1,
'LT_PATH_NM' => 1,
'AC_LTDL_SHLIBEXT' => 1,
'_LT_AC_LOCK' => 1,
'_LT_AC_LANG_RC_CONFIG' => 1,
'LT_PROG_GO' => 1,
'LT_SYS_MODULE_PATH' => 1,
- 'LT_WITH_LTDL' => 1,
'AC_LIBTOOL_POSTDEP_PREDEP' => 1,
+ 'LT_WITH_LTDL' => 1,
'AC_LTDL_SHLIBPATH' => 1,
'AM_AUX_DIR_EXPAND' => 1,
- 'AC_LIBTOOL_PROG_COMPILER_NO_RTTI' => 1,
'_LT_AC_LANG_F77_CONFIG' => 1,
- '_LT_COMPILER_OPTION' => 1,
+ 'AC_LIBTOOL_PROG_COMPILER_NO_RTTI' => 1,
'_AM_SET_OPTIONS' => 1,
- 'AM_RUN_LOG' => 1,
+ '_LT_COMPILER_OPTION' => 1,
'_AM_OUTPUT_DEPENDENCY_COMMANDS' => 1,
- 'AC_LIBTOOL_PICMODE' => 1,
- 'AC_LTDL_SYS_DLOPEN_DEPLIBS' => 1,
+ 'AM_RUN_LOG' => 1,
'AC_LIBTOOL_SYS_OLD_ARCHIVE' => 1,
- 'AC_CHECK_LIBM' => 1,
+ 'AC_LTDL_SYS_DLOPEN_DEPLIBS' => 1,
+ 'AC_LIBTOOL_PICMODE' => 1,
'LT_PATH_LD' => 1,
+ 'AC_CHECK_LIBM' => 1,
'AC_LIBTOOL_SYS_LIB_STRIP' => 1,
'_AM_MANGLE_OPTION' => 1,
- 'AC_LIBTOOL_SYS_MAX_CMD_LEN' => 1,
'AC_LTDL_SYMBOL_USCORE' => 1,
+ 'AC_LIBTOOL_SYS_MAX_CMD_LEN' => 1,
'AM_SET_DEPDIR' => 1,
- 'PKG_PROG_PKG_CONFIG' => 1,
'_LT_CC_BASENAME' => 1,
+ 'PKG_PROG_PKG_CONFIG' => 1,
'_LT_LIBOBJ' => 1
}
], 'Autom4te::Request' ),
bless( [
'3',
1,
[
'/usr/share/autoconf'
],
[
'/usr/share/autoconf/autoconf/autoconf.m4f',
'/usr/share/aclocal/argz.m4',
'/usr/share/aclocal/ltdl.m4',
'/usr/share/aclocal/pkg.m4',
'/usr/share/aclocal-1.11/amversion.m4',
'/usr/share/aclocal-1.11/auxdir.m4',
'/usr/share/aclocal-1.11/cond.m4',
'/usr/share/aclocal-1.11/depend.m4',
'/usr/share/aclocal-1.11/depout.m4',
'/usr/share/aclocal-1.11/init.m4',
'/usr/share/aclocal-1.11/install-sh.m4',
'/usr/share/aclocal-1.11/lead-dot.m4',
'/usr/share/aclocal-1.11/make.m4',
'/usr/share/aclocal-1.11/missing.m4',
'/usr/share/aclocal-1.11/mkdirp.m4',
'/usr/share/aclocal-1.11/options.m4',
'/usr/share/aclocal-1.11/runlog.m4',
'/usr/share/aclocal-1.11/sanity.m4',
'/usr/share/aclocal-1.11/silent.m4',
'/usr/share/aclocal-1.11/strip.m4',
'/usr/share/aclocal-1.11/substnot.m4',
'/usr/share/aclocal-1.11/tar.m4',
'm4/libtool.m4',
'm4/ltoptions.m4',
'm4/ltsugar.m4',
'm4/ltversion.m4',
'm4/lt~obsolete.m4',
'configure.ac'
],
{
'AM_ENABLE_STATIC' => 1,
'AC_LIBTOOL_LANG_RC_CONFIG' => 1,
'_LT_AC_SHELL_INIT' => 1,
'AC_DEFUN' => 1,
'AC_PROG_LIBTOOL' => 1,
'_LT_AC_LANG_CXX_CONFIG' => 1,
'AM_PROG_MKDIR_P' => 1,
'AM_AUTOMAKE_VERSION' => 1,
'AM_SUBST_NOTMAKE' => 1,
'AM_MISSING_PROG' => 1,
'AC_LIBTOOL_PROG_LD_HARDCODE_LIBPATH' => 1,
'_LT_AC_LANG_C_CONFIG' => 1,
'AM_PROG_INSTALL_STRIP' => 1,
'_m4_warn' => 1,
'AC_LIBTOOL_OBJDIR' => 1,
'gl_FUNC_ARGZ' => 1,
- 'AM_SANITY_CHECK' => 1,
'LTOBSOLETE_VERSION' => 1,
+ 'AM_SANITY_CHECK' => 1,
'AC_LIBTOOL_LANG_GCJ_CONFIG' => 1,
'AC_LIBTOOL_PROG_COMPILER_PIC' => 1,
'LT_LIB_M' => 1,
'_LT_AC_CHECK_DLFCN' => 1,
'AC_LIBTOOL_SYS_GLOBAL_SYMBOL_PIPE' => 1,
'LTSUGAR_VERSION' => 1,
'_LT_PROG_LTMAIN' => 1,
'LT_SYS_SYMBOL_USCORE' => 1,
'_AM_PROG_TAR' => 1,
'AC_LIBTOOL_GCJ' => 1,
'_LT_WITH_SYSROOT' => 1,
'LT_SYS_DLOPEN_DEPLIBS' => 1,
'LT_FUNC_DLSYM_USCORE' => 1,
- '_LT_AC_LANG_F77' => 1,
'AC_LIBTOOL_CONFIG' => 1,
- 'AC_LTDL_DLLIB' => 1,
+ '_LT_AC_LANG_F77' => 1,
'_AM_SUBST_NOTMAKE' => 1,
+ 'AC_LTDL_DLLIB' => 1,
'_AM_AUTOCONF_VERSION' => 1,
'AM_DISABLE_SHARED' => 1,
'_LT_PROG_ECHO_BACKSLASH' => 1,
'_LTDL_SETUP' => 1,
- 'AM_PROG_LIBTOOL' => 1,
'_LT_AC_LANG_CXX' => 1,
- 'AM_PROG_LD' => 1,
- '_LT_AC_FILE_LTDLL_C' => 1,
+ 'AM_PROG_LIBTOOL' => 1,
'AC_LIB_LTDL' => 1,
+ '_LT_AC_FILE_LTDLL_C' => 1,
+ 'AM_PROG_LD' => 1,
'AU_DEFUN' => 1,
'AC_PROG_NM' => 1,
'AC_LIBTOOL_DLOPEN' => 1,
'AC_PROG_LD' => 1,
'AC_PROG_LD_GNU' => 1,
'AC_ENABLE_FAST_INSTALL' => 1,
'AC_LIBTOOL_FC' => 1,
'LTDL_CONVENIENCE' => 1,
'_AM_SET_OPTION' => 1,
'AC_LTDL_PREOPEN' => 1,
'_LT_LINKER_BOILERPLATE' => 1,
'_LT_PREPARE_SED_QUOTE_VARS' => 1,
'AC_LIBTOOL_LANG_CXX_CONFIG' => 1,
'AC_LIBTOOL_PROG_CC_C_O' => 1,
'gl_PREREQ_ARGZ' => 1,
'LT_SUPPORTED_TAG' => 1,
'AM_OUTPUT_DEPENDENCY_COMMANDS' => 1,
'LT_PROG_RC' => 1,
'LT_SYS_MODULE_EXT' => 1,
'AC_DEFUN_ONCE' => 1,
'_LT_AC_LANG_GCJ' => 1,
'AC_LTDL_OBJDIR' => 1,
'_LT_PATH_TOOL_PREFIX' => 1,
'AC_LIBTOOL_RC' => 1,
- '_LT_AC_PROG_ECHO_BACKSLASH' => 1,
- 'AC_DISABLE_FAST_INSTALL' => 1,
'AM_SILENT_RULES' => 1,
- 'include' => 1,
- '_LT_AC_TRY_DLOPEN_SELF' => 1,
+ 'AC_DISABLE_FAST_INSTALL' => 1,
+ '_LT_AC_PROG_ECHO_BACKSLASH' => 1,
'_LT_AC_SYS_LIBPATH_AIX' => 1,
+ '_LT_AC_TRY_DLOPEN_SELF' => 1,
+ 'include' => 1,
'LT_AC_PROG_SED' => 1,
'AM_ENABLE_SHARED' => 1,
'LTDL_INSTALLABLE' => 1,
'_LT_AC_LANG_GCJ_CONFIG' => 1,
'AC_ENABLE_SHARED' => 1,
- '_LT_REQUIRED_DARWIN_CHECKS' => 1,
- 'AC_LIBTOOL_SYS_HARD_LINK_LOCKS' => 1,
'AC_ENABLE_STATIC' => 1,
+ 'AC_LIBTOOL_SYS_HARD_LINK_LOCKS' => 1,
+ '_LT_REQUIRED_DARWIN_CHECKS' => 1,
'_LT_AC_TAGVAR' => 1,
'AC_LIBTOOL_LANG_F77_CONFIG' => 1,
'AM_CONDITIONAL' => 1,
'LT_LIB_DLLOAD' => 1,
- '_LT_PROG_CXX' => 1,
- '_LT_PROG_F77' => 1,
- 'LTVERSION_VERSION' => 1,
'LTDL_INIT' => 1,
- 'm4_include' => 1,
+ 'LTVERSION_VERSION' => 1,
+ '_LT_PROG_F77' => 1,
+ '_LT_PROG_CXX' => 1,
'AM_PROG_INSTALL_SH' => 1,
+ 'm4_include' => 1,
'AC_PROG_EGREP' => 1,
- 'AC_PATH_MAGIC' => 1,
'_AC_AM_CONFIG_HEADER_HOOK' => 1,
+ 'AC_PATH_MAGIC' => 1,
'AC_LTDL_SYSSEARCHPATH' => 1,
'AM_MAKE_INCLUDE' => 1,
'LT_CMD_MAX_LEN' => 1,
'_LT_AC_TAGCONFIG' => 1,
'm4_pattern_forbid' => 1,
'_LT_LINKER_OPTION' => 1,
'AC_LIBTOOL_COMPILER_OPTION' => 1,
'AC_DISABLE_SHARED' => 1,
'_LT_COMPILER_BOILERPLATE' => 1,
'AC_LIBTOOL_WIN32_DLL' => 1,
'AC_LIBTOOL_SETUP' => 1,
'AC_PROG_LD_RELOAD_FLAG' => 1,
'AC_LTDL_DLSYM_USCORE' => 1,
'AM_MISSING_HAS_RUN' => 1,
'LT_LANG' => 1,
'LT_SYS_DLSEARCH_PATH' => 1,
'LT_CONFIG_LTDL_DIR' => 1,
'AC_LIBTOOL_DLOPEN_SELF' => 1,
'LT_OUTPUT' => 1,
'AC_LIBTOOL_PROG_LD_SHLIBS' => 1,
'_PKG_SHORT_ERRORS_SUPPORTED' => 1,
'AC_WITH_LTDL' => 1,
'AC_LIBTOOL_LINKER_OPTION' => 1,
'PKG_CHECK_EXISTS' => 1,
'LT_AC_PROG_RC' => 1,
'AC_LIBTOOL_CXX' => 1,
'LT_INIT' => 1,
'LT_AC_PROG_GCJ' => 1,
'LT_SYS_DLOPEN_SELF' => 1,
'_LT_AC_PROG_CXXCPP' => 1,
'AM_DEP_TRACK' => 1,
'AM_DISABLE_STATIC' => 1,
'_AC_PROG_LIBTOOL' => 1,
'_AM_IF_OPTION' => 1,
'AC_PATH_TOOL_PREFIX' => 1,
- 'm4_pattern_allow' => 1,
'AC_LIBTOOL_F77' => 1,
+ 'm4_pattern_allow' => 1,
'AM_SET_LEADING_DOT' => 1,
- '_LT_PROG_FC' => 1,
'LT_AC_PROG_EGREP' => 1,
+ '_LT_PROG_FC' => 1,
'_AM_DEPENDENCIES' => 1,
'AC_LIBTOOL_LANG_C_CONFIG' => 1,
'LTOPTIONS_VERSION' => 1,
'_LT_AC_SYS_COMPILER' => 1,
'AM_PROG_NM' => 1,
'PKG_CHECK_MODULES' => 1,
'AC_LIBLTDL_CONVENIENCE' => 1,
'AC_DEPLIBS_CHECK_METHOD' => 1,
- 'AC_LIBLTDL_INSTALLABLE' => 1,
'AM_SET_CURRENT_AUTOMAKE_VERSION' => 1,
+ 'AC_LIBLTDL_INSTALLABLE' => 1,
'AC_LTDL_ENABLE_INSTALL' => 1,
- 'LT_PROG_GCJ' => 1,
'AC_LIBTOOL_SYS_DYNAMIC_LINKER' => 1,
+ 'LT_PROG_GCJ' => 1,
'AM_INIT_AUTOMAKE' => 1,
'AC_DISABLE_STATIC' => 1,
'LT_PATH_NM' => 1,
'AC_LTDL_SHLIBEXT' => 1,
'_LT_AC_LOCK' => 1,
'_LT_AC_LANG_RC_CONFIG' => 1,
'LT_PROG_GO' => 1,
'LT_SYS_MODULE_PATH' => 1,
- 'LT_WITH_LTDL' => 1,
'AC_LIBTOOL_POSTDEP_PREDEP' => 1,
+ 'LT_WITH_LTDL' => 1,
'AC_LTDL_SHLIBPATH' => 1,
'AM_AUX_DIR_EXPAND' => 1,
- 'AC_LIBTOOL_PROG_COMPILER_NO_RTTI' => 1,
'_LT_AC_LANG_F77_CONFIG' => 1,
- '_LT_COMPILER_OPTION' => 1,
+ 'AC_LIBTOOL_PROG_COMPILER_NO_RTTI' => 1,
'_AM_SET_OPTIONS' => 1,
- 'AM_RUN_LOG' => 1,
+ '_LT_COMPILER_OPTION' => 1,
'_AM_OUTPUT_DEPENDENCY_COMMANDS' => 1,
- 'AC_LIBTOOL_PICMODE' => 1,
- 'AC_LTDL_SYS_DLOPEN_DEPLIBS' => 1,
+ 'AM_RUN_LOG' => 1,
'AC_LIBTOOL_SYS_OLD_ARCHIVE' => 1,
- 'AC_CHECK_LIBM' => 1,
+ 'AC_LTDL_SYS_DLOPEN_DEPLIBS' => 1,
+ 'AC_LIBTOOL_PICMODE' => 1,
'LT_PATH_LD' => 1,
+ 'AC_CHECK_LIBM' => 1,
'AC_LIBTOOL_SYS_LIB_STRIP' => 1,
'_AM_MANGLE_OPTION' => 1,
- 'AC_LIBTOOL_SYS_MAX_CMD_LEN' => 1,
'AC_LTDL_SYMBOL_USCORE' => 1,
+ 'AC_LIBTOOL_SYS_MAX_CMD_LEN' => 1,
'AM_SET_DEPDIR' => 1,
- 'PKG_PROG_PKG_CONFIG' => 1,
'_LT_CC_BASENAME' => 1,
+ 'PKG_PROG_PKG_CONFIG' => 1,
'_LT_LIBOBJ' => 1
}
], 'Autom4te::Request' ),
bless( [
'4',
1,
[
'/usr/share/autoconf'
],
[
'/usr/share/autoconf/autoconf/autoconf.m4f',
'/usr/share/aclocal/argz.m4',
'/usr/share/aclocal/ltdl.m4',
'/usr/share/aclocal/lt~obsolete.m4',
'/usr/share/aclocal/pkg.m4',
'/usr/share/aclocal-1.11/amversion.m4',
'/usr/share/aclocal-1.11/auxdir.m4',
'/usr/share/aclocal-1.11/cond.m4',
'/usr/share/aclocal-1.11/depend.m4',
'/usr/share/aclocal-1.11/depout.m4',
'/usr/share/aclocal-1.11/init.m4',
'/usr/share/aclocal-1.11/install-sh.m4',
'/usr/share/aclocal-1.11/lead-dot.m4',
'/usr/share/aclocal-1.11/make.m4',
'/usr/share/aclocal-1.11/missing.m4',
'/usr/share/aclocal-1.11/mkdirp.m4',
'/usr/share/aclocal-1.11/options.m4',
'/usr/share/aclocal-1.11/runlog.m4',
'/usr/share/aclocal-1.11/sanity.m4',
'/usr/share/aclocal-1.11/silent.m4',
'/usr/share/aclocal-1.11/strip.m4',
'/usr/share/aclocal-1.11/substnot.m4',
'/usr/share/aclocal-1.11/tar.m4',
'm4/libtool.m4',
'm4/ltoptions.m4',
'm4/ltsugar.m4',
'm4/ltversion.m4',
'm4/lt~obsolete.m4',
'configure.ac'
],
{
'AM_ENABLE_STATIC' => 1,
'AC_LIBTOOL_LANG_RC_CONFIG' => 1,
'_LT_AC_SHELL_INIT' => 1,
'AC_DEFUN' => 1,
'AC_PROG_LIBTOOL' => 1,
'_LT_AC_LANG_CXX_CONFIG' => 1,
'AM_PROG_MKDIR_P' => 1,
'AM_AUTOMAKE_VERSION' => 1,
'AM_SUBST_NOTMAKE' => 1,
'AM_MISSING_PROG' => 1,
'AC_LIBTOOL_PROG_LD_HARDCODE_LIBPATH' => 1,
'_LT_AC_LANG_C_CONFIG' => 1,
'AM_PROG_INSTALL_STRIP' => 1,
'_m4_warn' => 1,
'AC_LIBTOOL_OBJDIR' => 1,
'gl_FUNC_ARGZ' => 1,
- 'AM_SANITY_CHECK' => 1,
'LTOBSOLETE_VERSION' => 1,
+ 'AM_SANITY_CHECK' => 1,
'AC_LIBTOOL_LANG_GCJ_CONFIG' => 1,
'AC_LIBTOOL_PROG_COMPILER_PIC' => 1,
'LT_LIB_M' => 1,
'_LT_AC_CHECK_DLFCN' => 1,
'AC_LIBTOOL_SYS_GLOBAL_SYMBOL_PIPE' => 1,
'LTSUGAR_VERSION' => 1,
'_LT_PROG_LTMAIN' => 1,
'LT_SYS_SYMBOL_USCORE' => 1,
'_AM_PROG_TAR' => 1,
'AC_LIBTOOL_GCJ' => 1,
'LT_SYS_DLOPEN_DEPLIBS' => 1,
'LT_FUNC_DLSYM_USCORE' => 1,
'_LT_AC_LANG_F77' => 1,
'AC_LIBTOOL_CONFIG' => 1,
- 'AC_LTDL_DLLIB' => 1,
'_AM_SUBST_NOTMAKE' => 1,
+ 'AC_LTDL_DLLIB' => 1,
'_AM_AUTOCONF_VERSION' => 1,
'AM_DISABLE_SHARED' => 1,
'_LT_PROG_ECHO_BACKSLASH' => 1,
'_LTDL_SETUP' => 1,
- 'AM_PROG_LIBTOOL' => 1,
'_LT_AC_LANG_CXX' => 1,
- 'AM_PROG_LD' => 1,
- '_LT_AC_FILE_LTDLL_C' => 1,
+ 'AM_PROG_LIBTOOL' => 1,
'AC_LIB_LTDL' => 1,
+ '_LT_AC_FILE_LTDLL_C' => 1,
+ 'AM_PROG_LD' => 1,
'AU_DEFUN' => 1,
'AC_PROG_NM' => 1,
'AC_LIBTOOL_DLOPEN' => 1,
'AC_PROG_LD' => 1,
'AC_PROG_LD_GNU' => 1,
'AC_ENABLE_FAST_INSTALL' => 1,
'AC_LIBTOOL_FC' => 1,
'LTDL_CONVENIENCE' => 1,
'_AM_SET_OPTION' => 1,
'AC_LTDL_PREOPEN' => 1,
'_LT_LINKER_BOILERPLATE' => 1,
'_LT_PREPARE_SED_QUOTE_VARS' => 1,
'AC_LIBTOOL_LANG_CXX_CONFIG' => 1,
'AC_LIBTOOL_PROG_CC_C_O' => 1,
'gl_PREREQ_ARGZ' => 1,
'LT_SUPPORTED_TAG' => 1,
'AM_OUTPUT_DEPENDENCY_COMMANDS' => 1,
'LT_PROG_RC' => 1,
'LT_SYS_MODULE_EXT' => 1,
'AC_DEFUN_ONCE' => 1,
'_LT_AC_LANG_GCJ' => 1,
'AC_LTDL_OBJDIR' => 1,
'_LT_PATH_TOOL_PREFIX' => 1,
'AC_LIBTOOL_RC' => 1,
- '_LT_AC_PROG_ECHO_BACKSLASH' => 1,
- 'AC_DISABLE_FAST_INSTALL' => 1,
'AM_SILENT_RULES' => 1,
- 'include' => 1,
- '_LT_AC_TRY_DLOPEN_SELF' => 1,
+ 'AC_DISABLE_FAST_INSTALL' => 1,
+ '_LT_AC_PROG_ECHO_BACKSLASH' => 1,
'_LT_AC_SYS_LIBPATH_AIX' => 1,
+ '_LT_AC_TRY_DLOPEN_SELF' => 1,
+ 'include' => 1,
'LT_AC_PROG_SED' => 1,
'AM_ENABLE_SHARED' => 1,
'LTDL_INSTALLABLE' => 1,
'_LT_AC_LANG_GCJ_CONFIG' => 1,
'AC_ENABLE_SHARED' => 1,
- '_LT_REQUIRED_DARWIN_CHECKS' => 1,
- 'AC_LIBTOOL_SYS_HARD_LINK_LOCKS' => 1,
'AC_ENABLE_STATIC' => 1,
+ 'AC_LIBTOOL_SYS_HARD_LINK_LOCKS' => 1,
+ '_LT_REQUIRED_DARWIN_CHECKS' => 1,
'_LT_AC_TAGVAR' => 1,
'AC_LIBTOOL_LANG_F77_CONFIG' => 1,
'AM_CONDITIONAL' => 1,
'LT_LIB_DLLOAD' => 1,
- '_LT_PROG_CXX' => 1,
- '_LT_PROG_F77' => 1,
- 'LTVERSION_VERSION' => 1,
'LTDL_INIT' => 1,
- 'm4_include' => 1,
+ 'LTVERSION_VERSION' => 1,
+ '_LT_PROG_F77' => 1,
+ '_LT_PROG_CXX' => 1,
'AM_PROG_INSTALL_SH' => 1,
+ 'm4_include' => 1,
'AC_PROG_EGREP' => 1,
- 'AC_PATH_MAGIC' => 1,
'_AC_AM_CONFIG_HEADER_HOOK' => 1,
+ 'AC_PATH_MAGIC' => 1,
'AC_LTDL_SYSSEARCHPATH' => 1,
'AM_MAKE_INCLUDE' => 1,
'LT_CMD_MAX_LEN' => 1,
'_LT_AC_TAGCONFIG' => 1,
'm4_pattern_forbid' => 1,
'_LT_LINKER_OPTION' => 1,
'AC_LIBTOOL_COMPILER_OPTION' => 1,
'AC_DISABLE_SHARED' => 1,
'_LT_COMPILER_BOILERPLATE' => 1,
'AC_LIBTOOL_WIN32_DLL' => 1,
'AC_LIBTOOL_SETUP' => 1,
'AC_PROG_LD_RELOAD_FLAG' => 1,
'AC_LTDL_DLSYM_USCORE' => 1,
'AM_MISSING_HAS_RUN' => 1,
'LT_LANG' => 1,
'LT_SYS_DLSEARCH_PATH' => 1,
'LT_CONFIG_LTDL_DIR' => 1,
'AC_LIBTOOL_DLOPEN_SELF' => 1,
'LT_OUTPUT' => 1,
'AC_LIBTOOL_PROG_LD_SHLIBS' => 1,
'_PKG_SHORT_ERRORS_SUPPORTED' => 1,
'AC_WITH_LTDL' => 1,
'AC_LIBTOOL_LINKER_OPTION' => 1,
'PKG_CHECK_EXISTS' => 1,
'LT_AC_PROG_RC' => 1,
'AC_LIBTOOL_CXX' => 1,
'LT_INIT' => 1,
'LT_AC_PROG_GCJ' => 1,
'LT_SYS_DLOPEN_SELF' => 1,
'_LT_AC_PROG_CXXCPP' => 1,
'AM_DEP_TRACK' => 1,
'AM_DISABLE_STATIC' => 1,
'_AC_PROG_LIBTOOL' => 1,
'_AM_IF_OPTION' => 1,
'AC_PATH_TOOL_PREFIX' => 1,
- 'm4_pattern_allow' => 1,
'AC_LIBTOOL_F77' => 1,
+ 'm4_pattern_allow' => 1,
'AM_SET_LEADING_DOT' => 1,
- '_LT_PROG_FC' => 1,
'LT_AC_PROG_EGREP' => 1,
+ '_LT_PROG_FC' => 1,
'_AM_DEPENDENCIES' => 1,
'AC_LIBTOOL_LANG_C_CONFIG' => 1,
'LTOPTIONS_VERSION' => 1,
'_LT_AC_SYS_COMPILER' => 1,
'AM_PROG_NM' => 1,
'PKG_CHECK_MODULES' => 1,
'AC_LIBLTDL_CONVENIENCE' => 1,
'AC_DEPLIBS_CHECK_METHOD' => 1,
- 'AC_LIBLTDL_INSTALLABLE' => 1,
'AM_SET_CURRENT_AUTOMAKE_VERSION' => 1,
+ 'AC_LIBLTDL_INSTALLABLE' => 1,
'AC_LTDL_ENABLE_INSTALL' => 1,
- 'LT_PROG_GCJ' => 1,
'AC_LIBTOOL_SYS_DYNAMIC_LINKER' => 1,
+ 'LT_PROG_GCJ' => 1,
'AM_INIT_AUTOMAKE' => 1,
'AC_DISABLE_STATIC' => 1,
'LT_PATH_NM' => 1,
'AC_LTDL_SHLIBEXT' => 1,
'_LT_AC_LOCK' => 1,
'_LT_AC_LANG_RC_CONFIG' => 1,
'LT_SYS_MODULE_PATH' => 1,
- 'LT_WITH_LTDL' => 1,
'AC_LIBTOOL_POSTDEP_PREDEP' => 1,
+ 'LT_WITH_LTDL' => 1,
'AC_LTDL_SHLIBPATH' => 1,
'AM_AUX_DIR_EXPAND' => 1,
- 'AC_LIBTOOL_PROG_COMPILER_NO_RTTI' => 1,
'_LT_AC_LANG_F77_CONFIG' => 1,
- '_LT_COMPILER_OPTION' => 1,
+ 'AC_LIBTOOL_PROG_COMPILER_NO_RTTI' => 1,
'_AM_SET_OPTIONS' => 1,
- 'AM_RUN_LOG' => 1,
+ '_LT_COMPILER_OPTION' => 1,
'_AM_OUTPUT_DEPENDENCY_COMMANDS' => 1,
- 'AC_LIBTOOL_PICMODE' => 1,
- 'AC_LTDL_SYS_DLOPEN_DEPLIBS' => 1,
+ 'AM_RUN_LOG' => 1,
'AC_LIBTOOL_SYS_OLD_ARCHIVE' => 1,
- 'AC_CHECK_LIBM' => 1,
+ 'AC_LTDL_SYS_DLOPEN_DEPLIBS' => 1,
+ 'AC_LIBTOOL_PICMODE' => 1,
'LT_PATH_LD' => 1,
+ 'AC_CHECK_LIBM' => 1,
'AC_LIBTOOL_SYS_LIB_STRIP' => 1,
'_AM_MANGLE_OPTION' => 1,
- 'AC_LIBTOOL_SYS_MAX_CMD_LEN' => 1,
'AC_LTDL_SYMBOL_USCORE' => 1,
+ 'AC_LIBTOOL_SYS_MAX_CMD_LEN' => 1,
'AM_SET_DEPDIR' => 1,
- 'PKG_PROG_PKG_CONFIG' => 1,
'_LT_CC_BASENAME' => 1,
+ 'PKG_PROG_PKG_CONFIG' => 1,
'_LT_LIBOBJ' => 1
}
], 'Autom4te::Request' ),
bless( [
'5',
1,
[
'/usr/share/autoconf'
],
[
'/usr/share/autoconf/autoconf/autoconf.m4f',
'/usr/share/aclocal/argz.m4',
'/usr/share/aclocal/libtool.m4',
'/usr/share/aclocal/ltdl.m4',
'/usr/share/aclocal/ltoptions.m4',
'/usr/share/aclocal/ltsugar.m4',
'/usr/share/aclocal/ltversion.m4',
'/usr/share/aclocal/lt~obsolete.m4',
'/usr/share/aclocal/pkg.m4',
'/usr/share/aclocal-1.12/amversion.m4',
'/usr/share/aclocal-1.12/auxdir.m4',
'/usr/share/aclocal-1.12/cond.m4',
'/usr/share/aclocal-1.12/depend.m4',
'/usr/share/aclocal-1.12/depout.m4',
'/usr/share/aclocal-1.12/init.m4',
'/usr/share/aclocal-1.12/install-sh.m4',
'/usr/share/aclocal-1.12/lead-dot.m4',
'/usr/share/aclocal-1.12/make.m4',
'/usr/share/aclocal-1.12/missing.m4',
'/usr/share/aclocal-1.12/options.m4',
'/usr/share/aclocal-1.12/runlog.m4',
'/usr/share/aclocal-1.12/sanity.m4',
'/usr/share/aclocal-1.12/silent.m4',
'/usr/share/aclocal-1.12/strip.m4',
'/usr/share/aclocal-1.12/substnot.m4',
'/usr/share/aclocal-1.12/tar.m4',
'configure.ac'
],
{
'AM_ENABLE_STATIC' => 1,
'AC_LIBTOOL_LANG_RC_CONFIG' => 1,
'_LT_AC_SHELL_INIT' => 1,
'AC_DEFUN' => 1,
'AC_PROG_LIBTOOL' => 1,
'_LT_AC_LANG_CXX_CONFIG' => 1,
'AM_AUTOMAKE_VERSION' => 1,
'AM_SUBST_NOTMAKE' => 1,
'AM_MISSING_PROG' => 1,
'AC_LIBTOOL_PROG_LD_HARDCODE_LIBPATH' => 1,
'_LT_AC_LANG_C_CONFIG' => 1,
'AM_PROG_INSTALL_STRIP' => 1,
'_m4_warn' => 1,
'AC_LIBTOOL_OBJDIR' => 1,
'gl_FUNC_ARGZ' => 1,
- 'AM_SANITY_CHECK' => 1,
'LTOBSOLETE_VERSION' => 1,
+ 'AM_SANITY_CHECK' => 1,
'AC_LIBTOOL_LANG_GCJ_CONFIG' => 1,
'AC_LIBTOOL_PROG_COMPILER_PIC' => 1,
'LT_LIB_M' => 1,
'_LT_AC_CHECK_DLFCN' => 1,
'AC_LIBTOOL_SYS_GLOBAL_SYMBOL_PIPE' => 1,
'LTSUGAR_VERSION' => 1,
'_LT_PROG_LTMAIN' => 1,
'LT_SYS_SYMBOL_USCORE' => 1,
'_AM_PROG_TAR' => 1,
'AC_LIBTOOL_GCJ' => 1,
'_LT_WITH_SYSROOT' => 1,
'LT_SYS_DLOPEN_DEPLIBS' => 1,
'LT_FUNC_DLSYM_USCORE' => 1,
'_LT_AC_LANG_F77' => 1,
'AC_LIBTOOL_CONFIG' => 1,
- '_AM_SUBST_NOTMAKE' => 1,
'AC_LTDL_DLLIB' => 1,
+ '_AM_SUBST_NOTMAKE' => 1,
'_AM_AUTOCONF_VERSION' => 1,
'AM_DISABLE_SHARED' => 1,
'_LT_PROG_ECHO_BACKSLASH' => 1,
'_LTDL_SETUP' => 1,
- '_LT_AC_LANG_CXX' => 1,
'AM_PROG_LIBTOOL' => 1,
- 'AC_LIB_LTDL' => 1,
- '_LT_AC_FILE_LTDLL_C' => 1,
+ '_LT_AC_LANG_CXX' => 1,
'AM_PROG_LD' => 1,
+ '_LT_AC_FILE_LTDLL_C' => 1,
+ 'AC_LIB_LTDL' => 1,
'AU_DEFUN' => 1,
'AC_PROG_NM' => 1,
'AC_LIBTOOL_DLOPEN' => 1,
'AC_PROG_LD' => 1,
'AC_PROG_LD_GNU' => 1,
'AC_ENABLE_FAST_INSTALL' => 1,
'AC_LIBTOOL_FC' => 1,
'LTDL_CONVENIENCE' => 1,
'_AM_SET_OPTION' => 1,
'AC_LTDL_PREOPEN' => 1,
'_LT_LINKER_BOILERPLATE' => 1,
'_LT_PREPARE_SED_QUOTE_VARS' => 1,
'AC_LIBTOOL_LANG_CXX_CONFIG' => 1,
'AC_LIBTOOL_PROG_CC_C_O' => 1,
'gl_PREREQ_ARGZ' => 1,
'LT_SUPPORTED_TAG' => 1,
'AM_OUTPUT_DEPENDENCY_COMMANDS' => 1,
'LT_PROG_RC' => 1,
'LT_SYS_MODULE_EXT' => 1,
'AC_DEFUN_ONCE' => 1,
'_LT_AC_LANG_GCJ' => 1,
'AC_LTDL_OBJDIR' => 1,
'_LT_PATH_TOOL_PREFIX' => 1,
'AC_LIBTOOL_RC' => 1,
- 'AM_SILENT_RULES' => 1,
- 'AC_DISABLE_FAST_INSTALL' => 1,
'_LT_AC_PROG_ECHO_BACKSLASH' => 1,
- '_LT_AC_SYS_LIBPATH_AIX' => 1,
- '_LT_AC_TRY_DLOPEN_SELF' => 1,
+ 'AC_DISABLE_FAST_INSTALL' => 1,
+ 'AM_SILENT_RULES' => 1,
'include' => 1,
+ '_LT_AC_TRY_DLOPEN_SELF' => 1,
+ '_LT_AC_SYS_LIBPATH_AIX' => 1,
'LT_AC_PROG_SED' => 1,
'AM_ENABLE_SHARED' => 1,
'LTDL_INSTALLABLE' => 1,
'_LT_AC_LANG_GCJ_CONFIG' => 1,
'AC_ENABLE_SHARED' => 1,
- 'AC_ENABLE_STATIC' => 1,
- 'AC_LIBTOOL_SYS_HARD_LINK_LOCKS' => 1,
'_LT_REQUIRED_DARWIN_CHECKS' => 1,
+ 'AC_LIBTOOL_SYS_HARD_LINK_LOCKS' => 1,
+ 'AC_ENABLE_STATIC' => 1,
'_LT_AC_TAGVAR' => 1,
'AC_LIBTOOL_LANG_F77_CONFIG' => 1,
'AM_CONDITIONAL' => 1,
'LT_LIB_DLLOAD' => 1,
- 'LTDL_INIT' => 1,
- '_LT_PROG_F77' => 1,
- '_LT_PROG_CXX' => 1,
'LTVERSION_VERSION' => 1,
- 'AM_PROG_INSTALL_SH' => 1,
+ '_LT_PROG_CXX' => 1,
+ '_LT_PROG_F77' => 1,
+ 'LTDL_INIT' => 1,
'm4_include' => 1,
+ 'AM_PROG_INSTALL_SH' => 1,
'AC_PROG_EGREP' => 1,
- '_AC_AM_CONFIG_HEADER_HOOK' => 1,
'AC_PATH_MAGIC' => 1,
+ '_AC_AM_CONFIG_HEADER_HOOK' => 1,
'AC_LTDL_SYSSEARCHPATH' => 1,
'AM_MAKE_INCLUDE' => 1,
'LT_CMD_MAX_LEN' => 1,
'_LT_AC_TAGCONFIG' => 1,
'm4_pattern_forbid' => 1,
'_LT_LINKER_OPTION' => 1,
'AC_LIBTOOL_COMPILER_OPTION' => 1,
'PKG_INSTALLDIR' => 1,
'AC_DISABLE_SHARED' => 1,
'_LT_COMPILER_BOILERPLATE' => 1,
'AC_LIBTOOL_WIN32_DLL' => 1,
'AC_LIBTOOL_SETUP' => 1,
'AC_PROG_LD_RELOAD_FLAG' => 1,
'AC_LTDL_DLSYM_USCORE' => 1,
'AM_MISSING_HAS_RUN' => 1,
'LT_LANG' => 1,
'LT_SYS_DLSEARCH_PATH' => 1,
'LT_CONFIG_LTDL_DIR' => 1,
'AC_LIBTOOL_DLOPEN_SELF' => 1,
'LT_OUTPUT' => 1,
'AC_LIBTOOL_PROG_LD_SHLIBS' => 1,
'_PKG_SHORT_ERRORS_SUPPORTED' => 1,
'AC_WITH_LTDL' => 1,
'AC_LIBTOOL_LINKER_OPTION' => 1,
'PKG_CHECK_EXISTS' => 1,
'LT_AC_PROG_RC' => 1,
'AC_LIBTOOL_CXX' => 1,
'LT_INIT' => 1,
'LT_AC_PROG_GCJ' => 1,
'LT_SYS_DLOPEN_SELF' => 1,
'_LT_AC_PROG_CXXCPP' => 1,
'AM_DEP_TRACK' => 1,
'AM_DISABLE_STATIC' => 1,
'_AC_PROG_LIBTOOL' => 1,
'_AM_IF_OPTION' => 1,
- 'PKG_NOARCH_INSTALLDIR' => 1,
'AC_PATH_TOOL_PREFIX' => 1,
- 'AC_LIBTOOL_F77' => 1,
+ 'PKG_NOARCH_INSTALLDIR' => 1,
'm4_pattern_allow' => 1,
+ 'AC_LIBTOOL_F77' => 1,
'AM_SET_LEADING_DOT' => 1,
- 'LT_AC_PROG_EGREP' => 1,
'_LT_PROG_FC' => 1,
+ 'LT_AC_PROG_EGREP' => 1,
'_AM_DEPENDENCIES' => 1,
'AC_LIBTOOL_LANG_C_CONFIG' => 1,
'LTOPTIONS_VERSION' => 1,
'_LT_AC_SYS_COMPILER' => 1,
'AM_PROG_NM' => 1,
'PKG_CHECK_MODULES' => 1,
'AC_LIBLTDL_CONVENIENCE' => 1,
'AC_DEPLIBS_CHECK_METHOD' => 1,
- 'AM_SET_CURRENT_AUTOMAKE_VERSION' => 1,
'AC_LIBLTDL_INSTALLABLE' => 1,
+ 'AM_SET_CURRENT_AUTOMAKE_VERSION' => 1,
'AC_LTDL_ENABLE_INSTALL' => 1,
- 'AC_LIBTOOL_SYS_DYNAMIC_LINKER' => 1,
'LT_PROG_GCJ' => 1,
+ 'AC_LIBTOOL_SYS_DYNAMIC_LINKER' => 1,
'AM_INIT_AUTOMAKE' => 1,
'AC_DISABLE_STATIC' => 1,
'LT_PATH_NM' => 1,
'AC_LTDL_SHLIBEXT' => 1,
'_LT_AC_LOCK' => 1,
'_LT_AC_LANG_RC_CONFIG' => 1,
'LT_PROG_GO' => 1,
'LT_SYS_MODULE_PATH' => 1,
- 'AC_LIBTOOL_POSTDEP_PREDEP' => 1,
'LT_WITH_LTDL' => 1,
+ 'AC_LIBTOOL_POSTDEP_PREDEP' => 1,
'AC_LTDL_SHLIBPATH' => 1,
'AM_AUX_DIR_EXPAND' => 1,
- '_LT_AC_LANG_F77_CONFIG' => 1,
'AC_LIBTOOL_PROG_COMPILER_NO_RTTI' => 1,
- '_AM_SET_OPTIONS' => 1,
+ '_LT_AC_LANG_F77_CONFIG' => 1,
'_LT_COMPILER_OPTION' => 1,
- '_AM_OUTPUT_DEPENDENCY_COMMANDS' => 1,
+ '_AM_SET_OPTIONS' => 1,
'AM_RUN_LOG' => 1,
- 'AC_LIBTOOL_SYS_OLD_ARCHIVE' => 1,
- 'AC_LTDL_SYS_DLOPEN_DEPLIBS' => 1,
+ '_AM_OUTPUT_DEPENDENCY_COMMANDS' => 1,
'AC_LIBTOOL_PICMODE' => 1,
- 'LT_PATH_LD' => 1,
+ 'AC_LTDL_SYS_DLOPEN_DEPLIBS' => 1,
+ 'AC_LIBTOOL_SYS_OLD_ARCHIVE' => 1,
'AC_CHECK_LIBM' => 1,
+ 'LT_PATH_LD' => 1,
'AC_LIBTOOL_SYS_LIB_STRIP' => 1,
'_AM_MANGLE_OPTION' => 1,
- 'AC_LTDL_SYMBOL_USCORE' => 1,
'AC_LIBTOOL_SYS_MAX_CMD_LEN' => 1,
+ 'AC_LTDL_SYMBOL_USCORE' => 1,
'AM_SET_DEPDIR' => 1,
- '_LT_CC_BASENAME' => 1,
'PKG_PROG_PKG_CONFIG' => 1,
+ '_LT_CC_BASENAME' => 1,
'_LT_LIBOBJ' => 1
}
], 'Autom4te::Request' ),
bless( [
'6',
1,
[
'/usr/share/autoconf'
],
[
'/usr/share/autoconf/autoconf/autoconf.m4f',
'/usr/share/aclocal/argz.m4',
'/usr/share/aclocal/ltdl.m4',
'/usr/share/aclocal/pkg.m4',
'/usr/share/aclocal-1.12/amversion.m4',
'/usr/share/aclocal-1.12/auxdir.m4',
'/usr/share/aclocal-1.12/cond.m4',
'/usr/share/aclocal-1.12/depend.m4',
'/usr/share/aclocal-1.12/depout.m4',
'/usr/share/aclocal-1.12/init.m4',
'/usr/share/aclocal-1.12/install-sh.m4',
'/usr/share/aclocal-1.12/lead-dot.m4',
'/usr/share/aclocal-1.12/make.m4',
'/usr/share/aclocal-1.12/missing.m4',
'/usr/share/aclocal-1.12/options.m4',
'/usr/share/aclocal-1.12/runlog.m4',
'/usr/share/aclocal-1.12/sanity.m4',
'/usr/share/aclocal-1.12/silent.m4',
'/usr/share/aclocal-1.12/strip.m4',
'/usr/share/aclocal-1.12/substnot.m4',
'/usr/share/aclocal-1.12/tar.m4',
'm4/libtool.m4',
'm4/ltoptions.m4',
'm4/ltsugar.m4',
'm4/ltversion.m4',
'm4/lt~obsolete.m4',
'configure.ac'
],
{
'AM_ENABLE_STATIC' => 1,
'AC_LIBTOOL_LANG_RC_CONFIG' => 1,
'_LT_AC_SHELL_INIT' => 1,
'AC_DEFUN' => 1,
'AC_PROG_LIBTOOL' => 1,
'_LT_AC_LANG_CXX_CONFIG' => 1,
'AM_AUTOMAKE_VERSION' => 1,
'AM_SUBST_NOTMAKE' => 1,
'AM_MISSING_PROG' => 1,
'AC_LIBTOOL_PROG_LD_HARDCODE_LIBPATH' => 1,
'_LT_AC_LANG_C_CONFIG' => 1,
'AM_PROG_INSTALL_STRIP' => 1,
'_m4_warn' => 1,
'AC_LIBTOOL_OBJDIR' => 1,
'gl_FUNC_ARGZ' => 1,
- 'AM_SANITY_CHECK' => 1,
'LTOBSOLETE_VERSION' => 1,
+ 'AM_SANITY_CHECK' => 1,
'AC_LIBTOOL_LANG_GCJ_CONFIG' => 1,
'AC_LIBTOOL_PROG_COMPILER_PIC' => 1,
'LT_LIB_M' => 1,
'_LT_AC_CHECK_DLFCN' => 1,
'AC_LIBTOOL_SYS_GLOBAL_SYMBOL_PIPE' => 1,
'LTSUGAR_VERSION' => 1,
'_LT_PROG_LTMAIN' => 1,
'LT_SYS_SYMBOL_USCORE' => 1,
'_AM_PROG_TAR' => 1,
'AC_LIBTOOL_GCJ' => 1,
'_LT_WITH_SYSROOT' => 1,
'LT_SYS_DLOPEN_DEPLIBS' => 1,
'LT_FUNC_DLSYM_USCORE' => 1,
'_LT_AC_LANG_F77' => 1,
'AC_LIBTOOL_CONFIG' => 1,
- '_AM_SUBST_NOTMAKE' => 1,
'AC_LTDL_DLLIB' => 1,
+ '_AM_SUBST_NOTMAKE' => 1,
'_AM_AUTOCONF_VERSION' => 1,
'AM_DISABLE_SHARED' => 1,
'_LT_PROG_ECHO_BACKSLASH' => 1,
'_LTDL_SETUP' => 1,
- '_LT_AC_LANG_CXX' => 1,
'AM_PROG_LIBTOOL' => 1,
- 'AC_LIB_LTDL' => 1,
- '_LT_AC_FILE_LTDLL_C' => 1,
+ '_LT_AC_LANG_CXX' => 1,
'AM_PROG_LD' => 1,
+ '_LT_AC_FILE_LTDLL_C' => 1,
+ 'AC_LIB_LTDL' => 1,
'AU_DEFUN' => 1,
'AC_PROG_NM' => 1,
'AC_LIBTOOL_DLOPEN' => 1,
'AC_PROG_LD' => 1,
'AC_PROG_LD_GNU' => 1,
'AC_ENABLE_FAST_INSTALL' => 1,
'AC_LIBTOOL_FC' => 1,
'LTDL_CONVENIENCE' => 1,
'_AM_SET_OPTION' => 1,
'AC_LTDL_PREOPEN' => 1,
'_LT_LINKER_BOILERPLATE' => 1,
'_LT_PREPARE_SED_QUOTE_VARS' => 1,
'AC_LIBTOOL_LANG_CXX_CONFIG' => 1,
'AC_LIBTOOL_PROG_CC_C_O' => 1,
'gl_PREREQ_ARGZ' => 1,
'LT_SUPPORTED_TAG' => 1,
'AM_OUTPUT_DEPENDENCY_COMMANDS' => 1,
'LT_PROG_RC' => 1,
'LT_SYS_MODULE_EXT' => 1,
'AC_DEFUN_ONCE' => 1,
'_LT_AC_LANG_GCJ' => 1,
'AC_LTDL_OBJDIR' => 1,
'_LT_PATH_TOOL_PREFIX' => 1,
'AC_LIBTOOL_RC' => 1,
- 'AM_SILENT_RULES' => 1,
- 'AC_DISABLE_FAST_INSTALL' => 1,
'_LT_AC_PROG_ECHO_BACKSLASH' => 1,
- '_LT_AC_SYS_LIBPATH_AIX' => 1,
- '_LT_AC_TRY_DLOPEN_SELF' => 1,
+ 'AC_DISABLE_FAST_INSTALL' => 1,
+ 'AM_SILENT_RULES' => 1,
'include' => 1,
+ '_LT_AC_TRY_DLOPEN_SELF' => 1,
+ '_LT_AC_SYS_LIBPATH_AIX' => 1,
'LT_AC_PROG_SED' => 1,
'AM_ENABLE_SHARED' => 1,
'LTDL_INSTALLABLE' => 1,
'_LT_AC_LANG_GCJ_CONFIG' => 1,
'AC_ENABLE_SHARED' => 1,
- 'AC_ENABLE_STATIC' => 1,
- 'AC_LIBTOOL_SYS_HARD_LINK_LOCKS' => 1,
'_LT_REQUIRED_DARWIN_CHECKS' => 1,
+ 'AC_LIBTOOL_SYS_HARD_LINK_LOCKS' => 1,
+ 'AC_ENABLE_STATIC' => 1,
'_LT_AC_TAGVAR' => 1,
'AC_LIBTOOL_LANG_F77_CONFIG' => 1,
'AM_CONDITIONAL' => 1,
'LT_LIB_DLLOAD' => 1,
- 'LTDL_INIT' => 1,
- '_LT_PROG_F77' => 1,
- '_LT_PROG_CXX' => 1,
'LTVERSION_VERSION' => 1,
- 'AM_PROG_INSTALL_SH' => 1,
+ '_LT_PROG_CXX' => 1,
+ '_LT_PROG_F77' => 1,
+ 'LTDL_INIT' => 1,
'm4_include' => 1,
+ 'AM_PROG_INSTALL_SH' => 1,
'AC_PROG_EGREP' => 1,
- '_AC_AM_CONFIG_HEADER_HOOK' => 1,
'AC_PATH_MAGIC' => 1,
+ '_AC_AM_CONFIG_HEADER_HOOK' => 1,
'AC_LTDL_SYSSEARCHPATH' => 1,
'AM_MAKE_INCLUDE' => 1,
'LT_CMD_MAX_LEN' => 1,
'_LT_AC_TAGCONFIG' => 1,
'm4_pattern_forbid' => 1,
'_LT_LINKER_OPTION' => 1,
'AC_LIBTOOL_COMPILER_OPTION' => 1,
'PKG_INSTALLDIR' => 1,
'AC_DISABLE_SHARED' => 1,
'_LT_COMPILER_BOILERPLATE' => 1,
'AC_LIBTOOL_WIN32_DLL' => 1,
'AC_LIBTOOL_SETUP' => 1,
'AC_PROG_LD_RELOAD_FLAG' => 1,
'AC_LTDL_DLSYM_USCORE' => 1,
'AM_MISSING_HAS_RUN' => 1,
'LT_LANG' => 1,
'LT_SYS_DLSEARCH_PATH' => 1,
'LT_CONFIG_LTDL_DIR' => 1,
'AC_LIBTOOL_DLOPEN_SELF' => 1,
'LT_OUTPUT' => 1,
'AC_LIBTOOL_PROG_LD_SHLIBS' => 1,
'_PKG_SHORT_ERRORS_SUPPORTED' => 1,
'AC_WITH_LTDL' => 1,
'AC_LIBTOOL_LINKER_OPTION' => 1,
'PKG_CHECK_EXISTS' => 1,
'LT_AC_PROG_RC' => 1,
'AC_LIBTOOL_CXX' => 1,
'LT_INIT' => 1,
'LT_AC_PROG_GCJ' => 1,
'LT_SYS_DLOPEN_SELF' => 1,
'_LT_AC_PROG_CXXCPP' => 1,
'AM_DEP_TRACK' => 1,
'AM_DISABLE_STATIC' => 1,
'_AC_PROG_LIBTOOL' => 1,
'_AM_IF_OPTION' => 1,
- 'PKG_NOARCH_INSTALLDIR' => 1,
'AC_PATH_TOOL_PREFIX' => 1,
- 'AC_LIBTOOL_F77' => 1,
+ 'PKG_NOARCH_INSTALLDIR' => 1,
'm4_pattern_allow' => 1,
+ 'AC_LIBTOOL_F77' => 1,
'AM_SET_LEADING_DOT' => 1,
- 'LT_AC_PROG_EGREP' => 1,
'_LT_PROG_FC' => 1,
+ 'LT_AC_PROG_EGREP' => 1,
'_AM_DEPENDENCIES' => 1,
'AC_LIBTOOL_LANG_C_CONFIG' => 1,
'LTOPTIONS_VERSION' => 1,
'_LT_AC_SYS_COMPILER' => 1,
'AM_PROG_NM' => 1,
'PKG_CHECK_MODULES' => 1,
'AC_LIBLTDL_CONVENIENCE' => 1,
'AC_DEPLIBS_CHECK_METHOD' => 1,
- 'AM_SET_CURRENT_AUTOMAKE_VERSION' => 1,
'AC_LIBLTDL_INSTALLABLE' => 1,
+ 'AM_SET_CURRENT_AUTOMAKE_VERSION' => 1,
'AC_LTDL_ENABLE_INSTALL' => 1,
- 'AC_LIBTOOL_SYS_DYNAMIC_LINKER' => 1,
'LT_PROG_GCJ' => 1,
+ 'AC_LIBTOOL_SYS_DYNAMIC_LINKER' => 1,
'AM_INIT_AUTOMAKE' => 1,
'AC_DISABLE_STATIC' => 1,
'LT_PATH_NM' => 1,
'AC_LTDL_SHLIBEXT' => 1,
'_LT_AC_LOCK' => 1,
'_LT_AC_LANG_RC_CONFIG' => 1,
'LT_PROG_GO' => 1,
'LT_SYS_MODULE_PATH' => 1,
- 'AC_LIBTOOL_POSTDEP_PREDEP' => 1,
'LT_WITH_LTDL' => 1,
+ 'AC_LIBTOOL_POSTDEP_PREDEP' => 1,
'AC_LTDL_SHLIBPATH' => 1,
'AM_AUX_DIR_EXPAND' => 1,
- '_LT_AC_LANG_F77_CONFIG' => 1,
'AC_LIBTOOL_PROG_COMPILER_NO_RTTI' => 1,
- '_AM_SET_OPTIONS' => 1,
+ '_LT_AC_LANG_F77_CONFIG' => 1,
'_LT_COMPILER_OPTION' => 1,
- '_AM_OUTPUT_DEPENDENCY_COMMANDS' => 1,
+ '_AM_SET_OPTIONS' => 1,
'AM_RUN_LOG' => 1,
- 'AC_LIBTOOL_SYS_OLD_ARCHIVE' => 1,
- 'AC_LTDL_SYS_DLOPEN_DEPLIBS' => 1,
+ '_AM_OUTPUT_DEPENDENCY_COMMANDS' => 1,
'AC_LIBTOOL_PICMODE' => 1,
- 'LT_PATH_LD' => 1,
+ 'AC_LTDL_SYS_DLOPEN_DEPLIBS' => 1,
+ 'AC_LIBTOOL_SYS_OLD_ARCHIVE' => 1,
'AC_CHECK_LIBM' => 1,
+ 'LT_PATH_LD' => 1,
'AC_LIBTOOL_SYS_LIB_STRIP' => 1,
'_AM_MANGLE_OPTION' => 1,
- 'AC_LTDL_SYMBOL_USCORE' => 1,
'AC_LIBTOOL_SYS_MAX_CMD_LEN' => 1,
+ 'AC_LTDL_SYMBOL_USCORE' => 1,
'AM_SET_DEPDIR' => 1,
- '_LT_CC_BASENAME' => 1,
'PKG_PROG_PKG_CONFIG' => 1,
+ '_LT_CC_BASENAME' => 1,
'_LT_LIBOBJ' => 1
}
], 'Autom4te::Request' )
);
Index: trunk/doc/stat_methods.pdf
===================================================================
Cannot display: file marked as a binary type.
svn:mime-type = application/pdf
Index: trunk/doc/stat_methods.tex
===================================================================
--- trunk/doc/stat_methods.tex (revision 101)
+++ trunk/doc/stat_methods.tex (revision 102)
@@ -1,3097 +1,3207 @@
\documentclass[12pt,titlepage]{article}
\usepackage{fullpage}
\usepackage{amsmath}
\usepackage{makeidx}
\usepackage{epsfig}
\usepackage[hyperfootnotes=false]{hyperref}
\newcommand{\sub}[1]{\ensuremath{_{\mbox{\scriptsize \,#1}}}}
\newcommand{\supers}[1]{\ensuremath{^{\mbox{\scriptsize #1}}}}
\newcommand{\cname}[1]{\index{#1}\textsf{#1}}
\newcommand{\E}{\ensuremath{\mathbb{E}}}
\newcommand{\V}{\ensuremath{\mathbb{V}}}
\newcommand{\R}{\ensuremath{\mathbb{R}}}
\newcommand{\etal}{\emph{et al.}}
\newcommand{\hf}{\hat{f}_N}
\newcommand{\xf}{x\sub{fit}}
\newcommand{\scriptname}[1]{\texttt{#1}}
\newcommand{\at}{\tilde{a}\sub{fit}}
\newcommand{\bt}{\tilde{b}\sub{fit}}
\makeindex
\renewcommand\indexname{Functions and Classes}
\newenvironment{thinlist} {
\begin{list} {---} {
\setlength{\topsep}{0.075cm}
\setlength{\parsep}{0.075cm}
\setlength{\itemsep}{0.075cm}
}
} {\end{list}}
\begin{document}
\title{Statistical Methods in the NPStat Package}
\author{I. Volobouev, {\it i.volobouev@ttu.edu}}
\date{Version: 2.2.0 \hspace{1.5cm} Date: \today}
\maketitle
\tableofcontents
\newpage
\section{Preliminaries}
\label{sec:preliminaries}
The NPStat package provides a C++ implementation of
several nonparametric
statistical modeling tools together with
various supporting code. Nonparametric modeling becomes very
useful when there is little prior information available to justify an
assumption that the data belongs to a certain parametric family of
distributions. Even though nonparametric techniques are usually
more computationally intensive than parametric ones,
judicious use of fast algorithms in combination with
increasing power of modern computers often results in very practical
solutions --- at least until the ``curse of dimensionality'' starts
imposing its heavy toll.
It will be assumed that the reader of this note is reasonably
proficient in C++ programming, and that the meaning
of such words as ``class'', ``function'', ``template'',
``method'', ``container'', ``namespace'', {\it etc.}
is obvious from their context.
The latest version of the package code can be downloaded from
\href{http://npstat.hepforge.org/}{http://npstat.hepforge.org/}.
Most classes and functions implemented in NPStat are placed in the
``npstat'' namespace. A~few functions and classes that
need Minuit2~\cite{ref:minuit} installation
in order to be meaningfully used belong
to the ``npsi'' namespace. Such functions and
classes can be found in the ``interfaces''
directory of the package.
When a C++ function, class, or template is mentioned in the text
for the first time, the header file which contains its declaration
is often mentioned as well. If the header file is not mentioned,
it is most likely ``npstat/stat/NNNN.hh'', where NNNN stands
for the actual name of the class or function.
An expression ``$I(Q)$'' is frequently employed in the text,
where $Q$ is some logical proposition.
$I(Q)$ is the proposition indicator function
defined as follows: $I(Q) = 1$ if $Q$ is true
and $I(Q) = 0$ if $Q$ is false. For example, a product of $I(0 \le x \le 1)$
with some other mathematical function is often used to
denote some probability density supported
on the $[0, 1]$ interval. Note that $I(\neg Q) = 1 - I(Q)$, where
$\neg Q$ is the logical negation of $Q$.
\section{Data Representation}
\label{sec:datarep}
For data storage, the NPStat software relies on the object serialization
and I/O capabilities provided
by the C++ Geners package~\cite{ref:geners}.
Geners supports persistence of all standard library containers
(vectors, lists, maps, {\it etc.}) including those introduced
in the C++11 Standard~\cite{ref:cpp11} (arrays, tuples, unordered sets
and maps). In addition to data structures supported by Geners, NPStat
offers several other persistent containers useful in statistical
data analysis: multidimensional arrays, homogeneous $n$-tuples, and
histograms.
Persistent multidimensional arrays are implemented with the \cname{ArrayND}
template (header file ``npstat/nm/ArrayND.hh'').
This is a~reasonably full-featured array class, with support for
subscripting (with and without bounds checking), vector space operations
(addition, subtraction,
multiplication by a scalar), certain tensor operations (index contraction,
outer product), slicing, iteration over subranges, linear and cubic
interpolation of array values
to fractional indices, {\it etc.} Arrays whose maximum
number of elements is known at compile time can be efficiently
created on the stack.
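At the heart of any such multidimensional array class is the mapping from an index vector to a position in flat storage. The standard row-major stride arithmetic behind this mapping can be sketched in a few lines (an illustration of the general idea only, not the NPStat code):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Row-major flattening of a multidimensional index: the last index
// varies fastest. This is the usual scheme behind array classes like
// ArrayND (illustration of the idea only, not the NPStat code).
std::size_t flatIndex(const std::vector<std::size_t>& shape,
                      const std::vector<std::size_t>& idx)
{
    std::size_t flat = 0;
    for (std::size_t d = 0; d < shape.size(); ++d)
        flat = flat*shape[d] + idx[d];  // Horner-style stride accumulation
    return flat;
}
```

For example, in a $3 \times 4 \times 5$ array the index $(1, 2, 3)$ maps to flat position $(1 \cdot 4 + 2) \cdot 5 + 3 = 33$.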
Homogeneous $n$-tuples are two-dimensional tables in which the number of
columns is fixed while the number of rows can grow dynamically.
The stored object type is chosen when the $n$-tuple is created
(it is a template parameter). For
homogeneous $n$-tuples, the storage type is the same for every column,
which allows for more efficient memory allocation
than is possible with heterogeneous $n$-tuples.\footnote{The Geners
serialization package supports
several types of persistent heterogeneous $n$-tuples including
std::vector$<$std::tuple$<$...$> >$,
RowPacker variadic template
optimized for accessing large data samples row-by-row,
and ColumnPacker variadic template
optimized for data access column-by-column.}
Two persistent template classes are provided for homogeneous $n$-tuples:
\cname{InMemoryNtuple} for $n$-tuples which can completely fit inside
the computer memory (header file ``npstat/stat/InMemoryNtuple.hh'')
and \cname{ArchivedNtuple} for $n$-tuples
which can grow as large as the available disk space
(header file ``npstat/stat/ArchivedNtuple.hh''). Both of these
classes inherit from the abstract class \cname{AbsNtuple} which
defines all interfaces relevant for statistical analysis of the data
these $n$-tuples contain. Thus \cname{InMemoryNtuple} and \cname{ArchivedNtuple}
are completely interchangeable for use in various data analysis algorithms.
Arbitrary-dimensional\footnote{To be precise,
the number of dimensions for both histograms
and multidimensional arrays cannot exceed
31 on 32-bit systems and 63 on 64-bit systems. However, your computer will
probably run out of memory long before this limitation
becomes relevant for your data analysis.} persistent
histograms are implemented with the \cname{HistoND}
template (header file ``npstat/stat/HistoND.hh''). Histograms with
uniform and with non-uniform bin edge placement can be created using the
axis classes \cname{HistoAxis} and \cname{NUHistoAxis}, respectively,
as \cname{HistoND} template parameters. The \cname{DualHistoAxis} class
can be employed for histograms in which only some of the axes
must have non-uniform bins.
Two binning schemes are supported
for data sample processing.
In the normal scheme, the bin count is incremented if the point coordinates
fall inside the given bin (the \cname{fill} method of the class).
In the centroid-preserving scheme,
all bins in the neighborhood of the sample point
are incremented in such a way that the center of mass of bin
increments coincides exactly with the point location
(the \cname{fillC} method)\footnote{For one-dimensional histograms,
this is called ``linear binning''~\cite{ref:linearbinning}.
NPStat supports this technique for uniformly binned
histograms of arbitrary dimensionality.}.
The objects used as
histogram bin contents and the objects used as
weights added to the bins when
the histogram is filled are not required to have
anything in common; the only requirement is that a~meaningful
binary function (typically, operator+=) can be defined for them.
This level of abstraction allows, for example, for using vectors and tensors
as histogram bin contents or for implementing profile histograms with
the same \cname{HistoND} template. As the bin contents are implemented
with the \cname{ArrayND} class, all features of \cname{ArrayND}
(vector space operations, slicing, interpolation, {\it etc}) are available
for them as well. Overflows are stored in separate arrays with
three cells in each dimension: underflow, overflow, and the central part
covered by the regular histogram bins.
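The centroid-preserving scheme described above is easy to illustrate in one dimension with uniform bins: the weight is split between the two bins whose centers bracket the point, in proportions chosen so that the center of mass of the increments lands exactly on the point. A minimal standalone sketch (not the actual \cname{fillC} implementation):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Standalone sketch of 1-d "linear binning" (the centroid-preserving
// scheme of HistoND::fillC for a uniform axis). Illustration only,
// not the NPStat implementation. The weight w is split between the
// two bins whose centers bracket x so that the center of mass of the
// increments coincides with x. Shares falling outside the axis range
// are simply dropped in this sketch.
void fillLinear(std::vector<double>& bins, double xmin, double h,
                double x, double w)
{
    // Position of x in units of bin width, measured from the center of bin 0
    const double t = (x - xmin)/h - 0.5;
    const int ileft = static_cast<int>(std::floor(t));
    const double fracRight = t - ileft;          // share for bin ileft+1
    if (ileft >= 0 && ileft < static_cast<int>(bins.size()))
        bins[ileft] += w*(1.0 - fracRight);
    if (ileft + 1 >= 0 && ileft + 1 < static_cast<int>(bins.size()))
        bins[ileft + 1] += w*fracRight;
}
```

Filling $x = 1$ into unit-width bins starting at 0 (bin centers at 0.5, 1.5, \ldots) deposits half of the weight into each of the first two bins, so that the centroid of the increments, $0.5 \cdot 0.5 + 0.5 \cdot 1.5 = 1$, reproduces the point location.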
\section{Descriptive Statistics}
\label{sec:descriptivestats}
A number of facilities are included in the NPStat package for calculating
standard descriptive sample statistics such as mean, covariance matrix,
{\it etc}. Relevant functions, classes, and templates are described below.
\subsection{Functions and Classes for Use with Point Clouds}
\cname{arrayStats} (header file ``npstat/stat/arrayStats.hh'') --- This
function estimates the population mean ($\mu$), standard deviation ($\sigma$),
skewness ($s$),
and kurtosis ($k$) for a one-dimensional set of points. A numerically sound
two-pass algorithm is used.
% Internally, calculations are performed in long double precision.
The following formulae are implemented ($N$ is the number of points
in the sample):
\begin{equation}
\mu = \frac{1}{N} \sum_{i=0}^{N-1} x_i
\label{eq:samplemean}
\end{equation}
\begin{equation}
\sigma = \sqrt{\frac{1}{N - 1} \sum_{i=0}^{N-1} (x_i - \mu)^2}
\label{eq:samplestdev}
\end{equation}
\begin{equation}
s = \frac{N}{(N - 1) (N - 2) \sigma^3} \sum_{i=0}^{N-1} (x_i - \mu)^3
\end{equation}
\begin{equation}
g_2 = N \frac{\sum_{i=0}^{N-1} (x_i - \mu)^4}{\left( \sum_{i=0}^{N-1} (x_i - \mu)^2 \right)^2} - 3
\label{eq:kurtosisg2}
\end{equation}
\begin{equation}
k = \frac{N - 1}{(N - 2) (N - 3)} ((N + 1) g_2 + 6) + 3
\label{eq:kurtosis}
\end{equation}
Note that the population kurtosis estimate is defined in such a way that its
expectation value for a sample drawn from the Gaussian distribution equals
3 rather than 0. For more details on the origin of these formulae
consult Ref.~\cite{ref:sampleskewkurt}.
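Formulas (\ref{eq:samplemean})--(\ref{eq:kurtosis}) translate into a straightforward two-pass computation. The sketch below illustrates the arithmetic only; it is not the actual \cname{arrayStats} implementation, which in particular pays more attention to numerical precision:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Two-pass sketch of the sample statistics formulas (illustration
// only, not the arrayStats implementation). The first pass
// accumulates the mean, the second the centered moment sums.
struct SampleStats { double mean, stdev, skew, kurt; };

SampleStats sampleStats(const std::vector<double>& x)
{
    const double N = x.size();
    double mu = 0.0;
    for (double v : x) mu += v;
    mu /= N;

    double m2 = 0.0, m3 = 0.0, m4 = 0.0;   // sums of centered powers
    for (double v : x) {
        const double d = v - mu;
        m2 += d*d; m3 += d*d*d; m4 += d*d*d*d;
    }
    const double sigma = std::sqrt(m2/(N - 1.0));
    const double s = N/((N - 1.0)*(N - 2.0)*sigma*sigma*sigma)*m3;
    const double g2 = N*m4/(m2*m2) - 3.0;
    const double k = (N - 1.0)/((N - 2.0)*(N - 3.0))*((N + 1.0)*g2 + 6.0) + 3.0;
    return {mu, sigma, s, k};
}
```

For the sample $\{1, 2, 3, 4, 5\}$ this yields $\mu = 3$, $\sigma = \sqrt{2.5}$, $s = 0$, and $k = 1.8$.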
\cname{empiricalCdf} (header file ``npstat/stat/StatUtils.hh'') --- This
function evaluates the empirical cumulative distribution function (ECDF) for
a sample of points. The points must be sorted
in the increasing order by the user before
the function call is made.
The ECDF is defined as follows. Suppose the $N$ sorted point values are
$\{x_0, x_1, ..., x_{N-1}\}$ and all $x_i$ are distinct ({\it i.e.,} there
are no ties). Then
\begin{equation}
\mbox{ECDF}(x) = \left\{ \begin{array}{ll}
0 & \mbox{if\ } x \le x_0, \\
\frac{1}{N} \left( \frac{x-x_i}{x_{i+1}-x_i}+i+1/2 \right) & \mbox{if\ } x_i < x \le x_{i+1} \ \mbox{and} \ x < x_{N-1}, \\
1 & \mbox{if\ } x \ge x_{N-1}.
\end{array}
\right.
\label{eq:ecdf}
\end{equation}
When all sample points are distinct, $\mbox{ECDF}(x)$ is discontinuous
at $x = x_0$ and $x = x_{N-1}$ and continuous for all other $x$.
If the sample has more than one point
with the same value of $x$, say $x_k$ and
$x_{k+1}$ with $0 < k < N - 2$, then $x_k$ also becomes
a point of discontinuity.
$\mbox{ECDF}(x_k)$ will be set to $\frac{1}{N} (k + 1/2)$, the value
coincident with the left limit of the function at this point.
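A direct Python transcription of Eq.~\ref{eq:ecdf} for distinct sorted points (illustration only, not the NPStat code):

```python
import bisect

def empirical_cdf(sorted_x, x):
    """ECDF for a sorted sample with distinct values, per the
    piecewise definition above."""
    n = len(sorted_x)
    if x <= sorted_x[0]:
        return 0.0
    if x >= sorted_x[-1]:
        return 1.0
    # locate i such that x_i < x <= x_{i+1}
    i = bisect.bisect_left(sorted_x, x) - 1
    xi, xj = sorted_x[i], sorted_x[i + 1]
    return ((x - xi) / (xj - xi) + i + 0.5) / n
```

Note that at any interior sample point the linear pieces on the two sides meet, so the function is continuous there, in agreement with the discussion above.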
\cname{empiricalQuantile} (header file ``npstat/stat/StatUtils.hh'') --- This
is the empirical quantile function, $\mbox{EQF}(y)$, which provides
an inverse to $\mbox{ECDF}(x)$. For any $x$ from the interval
$[x_0, x_{N-1}]$, $\mbox{EQF}(\mbox{ECDF}(x)) = x$ (up to numerical
round-off errors). If $x < x_0$ then $\mbox{EQF}(\mbox{ECDF}(x)) = x_0$ and if
$x > x_{N-1}$ then $\mbox{EQF}(\mbox{ECDF}(x)) = x_{N-1}$.
\cname{MultivariateSumAccumulator} and \cname{MultivariateSumsqAccumulator}
--- These classes are intended for calculating means and covariance matrices
for multivariate data samples in a~numerically stable manner. The
classes can be
used inside user-implemented cycles over sets of points or with
\cname{cycleOverRows} and other similar methods of the \cname{AbsNtuple}
class. Two passes over the data
are necessary for calculating the covariance matrix,
first with
\cname{MultivariateSumAccumulator} and second with
\cname{MultivariateSumsqAccumulator}. The multivariate
mean is initially found from
\begin{equation}
\overline{{\bf x}} = \frac{1}{N} \sum_{i=0}^{N-1} {\bf x}_i
\end{equation}
and then an estimate of the population covariance matrix is calculated as
\begin{equation}
V = \frac{1}{N - 1} \
\sum_{i=0}^{N-1} ({\bf x}_i - \overline{{\bf x}}) ({\bf x}_i - \overline{{\bf x}})^T
\label{eq:covar}
\end{equation}
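A plain Python sketch of the two-pass procedure (illustration of Eq.~\ref{eq:covar} only, not the accumulator class interface):

```python
def two_pass_covariance(points):
    """First pass: multivariate mean. Second pass: population
    covariance matrix estimate with the 1/(N-1) factor."""
    n, d = len(points), len(points[0])
    mean = [sum(p[j] for p in points) / n for j in range(d)]
    cov = [[0.0] * d for _ in range(d)]
    for p in points:
        for j in range(d):
            for k in range(d):
                cov[j][k] += (p[j] - mean[j]) * (p[k] - mean[k])
    return mean, [[c / (n - 1) for c in row] for row in cov]
```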
\cname{MultivariateWeightedSumAccumulator} and
\cname{MultivariateWeightedSumsqAccumulator} --- These classes are
intended for calculating means and covariance matrices
for multivariate data samples in which points enter with non-negative
weights:
\begin{equation}
\overline{{\bf x}} = \frac{\sum_{i=0}^{N-1} w_i {\bf x}_i}{\sum_{i=0}^{N-1} w_i},
\label{eq:weightedmean}
\end{equation}
\begin{equation}
V = \frac{N_{eff}}{N_{eff} - 1} \
\frac{\sum_{i=0}^{N-1} w_i ({\bf x}_i - \overline{\bf x}) ({\bf x}_i - \overline{\bf x})^T}{\sum_{i=0}^{N-1} w_i},
\label{eq:weightedcov}
\end{equation}
where $N_{eff}$ is the effective number of entries in a weighted
sample\footnote{$N_{eff}$ is also known as Kish's effective sample size. Note that this
estimate of $V$ makes sense only if the weights and point locations are statistically independent
from each other. If the weights are functions of the observed ${\bf x}_i$ then there
appears to be no general estimate that is independent of the weighting function.}:
\begin{equation}
N_{eff} = \frac{\left( \sum_{i=0}^{N-1} w_i \right)^2}{\sum_{i=0}^{N-1} w_i^2}.
\label{eq:neff}
\end{equation}
Two passes over the data are required, first with
\cname{MultivariateWeightedSumAccumulator} and then with
\cname{MultivariateWeightedSumsqAccumulator}.
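Eqs.~\ref{eq:weightedmean}--\ref{eq:neff} can be prototyped in a few lines of Python (illustration only; a single function here stands in for the two accumulator classes):

```python
def kish_neff(w):
    """Effective number of entries for a weighted sample."""
    return sum(w) ** 2 / sum(v * v for v in w)

def weighted_mean_cov(points, w):
    """Weighted mean and covariance matrix with the
    Neff/(Neff - 1) bias-correction factor."""
    sw = sum(w)
    d = len(points[0])
    mean = [sum(wi * p[j] for wi, p in zip(w, points)) / sw
            for j in range(d)]
    cov = [[0.0] * d for _ in range(d)]
    for wi, p in zip(w, points):
        for j in range(d):
            for k in range(d):
                cov[j][k] += wi * (p[j] - mean[j]) * (p[k] - mean[k])
    neff = kish_neff(w)
    f = neff / (neff - 1.0) / sw
    return mean, [[c * f for c in row] for row in cov]
```

With unit weights $N_{eff} = N$ and the result reduces to the unweighted estimate of Eq.~\ref{eq:covar}.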
\cname{StatAccumulator} --- Persistent class which determines
the minimum, the maximum, the mean, and the standard
deviation of a~sample of values using a single-pass algorithm.
It can be used when the two-pass calculation
is either impossible or impractical for
some reason. This class can be conveniently employed as
a bin content type for \cname{HistoND} (this turns \cname{HistoND}
into a profile histogram).
\cname{StatAccumulator}
maintains a running average which is updated
after accumulating $2^K$ points, $K = 0, 1, 2, ...$ The
numerical precision of the standard deviation determined
in this manner is not as good as in the two-pass method but it
is usually much better than that expected
from a~naive implementation which simply accumulates sample moments about 0
and then evaluates $\sigma^2$ according to
$\frac{N}{N - 1} (\overline{{\bf x}^2} - {\overline{\bf x}}^2)$ (this formula
can suffer from a severe subtractive cancellation problem).
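The cancellation problem is easy to reproduce; the following Python illustration contrasts the naive moments-about-zero formula with the two-pass calculation (not NPStat code; the actual NPStat update scheme differs, as described above):

```python
import math

def naive_stdev(xs):
    """Single-pass moments about zero:
    sigma^2 = N/(N-1) * (mean(x^2) - mean(x)^2).
    The subtraction can cancel almost all significant digits."""
    n = len(xs)
    m1 = sum(xs) / n
    m2 = sum(x * x for x in xs) / n
    return math.sqrt(max(0.0, n / (n - 1) * (m2 - m1 * m1)))

def two_pass_stdev(xs):
    """Two passes: mean first, then central second moment."""
    n = len(xs)
    mu = sum(xs) / n
    return math.sqrt(sum((x - mu) ** 2 for x in xs) / (n - 1))

# Sample whose mean is nine orders of magnitude larger than its
# standard deviation; the true standard deviation is exactly 1.
xs = [1.0e9 - 1.0, 1.0e9, 1.0e9 + 1.0]
```

For this sample the two-pass result is exact, while in double precision the squares round in such a way that the naive subtraction loses all significant digits and yields 0.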
\cname{WeightedStatAccumulator} --- Similar class for calculating
mean and standard deviation of a~sample of weighted values using
a single-pass algorithm.
\cname{StatAccumulatorArr} --- This class provides functionality similar
to an array of \cname{StatAccumulator} objects but with a more convenient
interface for accumulating sample values (cycling over the input values
is performed by the class itself).
\cname{StatAccumulatorPair} --- Two \cname{StatAccumulator} objects
augmented by the sum of cross terms which allows for subsequent
calculation of covariance. The covariance is estimated using
a formula which can potentially suffer from subtractive
cancellation\footnote{This problem is partially alleviated
by using long double type for internal calculations.}.
Therefore, this class should not be used to process large samples of points in
which, for one of the variables or for both of them,
the absolute value of the mean can be much larger than the
standard deviation. This class is persistent.
\cname{CrossCovarianceAccumulator} --- To accumulate the sums for
covariance matrix calculation according to formula~\ref{eq:covar}
in $d$-dimensional space,
one has to keep track of $d (d + 1)/2$ sums of squares and cross terms.
Sometimes only a small part of the complete covariance matrix is of interest.
In this case $d$ can often be split into subspaces of dimensionalities
$d_1$ and $d_2$, $d = d_1 + d_2$, so that the interesting part of
covariance matrix is dimensioned $d_1 \times d_2$ (plus $d$ diagonal terms).
For large values
of $d$, accumulating $d + d_1 \times d_2$ sums instead of $d (d + 1)/2$
can make a big difference in speed and memory consumption, especially
when $d_1 \ll d_2$ (or $d_1 \gg d_2$).
The \cname{CrossCovarianceAccumulator} allows
its users to do just that: it accumulates cross terms between two
arrays with $d_1$ and $d_2$ elements but not the cross terms between
elements of the same array. The covariances are estimated by a single-pass
method using a~formula which can suffer from subtractive
cancellation\footnotemark[5]. Use with care.
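The bookkeeping can be sketched as follows (a hypothetical Python illustration of the idea, not the NPStat class interface):

```python
class CrossCovAccumulator:
    """Single-pass sums for the d1 x d2 cross-covariance block only:
    d1 + d2 linear sums plus d1*d2 cross products are kept, instead
    of all d(d+1)/2 pairs for d = d1 + d2."""
    def __init__(self, d1, d2):
        self.n = 0
        self.s1 = [0.0] * d1
        self.s2 = [0.0] * d2
        self.cross = [[0.0] * d2 for _ in range(d1)]

    def accumulate(self, a, b):
        self.n += 1
        for i, ai in enumerate(a):
            self.s1[i] += ai
            for j, bj in enumerate(b):
                self.cross[i][j] += ai * bj
        for j, bj in enumerate(b):
            self.s2[j] += bj

    def cross_covariance(self):
        """Single-pass estimate; like the naive variance formula,
        it is subject to subtractive cancellation."""
        n = self.n
        return [[(self.cross[i][j] - self.s1[i] * self.s2[j] / n) / (n - 1)
                 for j in range(len(self.s2))]
                for i in range(len(self.s1))]
```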
\cname{SampleAccumulator} --- This class stores all values
it gets in an internal buffer. Whenever mean, standard deviation,
or some quantile function values are needed for the accumulated sample,
they are calculated from the stored values using numerically sound techniques.
Mean and standard deviation are calculated according to
Eqs.~\ref{eq:samplemean} and~\ref{eq:samplestdev}.
Covariance and correlations can be calculated for a pair of
\cname{SampleAccumulator} objects with the same number of stored elements
by corresponding methods of the class.
Empirical CDF and empirical quantile calculations
are handled by calling \cname{empiricalCdf} and \cname{empiricalQuantile}
functions internally, preceded by data sorting.
This class is persistent and can be used as the bin content template
parameter for \cname{HistoND}.
\cname{WeightedSampleAccumulator} --- Similar to \cname{SampleAccumulator},
but intended for calculating various statistics for samples
of weighted points.
Kendall's $\tau$ and Spearman's $\rho$
rank correlation coefficients can be estimated
by functions \cname{sampleKendallsTau} and \cname{sampleSpearmansRho},
respectively. These functions are declared in the header files
``npstat/stat/kendallsTau.hh'' and ``npstat/stat/spearmansRho.hh'',
respectively.
\subsection{Functions for Use with Binned Data}
\cname{histoMean} (header file ``npstat/stat/histoStats.hh'') --- This
function calculates the histogram center of mass according to
Eq.~\ref{eq:weightedmean} with bin contents
used as weights for bin center coordinates.
Note that the code of this function does not enforce non-negativity of all bin
values, and the result of Eq.~\ref{eq:weightedmean} will not necessarily
be meaningful if negative weights are present.
In order to check that all bins are non-negative
and that there is at least one positive bin, you can call the \cname{isDensity}
method of the \cname{ArrayND} class on the histogram
bin contents. The \cname{histoMean}
function is standalone and not a member of the \cname{HistoND} class
because Eq.~\ref{eq:weightedmean} does not make sense for all
possible bin types.
\cname{histoCovariance} (header file ``npstat/stat/histoStats.hh'') --- This
function estimates the population covariance matrix for histogrammed data
according to Eq.~\ref{eq:weightedcov}, with bin contents
used as weights for bin center coordinates.
As for \cname{histoMean}, results
produced by \cname{histoCovariance}
will not make much sense in case negative bin values are present.
\cname{arrayCoordMean} (header file ``npstat/stat/arrayStats.hh'') --- Similar
to \cname{histoMean} but the bin values are provided in an \cname{ArrayND}
object and bin locations are specified by additional arguments.
Only uniform bin spacing is supported by this function
(unlike \cname{histoMean} which simply gets its bin centers from
the argument histogram).
\cname{arrayCoordCovariance} (header file ``npstat/stat/arrayStats.hh'')
--- Similar to \cname{histoCovariance} but the bin values are provided
in an \cname{ArrayND} object and bin locations are specified by additional
arguments. Uniform binning only.
\cname{arrayShape1D} (header file ``npstat/stat/arrayStats.hh'') --- Estimates
population mean, standard deviation, skewness, and kurtosis for
one-dimensional histogrammed data with equidistant bins.
Bin values $b_i$ used as weights are provided in an \cname{ArrayND}
object and bin locations $x_i$ are specified by additional arguments.
The mean and the standard deviations squared
are calculated according to Eqs.~\ref{eq:weightedmean}
and~\ref{eq:weightedcov} reduced to one dimension.
The skewness is estimated as
\begin{equation}
s = \frac{N_{eff}^2}{(N_{eff} - 1) (N_{eff} - 2) \sigma^3} \
\frac{\sum_{i=0}^{N_b-1} b_i (x_i - \mu)^3}{\sum_{i=0}^{N_b-1} b_i},
\end{equation}
with $N_{eff}$ defined by Eq.~\ref{eq:neff}. The kurtosis is found from Eq.~\ref{eq:kurtosis} in which $N$
is replaced by $N_{eff}$ and, instead of Eq.~\ref{eq:kurtosisg2},
$g_2$ is determined from
\begin{equation}
g_2 = \sum_{i=0}^{N_b-1} b_i \ \frac{\sum_{i=0}^{N_b-1} b_i (x_i - \mu)^4}{\left( \sum_{i=0}^{N_b-1} b_i (x_i - \mu)^2 \right)^2} - 3.
\end{equation}
\cname{arrayQuantiles1D} (header file ``npstat/stat/arrayStats.hh'')
--- This function treats
one-dimensional arrays as histograms and determines the values
of the quantile function for a~given set of cumulative density values
(after normalizing the histogram area to 1). Each bin is considered
to have uniform probability density between its edges. If you need
to perform this type of calculation more than once with the same
array, it will be more efficient to construct a \cname{BinnedDensity1D}
object and then to use its \cname{quantile} method instead of calling
the \cname{arrayQuantiles1D} function multiple times.
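A sketch of the piecewise-uniform quantile calculation in Python (illustration of the idea; the NPStat function signature differs):

```python
def binned_quantile(bins, xmin, xmax, q):
    """Quantile of a 1-d histogram whose probability density is
    uniform inside each bin. bins: non-negative contents of
    equal-width bins on [xmin, xmax]; q: cumulative level in [0, 1]."""
    total = float(sum(bins))
    width = (xmax - xmin) / len(bins)
    target = q * total
    cum = 0.0
    for i, b in enumerate(bins):
        if b > 0.0 and cum + b >= target:
            # interpolate linearly inside the bin (uniform density)
            frac = (target - cum) / b
            return xmin + (i + frac) * width
        cum += b
    return xmax
```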
\cname{arrayEntropy} (header file ``npstat/stat/arrayStats.hh'')
--- This function calculates
\begin{equation}
\label{eq:entropy}
H = -\frac{1}{N_b} \sum_{i=0}^{N_b-1} b_i \ln b_i
\end{equation}
This formula is appropriate for arrays that tabulate probability density
values on uniform grids. To convert the result into the actual entropy
of the density, multiply it by the volume of the distribution support.
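A minimal Python transcription of Eq.~\ref{eq:entropy} (illustration only; the convention $0 \ln 0 = 0$ is assumed):

```python
import math

def array_entropy(b):
    """H = -(1/Nb) * sum b_i ln b_i for density values tabulated
    on a uniform grid; zero entries contribute nothing."""
    nb = len(b)
    return -sum(v * math.log(v) for v in b if v > 0.0) / nb
```

For a density tabulated on a unit-volume support, a uniform density gives $H = 0$, and concentrating the same probability on half of the support gives $H = -\ln 2$, the expected differential entropy.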
\subsection{ArrayND Projections}
The following set of classes works with \cname{ArrayND} class
and assists in making array ``projections''.
Projections reduce array dimensionality
by calculating certain array properties over projected indices.
For example, one might want to create a one-dimensional array, $a$,
from a three-dimensional array, $b$, by summing all array elements
with the given second index: $a_j = \sum_{i, k} b_{ijk}$ (this means
that the first and the third indices of the array are ``projected''). This
and other similar operations can be performed by the \cname{project}
method of the \cname{ArrayND} class. What is done with the
array elements when the projection is performed can be specified
by the ``projector'' argument of the \cname{project} method. In particular,
this argument can be an object which performs some statistical
analysis of the projected elements.
\cname{ArrayMaxProjector} (header file ``npstat/stat/ArrayProjectors.hh'')
--- For each value of the array indices which are not projected,
finds the maximum array element for all possible values of
projected indices.
\cname{ArrayMinProjector} (header file ``npstat/stat/ArrayProjectors.hh'')
--- Finds the minimum array element for all possible values of
projected indices.
\cname{ArraySumProjector} (header file ``npstat/stat/ArrayProjectors.hh'')
--- Sums all array values in the iteration over projected indices.
\cname{ArrayMeanProjector} (header file ``npstat/stat/ArrayProjectors.hh'')
--- Calculates the mean array value over projected indices.
\cname{ArrayMedianProjector} (header file ``npstat/stat/ArrayProjectors.hh'')
--- Calculates the median array value over projected indices.
\cname{ArrayStdevProjector} (header file ``npstat/stat/ArrayProjectors.hh'')
--- Calculates the standard deviation of array values encountered
during the iteration over projected indices.
\cname{ArrayRangeProjector} (header file ``npstat/stat/ArrayProjectors.hh'')
--- Calculates the ``range'' of array values over projected indices.
``Range'' is defined here as the difference between 75\supers{th} and
25\supers{th} percentiles of the sample values divided by the
distance between 75\supers{th} and 25\supers{th} percentiles of
the Gaussian distribution with $\sigma = 1$. Therefore, ``range'' can
be used as a robust estimate of the standard deviation.
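The scaling can be illustrated with a short Python sketch (the percentile convention below is a simple linear-interpolation choice for illustration; NPStat's exact convention may differ):

```python
from statistics import NormalDist

def robust_sigma(sample):
    """Interquartile range of the sample divided by the interquartile
    distance of the N(0, 1) Gaussian (about 1.349): a robust
    stand-in for the standard deviation."""
    xs = sorted(sample)
    n = len(xs)

    def quantile(p):
        # linear-interpolation percentile (hypothetical convention)
        t = p * (n - 1)
        i = int(t)
        if i + 1 >= n:
            return xs[i]
        f = t - i
        return xs[i] * (1.0 - f) + xs[i + 1] * f

    gauss_iqr = 2.0 * NormalDist().inv_cdf(0.75)  # ~1.34898
    return (quantile(0.75) - quantile(0.25)) / gauss_iqr
```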
All ``projectors'' are derived either from \cname{AbsArrayProjector} class
in case the calculated quantity depends on the array indices
(header file ``npstat/nm/AbsArrayProjector.hh'')
or from \cname{AbsVisitor} class in case it is sufficient
to know just the element values (header file ``npstat/nm/AbsVisitor.hh'').
Naturally, the user
can supply his/her own projector implementations derived
from these base classes for use
with the \cname{project} method of the \cname{ArrayND} class.
\section{Statistical Distributions}
\label{sec:distros}
A number of univariate and multivariate statistical distributions
are supported by the NPStat package. All implementations of
univariate continuous distributions share a common interface and can be
used interchangeably in a variety of statistical algorithms.
A similar approach has been adopted for univariate discrete and
multivariate continuous distributions.
\subsection{Univariate Continuous Distributions}
All classes which represent univariate continuous distributions
inherit from the \cname{AbsDistribution1D} abstract base class. These classes
must implement methods \cname{density} (probability density function),
\cname{cdf} (cumulative distribution function), \cname{exceedance}
(exceedance probability is just 1 $-$ cdf, but direct application
of this formula is often
unacceptable in numerical calculations due to subtractive cancellation),
and \cname{quantile} (the quantile function, inverse of cdf). For
a large number of statistical distributions, the probability density
function looks like $\frac{1}{\sigma}\, p\left( \frac{x - \mu}{\sigma} \right)$, where
$\mu$ and $\sigma$ are the distribution location and scale parameters, respectively.
Distributions of this kind should inherit from the base class
\cname{AbsScalableDistribution1D} which handles proper
scaling and shifting of the argument (this base class itself inherits
from \cname{AbsDistribution1D}, and it is declared in the same
header file ``npstat/stat/AbsDistribution1D.hh'').
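The need for a dedicated \cname{exceedance} method is easy to demonstrate; a small Python illustration for the Gaussian case (standard library only, not NPStat code):

```python
import math

def gauss_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gauss_exceedance(x):
    """Exceedance computed directly via erfc, avoiding the
    subtractive cancellation in 1 - cdf(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))
```

Far in the tail, say at $x = 10$, the expression $1 - \mbox{cdf}(x)$ evaluates to exactly zero in double precision, while the direct exceedance computation still returns the correct tiny probability (about $7.6 \times 10^{-24}$).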
Some of the standard univariate continuous distributions implemented
in NPStat are listed in Table~\ref{table:distros1d}. A few special
distributions less frequently encountered in the statistical
literature are described in more detail in the following subsections.
Of course, user-developed classes inheriting from \cname{AbsDistribution1D}
or \cname{AbsScalableDistribution1D} can also be employed with all
NPStat functions and classes that take an instance of
\cname{AbsDistribution1D} as one of the parameters.
\begin{table}[ht!]
\caption{Continuous univariate distributions included in NPStat.
$P_n(x)$ are the Legendre polynomials.
Parameters $\mu$ and $\sigma$ are not shown for scalable distributions.
When not given explicitly, the normalization constant $\cal{N}$
ensures that
$\int_{-\infty}^{\infty} p(x) dx = 1$. Most of the classes
listed in this table are declared in the header file
``npstat/stat/Distributions1D.hh''. If the distribution
is not declared in that header then it has a dedicated header
with the same name, {\it e.g.}, ``npstat/stat/TruncatedDistribution1D.hh''
for \cname{TruncatedDistribution1D}.}
\label{table:distros1d}
\begin{center}
\noindent\begin{tabular}{|c|c|c|} \hline
Class Name & $p(x)$ & Scalable? \\ \hline\hline
\cname{Uniform1D} & $I(0 \le x \le 1)$ & yes \\ \hline
\cname{Exponential1D} & $e^{-x} I(x \ge 0)$ & yes \\ \hline
\cname{Quadratic1D} & $(1 + a P_1(2 x - 1) + b P_2(2 x - 1)) \,I(0 \le x \le 1)$ & yes \\ \hline
\cname{LogQuadratic1D} & ${\cal{N}} \exp (a P_1(2 x - 1) + b P_2(2 x - 1)) \,I(0 \le x \le 1)$ & yes \\ \hline
\cname{Gauss1D} & $\frac{1}{\sqrt{2 \pi}} e^{-x^2/2}$ & yes \\ \hline
\cname{GaussianMixture1D} & $\frac{1}{\sqrt{2 \pi}} \sum_i \frac{w_i}{\sigma_i} e^{-(x-\mu_i)^2/(2 \sigma_i^2)}, \ \mbox{where} \ \sum_i w_i = 1$ & yes \\ \hline
\cname{TruncatedGauss1D} & $\frac{\cal{N}}{\sqrt{2 \pi}} e^{-x^2/2} \, I(-n_{\sigma} \le x \le n_{\sigma})$ & yes \\ \hline
\cname{SymmetricBeta1D} & ${\cal{N}} \, (1 - x^2)^p \, I(-1 < x < 1)$ & yes \\ \hline
\cname{Beta1D} & ${\cal{N}} \, x^{\alpha - 1} (1 - x)^{\beta - 1} \, I(0 < x < 1)$ & yes \\ \hline
\cname{Huber1D} & ${\cal{N}} \, [e^{-x^2/2} I(|x| \le a) + e^{\, a (a/2 - |x|)} (1 - I(|x| \le a))] $ & yes \\ \hline
\cname{Cauchy1D} & $\pi^{-1} (1 + x^2)^{-1}$ & yes \\ \hline
\cname{StudentsT1D} & ${\cal{N}} \, (1 + x^2/N_{dof})^{-(N_{dof} + 1)/2}$ & yes \\ \hline
\cname{Tabulated1D} & \begin{minipage}{0.52\linewidth}
\vskip1mm
Defined by a table of equidistant values on
the [0, 1] interval, interpolated by
a~polynomial (up to cubic). The first table
point is at $x = 0$ and the last is at $x = 1$.
Normalization is computed automatically.
\end{minipage} & yes \\ \hline
\cname{BinnedDensity1D} & \begin{minipage}{0.52\linewidth}
\vskip1mm
Defined by a table of $N$ equidistant values
on the [0, 1] interval with optional linear
interpolation. The first table point is at
$x = 1/(2 N)$ and the last is at $x = 1 - 1/(2 N)$.
Useful for converting 1-d histograms into distributions.
\end{minipage} & yes \\ \hline
\cname{QuantileTable1D} & \begin{minipage}{0.52\linewidth}
\vskip1mm
Defined by a table of $N$ equidistant quantile function values
on the [0, 1] interval with linear interpolation between these values
(so that density looks like a histogram with equal area bins).
The first table point is at
$x = 1/(2 N)$ and the last is at $x = 1 - 1/(2 N)$.
Useful for converting data samples into distributions
by sampling empirical quantiles.
\end{minipage} & yes \\ \hline
\cname{LeftCensoredDistribution} & $f \, p\sub{other}(x) + (1 - f) \, \delta(x - x_{-\infty})$ & no \\ \hline
\cname{RightCensoredDistribution} & $f \, p\sub{other}(x) + (1 - f) \, \delta(x - x_{+\infty})$ & no \\ \hline
\cname{TruncatedDistribution1D} & ${\cal{N}} \, p\sub{other}(x) \, I(x\sub{min} \le x \le x\sub{max})$ & no \\ \hline
\end{tabular}
\end{center}
\end{table}
\subsection{Johnson Curves}
The density functions of the Johnson
distributions~\cite{ref:johnson, ref:johnsonbook, ref:hahnshap}
are defined as follows:
\begin{equation}
S_{U}\ \mbox{(unbounded)}:\ \ p(x) = \frac{\delta}{\lambda \sqrt{2 \pi \left(1 +
\left(\frac{x - \xi}{\lambda}\right)^{2}\right)}} \,e^{-\frac{1}{2}
\left(\gamma + \delta\,\mbox{\scriptsize sinh}^{-1}\left(\frac{x-\xi}
{\lambda}\right)\right)^{2}}
\label{eq:johnsu}
\end{equation}
\begin{equation}
S_{B}\ \mbox{(bounded)}:\ \ \ p(x) = \frac{\delta}{\lambda \sqrt{2 \pi}
\left(\frac{x - \xi}{\lambda}\right)\left(1 - \frac{x - \xi}{\lambda}\right)}
\,e^{-\frac{1}{2}\left( \gamma + \delta \log \left( \frac{x-\xi}
{\xi + \lambda - x}\right) \right)^{2}} I(\xi < x < \xi + \lambda)
\label{eq:johnsb}
\end{equation}
They are related to the normal distribution, $N(\mu, \sigma)$,
by simple variable
transformations. Variable $z$ distributed according to $N(0, 1)$
can be
obtained from Johnson's variates $x$ by transformation
$z = \gamma + \delta f\left(\frac{x-\xi}{\lambda}\right)$:
\begin{displaymath}
\begin{array}{rlll}
f(y) = & \mbox{sinh}^{-1}(y) & & S_{U}\ \mbox{curves}\\
f(y) = & \log \left(\frac{y}{1-y}\right) & (0 < y < 1) & S_{B}\ \mbox{curves}
\end{array}
\end{displaymath}
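This transform property makes the $S_U$ density easy to evaluate and to check numerically; a Python sketch (illustrative parameter values, not NPStat code) writes Eq.~\ref{eq:johnsu} as the normal density of $z$ times $dz/dx$ and verifies the normalization:

```python
import math

def johnson_su_density(x, gamma, delta, xi, lam):
    """S_U density expressed as phi(z) * dz/dx with
    z = gamma + delta * asinh((x - xi)/lam); algebraically
    identical to the explicit S_U formula above."""
    y = (x - xi) / lam
    z = gamma + delta * math.asinh(y)
    dzdx = delta / (lam * math.sqrt(1.0 + y * y))
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi) * dzdx
```

Since $z$ is standard normal by construction, the density integrates to 1 for any admissible $\gamma$, $\delta$, $\xi$, and $\lambda$.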
Both densities become arbitrarily close to the lognormal density (for which
variable $z = \gamma + \delta \log (x - \xi)$ is normally distributed)
in the limit $\gamma \rightarrow \infty$
and to the normal distribution in the limit $\delta \rightarrow \infty$
($\xi$ and $\lambda$ have to be adjusted accordingly). Johnson's
parameterization of the lognormal density is
\begin{equation}
p(x) = \frac{\delta}{\sqrt{2 \pi} (x - \xi)} e^{-\frac{1}{2} (\gamma + \delta \log (x - \xi))^2} I(x > \xi)
\end{equation}
Together with their limiting cases, Johnson's $S_{U}$ and $S_{B}$
distributions attain {\it all possible values of skewness and kurtosis.}
Unfortunately, parameters $\xi, \lambda, \gamma$, and $\delta$
of $S_{U}$ and $S_{B}$ in
Eqs.~\ref{eq:johnsu} and~\ref{eq:johnsb} have no direct
relation to each other.
Crossing the lognormal boundary requires a discontinuous change in
the parameter values which is
rather inconvenient for practical data fitting purposes. This
problem can be alleviated by reparameterizing the functions
in terms of mean $\mu$, standard deviation $\sigma$,
skewness $s$, and kurtosis $k$, so that the corresponding
curve type and the original parameters $\xi, \lambda, \gamma, \delta$
are determined numerically. Examples of Johnson's density functions
are shown in Fig.~\ref{johns_exa}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.6\textwidth]{johnson_examples.pdf}
\caption{ Black: $s = 0, ~k = 3$, Gaussian. Red: $s = -1.5, ~k = 10$, $S_{U}$.
Blue: $s = 0.5, ~ k = 2$, $S_{B}$.
For all three curves, the mean is 0 and the standard deviation is 1.}
\label{johns_exa}
\end{center}
\end{figure}
Johnson's $S_{U}$ and $S_{B}$ curves are implemented in NPStat with classes
\cname{JohnsonSu} and \cname{JohnsonSb}, respectively (header file
``npstat/stat/JohnsonCurves.hh''). Both of these
distributions are parameterized by $\mu$, $\sigma$, $s$, and $k$,
with an automatic internal conversion into $\xi, \lambda, \gamma, \delta$.
An original algorithm was developed to perform this conversion,
based in part on ideas from Refs.~\cite{ref:draper, ref:hill}.
The lognormal distribution parameterized by $\mu$, $\sigma$, and $s$
is implemented by the \cname{LogNormal} class (header file
``npstat/stat/Distributions1D.hh''). The class \cname{JohnsonSystem}
(header file ``npstat/stat/JohnsonCurves.hh'') can be used when automatic
switching between $S_{U}$, $S_{B}$,
lognormal, and Gaussian distributions is desired.
\subsection{Composite Distributions}
Composite distributions are built out of two or more
component distributions. One of these component distributions is
arbitrary while
all others must have a density supported on the interval
[0, 1]. Suppose $G_k(x), \ k = 1, 2, 3, ...$ are cumulative
distribution functions with corresponding densities $g_k(x)$
which vanish outside of the interval $[0, 1]$.
Then, if $H(x)$ is a cumulative distribution function
with density $h(x)$, $F_1(x) = G_1(H(x))$ is also a cumulative
distribution function with density $f_1(x) = h(x) \, g_1(H(x))$.
Similarly, $F_2(x) = G_2(F_1(x))$ is a cumulative distribution with density
$f_2(x) = f_1(x) g_2(F_1(x)) = h(x) \, g_1(H(x)) g_2(G_1(H(x)))$.
We can now construct $F_3(x) = G_3(F_2(x))$ and so on. This sequence can
be terminated after an arbitrary number of steps.
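A minimal numeric sketch of one composition step (Python; the Gaussian $H$ and the simple linear $g_1$ are chosen purely for illustration):

```python
import math

def h_cdf(x):
    """H(x): standard Gaussian cdf, an example base model."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def h_density(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def g1_density(u):
    """g1 on [0, 1]: a simple non-uniform density, integral 1."""
    return 0.5 + u

def g1_cdf(u):
    return 0.5 * u + 0.5 * u * u

def composite_cdf(x):
    """F1(x) = G1(H(x))."""
    return g1_cdf(h_cdf(x))

def composite_density(x):
    """f1(x) = h(x) * g1(H(x))."""
    return h_density(x) * g1_density(h_cdf(x))
```

The test below checks that $F_1$ runs from 0 to 1 and that $f_1$ is indeed its derivative.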
Note that $f_1(x) = h(x)$ in case $g_1(x)$ is a uniform
probability density. Small deviations from uniformity in $g_1(x)$
will lead to corresponding small changes in $f_1(x)$.
The resulting density model can be quite flexible, even if
the component distributions $H(x)$ and $G_1(x)$ are
simple. Therefore, data samples with complicated sets
of features can be modeled as follows: construct
an approximate model $h(x)$ first, even if it does not fit
the sample quite right.
Then transform the data points $x_i$ to the [0, 1] interval
according to $y_i = H(x_i)$. The density of
points $y_i$\footnote{This density is called ``relative density''
in the statistical literature~\cite{ref:handcock}. Note
that $h(x)$ should be selected in such a way that the ratio between
the unknown population density of the sample under study and $h(x)$
is bounded for all $x$. If it is not, the relative density
will usually be unbounded, and its subsequent representation
by polynomials or log-polynomials will not lead to a consistent
estimate. Johnson curves often work reasonably well as $h(x)$.}
can now be
fitted to another parametric distribution (including composite one)
or it can be modeled by nonparametric techniques~\cite{ref:kdetransform}.
Due to the elimination of the boundary bias, the LOrPE density
estimation method described in Section~\ref{sec:lorpe}
becomes especially useful in the latter approach.
Once the appropriate $G_1(y)$ is constructed (parametrically
or not), the resulting composite density will provide
a good fit to the original set of points $x_i$\footnote{There is,
of course, a close connection between this density modeling
approach and a number of goodness-of-fit techniques based on
the comparison of the empirical cdf
with the cdf of the fitted density. For example, using
Legendre polynomials to model $\log (g_1(x))$, as in the
\cname{LogQuadratic1D} density, directly
improves the test criterion used in the Neyman's
smooth test for goodness-of-fit~\cite{ref:thas, ref:smoothtest}.}.
The composite distributions are implemented in NPStat
with the \cname{CompositeDistribution1D} class.
One composite distribution can be used to construct
another, thus allowing for composition chains of
arbitrary length, as described at the beginning of
this subsection. Several pre-built distributions of this type
are included in the NPStat package (they are declared in the header file
``npstat/stat/CompositeDistros1D.hh''). Flexible
models potentially capable of fitting wide varieties
of univariate data samples are implemented by the \cname{JohnsonLadder}
and \cname{BinnedCompositeJohnson} classes. In both of these,
Johnson curves are used as $H(x)$. \cname{JohnsonLadder}
takes an arbitrarily long sequence of parametric \cname{LogQuadratic1D}
distributions for $G_k(x)$ while \cname{BinnedCompositeJohnson}
uses a single nonparametric \cname{BinnedDensity1D} as $G_1(x)$.
\subsection{Univariate Discrete Distributions}
Classes which represent univariate discrete distributions
inherit from the \cname{AbsDiscreteDistribution1D} abstract base class.
The interface defined by this base class differs in a number of ways
from the \cname{AbsDistribution1D} interface.
Instead of the method \cname{density}
used with arguments of type ``double'',
discrete distributions have the method \cname{probability} defined for
the arguments of type ``long int''.
The methods \cname{cdf} and \cname{exceedance}
have the same signatures as corresponding methods of continuous distributions,
but the \cname{quantile} function returns long integers. The
method \cname{random} generates long integers as well.
Discrete univariate distributions which can be trivially shifted should
inherit from the \cname{ShiftableDiscreteDistribution1D} base class which
handles the shift operation. There is, however, no operation analogous
to scaling of continuous distributions.
Univariate discrete distributions implemented
in NPStat are listed in Table~\ref{table:discrdistros1d}.
\begin{table}[ht!]
\caption{Discrete univariate distributions included in NPStat.
The location parameter is not shown explicitly for shiftable distributions.
Classes listed in this table are declared in the header file
``npstat/stat/DiscreteDistributions1D.hh''.}
\label{table:discrdistros1d}
\begin{center}
\noindent\begin{tabular}{|c|c|c|} \hline
Class Name & $p(n)$ & Shiftable? \\ \hline\hline
\cname{DiscreteTabulated1D} & \begin{minipage}{0.52\linewidth}
\vskip1mm
Defined by a table of probability values.
Normalization is computed automatically.
\end{minipage} & yes \\ \hline
\cname{Poisson1D} & $\frac{\lambda^n}{n!} e^{-\lambda}$ & no \\ \hline
\end{tabular}
\end{center}
\end{table}
\subsection{Multivariate Continuous Distributions}
All classes which represent multivariate continuous distributions
inherit from the \cname{AbsDistributionND} abstract base class.
These classes must implement methods \cname{density} (probability
density function) and \cname{unitMap} (mapping from the unit
$d$-dimensional cube, $U_d$, into the density support region).
Densities that can be shifted and scaled in each coordinate separately
should be derived from the \cname{AbsScalableDistributionND} base class
(which itself inherits from \cname{AbsDistributionND}).
Classes representing densities that look like $p({\bf x}) = \prod q(x_i)$,
where $q(x)$ is some one-dimensional density,
should be derived from the \cname{HomogeneousProductDistroND} base class
(the three base classes just mentioned
are declared in the ``npstat/stat/AbsDistributionND.hh'' header file).
Simple classes inheriting from \cname{AbsDistributionND}
are listed in Table~\ref{table:distrosnd}.
\begin{table}[ht!]
\caption{Continuous multivariate distributions included in NPStat.
These distributions are predominantly intended for use as multivariate
density estimation kernels.
Shifts and scale factors are not shown for scalable distributions.
Here, ``scalability'' means
the ability to adjust the shift and scale parameters in each
dimension, not the complete bandwidth matrix.
When not given explicitly, the normalization constant $\cal{N}$
ensures that
$\int p({\bf x}) d{\bf x} = 1$. All of these distributions
are declared in the header file ``npstat/stat/DistributionsND.hh''
with exception of \cname{ScalableGaussND} which has its own header.}
\label{table:distrosnd}
\begin{center}
\noindent\begin{tabular}{|c|c|c|} \hline
Class Name & $p({\bf x})$ & Scalable? \\ \hline\hline
\cname{ProductDistributionND} & $\prod_{i=1}^{d} p_i(x_i)$ &
\begin{minipage}{0.13\linewidth}
depends on components
\end{minipage} \\ \hline
\cname{UniformND} & $\prod_{i=1}^{d} I(0 \le x_i \le 1)$ & yes \\ \hline
\cname{ScalableGaussND} & $(2 \pi)^{-d/2} e^{-|{\bf x}|^2/2}$ & yes \\ \hline
\cname{ProductSymmetricBetaND} & ${\cal{N}} \prod_{i=1}^{d} (1 - x_i^2)^p I(-1 < x_i < 1)$ & yes \\ \hline
\cname{ScalableSymmetricBetaND} & ${\cal{N}} (1 - |{\bf x}|^2)^p I(|{\bf x}| < 1)$ & yes \\ \hline
\cname{ScalableHuberND} & ${\cal{N}} \, [e^{-|{\bf x}|^2/2} I(|{\bf x}| \le a) + e^{\, a (a/2 - |{\bf x}|)} (1 - I(|{\bf x}| \le a))]$ & yes \\ \hline
\cname{RadialProfileND} & \begin{minipage}{0.52\linewidth}
\vskip1mm
Arbitrary centrally-symmetric density.
Defined by its radial profile:
a table of equidistant values on
the [0, 1] interval, interpolated by
a~polynomial (up to cubic). The first table
point is at $|{\bf x}| = 0$ and the last is at
$|{\bf x}| = 1$. For $|{\bf x}| > 1$ the density is 0.
Normalization is computed automatically.
\end{minipage} & yes \\ \hline
\cname{BinnedDensityND} & \begin{minipage}{0.52\linewidth}
\vskip1mm
Defined by a table of values
on $U_d$, equidistant in each dimension,
with optional multilinear
interpolation. In each dimension,
the first table point is at
$x_i = 1/(2 N_i)$ and the last is at
$x_i = 1 - 1/(2 N_i)$.
Useful for converting multivariate
histograms into distributions.
\end{minipage}
& yes \\ \hline
\end{tabular}
\end{center}
\end{table}
\subsection{Copulas}
For any continuous multivariate density $p({\bf x}|{\bf a})$
in a $d$-dimensional space $X$ of random variables ${\bf x} \in X$
depending on a vector of parameters ${\bf a}$,
we define $d$ marginal densities $p_i(x_i|{\bf a})$, $i = 1, \ldots, d$ by
$$
p_i(x_i|{\bf a}) \equiv \int p(x_1, \ldots, x_d |{\bf a}) \mathop{\prod_{j=1}^d}_{j \ne i} d x_j
$$
with their corresponding cumulative distribution functions
$$
F_i(x_i|{\bf a}) \equiv \int_{-\infty}^{x_i} p_i(\tau|{\bf a}) d \tau .
$$
For each point ${\bf x} \in X$, there is a corresponding
point ${\bf y}$ in a unit $d$-dimensional
cube $U_d$ such that $y_i(x_i) \equiv F_i(x_i|{\bf a})$, $i = 1, \ldots, d$. The {\it copula density}
is defined on $U_d$ by
\begin{equation}
c({\bf y}({\bf x})|{\bf a}) \equiv \frac{p({\bf x}|{\bf a})}{\prod_{i=1}^{d} p_i(x_i|{\bf a})}.
\label{eq:copuladef}
\end{equation}
The copula density (as well as the corresponding multivariate
distribution function $C({\bf y}|{\bf a})$, called simply the {\it copula})
contains all information
about the mutual dependence of the individual variables $x_i$.
It can be shown that all copula marginals are uniform and, conversely,
that any distribution on $U_d$ whose marginals are uniform
is a copula~\cite{ref:copulabook}.
Naturally, when the copula and the marginals of some multivariate density
are known, the density itself is expressed by
\begin{equation}
p({\bf x}|{\bf a}) = c({\bf y}({\bf x})|{\bf a}) \prod_{i=1}^{d} p_i(x_i|{\bf a}).
\end{equation}
Note that $c({\bf y}|{\bf a}) = 1$ for all ${\bf y}$ iff all $x_i$
are independent and the density $p({\bf x}|{\bf a})$ is fully
factorizable.
The NPStat package allows its users to model multivariate continuous
distributions using a copula and marginals with the aid of
the \cname{CompositeDistributionND} class. An object of this class is
normally constructed out of a user-provided copula and marginals.
A \cname{CompositeDistributionND} object can also be constructed
from histogram bins. Several standard copulas are
implemented: Gaussian, Student's-$t$, and
Farlie-Gumbel-Morgenstern~\cite{ref:copulabook}.
The corresponding class names are \cname{GaussianCopula},
\cname{TCopula}, and \cname{FGMCopula} (these classes are
declared in the header file ``npstat/stat/Copulas.hh'').
Empirical multivariate copula densities can be constructed
as follows. Assume that no two points of the dataset share
the same value in any coordinate. We can sort all
$N$ elements of the data set in increasing order in each coordinate
separately and assign to each data point $i$ a multi-index
$\{m_{i0}, ..., m_{i,d-1}\}$. For each dimension $k$, $m_{ik}$
is the rank of point $i$ in the sequence ordered
by the coordinate in dimension $k$, so that
$0 \le m_{ik} < N$. The empirical copula density, $\mbox{ECD}({\bf x})$,
is then defined by
\begin{equation}
\mbox{ECD}({\bf x}) = \frac{1}{N} \sum_{i=0}^{N-1} \prod_{k=0}^{d-1} \delta\left(x_k - \frac{m_{ik} + 1/2}{N}\right),
\end{equation}
where $\delta(x)$ is the Dirac delta function. To get a better idea about
locations of these delta functions, think of
an $N \times N \times ... \times N$ uniform grid in $d$ dimensions on
which $N$ points are placed in such a way that no two
points share the same coordinate in any of the dimensions.
In two dimensions, this is like the placement of chess pieces
in the eight queens puzzle~\cite{ref:eightqueens}
which is simplified to use rooks instead of queens~\cite{ref:eightrooks}.
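For illustration, the rank construction above can be sketched in a few lines (a generic Python sketch of the mathematics, not the NPStat C++ API; all names in the snippet are hypothetical):

```python
import numpy as np

def empirical_copula_points(data):
    """Map each point of an (N, d) sample, assumed to have no ties in
    any coordinate, to its rank-based position (m + 1/2)/N inside the
    unit cube, i.e., to the delta-function locations of the ECD."""
    data = np.asarray(data, dtype=float)
    n = data.shape[0]
    # argsort of argsort converts each column into ranks 0 .. N-1
    ranks = np.argsort(np.argsort(data, axis=0), axis=0)
    return (ranks + 0.5) / n

pts = empirical_copula_points([[1.0, 10.0], [3.0, -2.0], [2.0, 5.0]])
```

Each column of the result is a permutation of the grid positions $(m + 1/2)/N$, which is exactly the ``rook placement'' picture described above.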
In the NPStat package, empirical copula densities can be approximated by
histograms defined on $U_d$. Construction of empirical copula histograms
can be performed by functions \cname{empiricalCopulaHisto}
(header file ``npstat/stat/empiricalCopulaHisto.hh'') and
\cname{empiricalCopulaDensity}
(header file ``npstat/stat/empiricalCopula.hh''). Integrated empirical copulas can be
constructed on uniform grids by the \cname{calculateEmpiricalCopula}
function (header ``npstat/stat/empiricalCopula.hh'').
Spearman's rank correlation coefficient, $\rho$, can
be estimated from two-dimensional empirical copulas using functions
\cname{spearmansRhoFromCopula} and \cname{spearmansRhoFromCopulaDensity}
declared in the header file ``npstat/stat/spearmansRho.hh''.
These functions evaluate the following integrals numerically:
\begin{align}
\rho &= 12 \int_0^1 \int_0^1 C({\bf y}|{\bf a}) dy_0 dy_1 - 3 & \mbox{\ } & \mbox{used by \textsf{spearmansRhoFromCopula}} \\
\rho &= 12 \int_0^1 \int_0^1 c({\bf y}|{\bf a}) y_0 y_1 dy_0 dy_1 - 3 & \mbox{\ } & \mbox{used by \textsf{spearmansRhoFromCopulaDensity}}
\end{align}
While these two formulas are mathematically equivalent~\cite{ref:copulabook},
the approximations used
in numerical integration and differentiation will usually lead to slightly
different results returned by these functions for a given copula and its
corresponding density.
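For example, the second of the two integrals can be approximated by a midpoint rule for a copula density tabulated on a grid of cell centers (an illustrative Python sketch, not the NPStat implementation):

```python
import numpy as np

def spearmans_rho_from_density(c):
    """Midpoint-rule estimate of rho = 12 * integral(c(y0, y1) y0 y1) - 3
    for a copula density tabulated on an n0 x n1 grid of cell centers."""
    c = np.asarray(c, dtype=float)
    n0, n1 = c.shape
    y0 = (np.arange(n0) + 0.5) / n0
    y1 = (np.arange(n1) + 0.5) / n1
    integral = np.sum(c * np.outer(y0, y1)) / (n0 * n1)
    return 12.0 * integral - 3.0

rho_indep = spearmans_rho_from_density(np.ones((64, 64)))  # 0 up to rounding
rho_diag = spearmans_rho_from_density(64.0 * np.eye(64))   # close to 1
```

The independence copula density ($c \equiv 1$) yields $\rho = 0$, while a density concentrated on the diagonal yields $\rho$ close to 1.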
Kendall's rank correlation coefficient, $\tau$,
can be estimated from the empirical copulas using the
function \cname{kendallsTauFromCopula} declared in
the header ``npstat/stat/kendallsTau.hh''.
This function evaluates the following formula numerically:
\begin{equation}
\tau = 4 \int_0^1 \int_0^1 C({\bf y}|{\bf a}) c({\bf y}|{\bf a}) dy_0 dy_1 - 1.
\end{equation}
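A sketch of this estimate for a density tabulated on a grid of cell centers, with the copula $C$ recovered from the density by a cumulative midpoint sum (again an illustration rather than the NPStat code):

```python
import numpy as np

def kendalls_tau_from_density(c):
    """tau = 4 * integral(C(y) c(y)) dy - 1 on an n0 x n1 grid of cell
    centers. C at a cell center counts the cells fully below and to
    the left, half of the current row/column segments, and a quarter
    of the current cell."""
    c = np.asarray(c, dtype=float)
    n0, n1 = c.shape
    cell = 1.0 / (n0 * n1)
    S = np.zeros((n0 + 1, n1 + 1))
    S[1:, 1:] = np.cumsum(np.cumsum(c, axis=0), axis=1)
    full = S[:-1, :-1]               # cells with k < i and l < j
    row = S[1:, :-1] - S[:-1, :-1]   # cells with k = i and l < j
    col = S[:-1, 1:] - S[:-1, :-1]   # cells with k < i and l = j
    C = cell * (full + 0.5 * row + 0.5 * col + 0.25 * c)
    return 4.0 * np.sum(C * c) * cell - 1.0

tau_indep = kendalls_tau_from_density(np.ones((32, 32)))   # 0 up to rounding
tau_diag = kendalls_tau_from_density(32.0 * np.eye(32))    # 1 - 1/32 here
```

For the independence copula this gives $\tau = 0$, and for a density concentrated on the diagonal of a $n \times n$ grid it gives $1 - 1/n$, approaching the comonotone value $\tau = 1$ as the grid is refined.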
The mutual information between variables can be estimated from
copula densities represented on uniform grids according
to Eq.~\ref{eq:entropy}\footnote{Mutual information is
simply the negative of the copula entropy~\cite{ref:copulaentropy}.}.
Decomposition of multivariate densities into the marginals and the
copula is useful not only for a subsequent analysis of mutual
dependencies of the variates but also for implementing nonparametric
density interpolation, as described in the next section.
\section{Nonparametric Interpolation of Densities}
\label{sec:densityinterpolation}
There are numerous problems in High Energy Physics in which construction of some
probability density requires extensive simulations and, due to the CPU
time limitations, can only be performed for a~limited number of
parameter settings. This is typical, for example, for ``mass
templates'' which depend on such parameters as the pole mass of the
particle under study, sample background fraction, detector jet energy
scale, {\it etc}. It is often desirable to have the capability to
evaluate such a density for arbitrary parameter values.
It is sometimes possible to address this problem by postulating
an explicit parametric
model and fitting that model to the sets of simulated distributions.
However, complex dependence of the distribution shapes on the parameter
values often renders this approach infeasible. Then it becomes
necessary to interpolate the densities without postulating a concrete
model.
\subsection{Interpolation of Univariate Densities using Quantile Functions}
\label{sec:interpolatebyquant}
As shown in~\cite{ref:read}, a general
interpolation of one-dimensional distributions which
leads to very natural-looking results
can be achieved by
interpolating the {\it quantile function} (defined as the
inverse of the cumulative distribution function). While Ref.~\cite{ref:read}
considers linear interpolation in a one-dimensional parameter space,
a similar weighted-average interpolation can easily be constructed in
multivariate parameter settings and with higher-order interpolation
schemes. In general, the interpolated quantile function for
an~arbitrary parameter value ${\bf a}$ is expressed by
\begin{equation}
\label{eq:quantileinterp1d}
q(y|{\bf a}) = \sum_{j=1}^m w_j q(y|{\bf a}_j),
\end{equation}
where the weighted quantile functions are summed at the $m$ ``nearby''
parameter settings ${\bf a}_j$ for which the distribution was
explicitly constructed. The weights $w_j$ are normalized by
$\sum_{j=1}^m w_j = 1$. Their precise values depend on the location
of ${\bf a}$ w.r.t. nearby ${\bf a}_j$ and on the interpolation scheme used.
Simple high-order interpolation schemes can be constructed
in one-dimensional parameter space by using Lagrange interpolating
polynomials
to determine the weights\footnote{To determine the interpolated value of a function using
Lagrange interpolating polynomials, weights are assigned to function
values at a set of nearby points according to a rule
published by Lagrange in 1795. The weights depend only on the
abscissae of the interpolated points and on the coordinate
at which the function value is evaluated but not on the
interpolated function values.}. In $r$-dimensional parameter
space, rectangular grids can be utilized with multilinear or
multicubic interpolation. For
example, in the case of multilinear
interpolation, the weights can be calculated as follows:
\begin{thinlist}
\item Find the hyperrectangular parameter grid
cell inside which the value of ${\bf a}$ falls.
\item Shift and scale this cell so that it becomes a hypercube
with diagonal vertices at $(0, 0, ..., 0)$ and $(1, 1, ..., 1)$.
\item Let's designate the shifted and scaled value of ${\bf a}$ by ${\bf z}$,
with components $(z_1, z_2, ..., z_r)$.
If we were to interpolate towards ${\bf z}$
a scalar function $f$ defined at the
vertices with coordinates $(c_1, c_2, \ldots, c_r)$,
where all $c_k$ values are now either 0 or 1,
then we would use the formula
\begin{equation}
\label{eq:multilinear}
f(z_1, z_2, ..., z_r) = \sum_{\substack{
c_1 \in \,\{0, 1\}\\
c_2 \in \,\{0, 1\}\\
\vdots \\
c_r \in \,\{0, 1\}\\
}} f(c_1, c_2, ..., c_r) \prod_{k=1}^r z_k^{c_k} (1 - z_k)^{1 - c_k}
\end{equation}
which is obviously linear in every $z_k$ and has
correct function values when evaluated at the vertices.
In complete analogy, we define the weights for the quantile functions
constructed at the vertices with coordinates $(c_1, c_2, \ldots, c_r)$ to be
$w(c_1, c_2, \ldots, c_r) = \prod_{k=1}^r z_k^{c_k} (1 - z_k)^{1 - c_k}$.
Naturally, there are $m = 2^r$ weights total.
\end{thinlist}
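The weight assignment just described can be coded directly (an illustrative Python sketch; the function name is hypothetical):

```python
import itertools
import numpy as np

def multilinear_weights(z):
    """Return the 2^r weights w(c) = prod_k z_k^{c_k} (1 - z_k)^{1 - c_k},
    one per vertex c of the unit hypercube containing the point z."""
    z = np.asarray(z, dtype=float)
    return {c: float(np.prod(np.where(np.array(c) == 1, z, 1.0 - z)))
            for c in itertools.product((0, 1), repeat=len(z))}

w = multilinear_weights([0.25, 0.5])
# w[(0, 0)] = 0.75 * 0.5 = 0.375, w[(1, 1)] = 0.25 * 0.5 = 0.125,
# and the four weights sum to 1
```

The weights are non-negative, sum to 1, and reduce to the vertex indicator when ${\bf z}$ coincides with a vertex.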
The quantile interpolation of univariate distributions is implemented
in the NPStat package by the \cname{InterpolatedDistribution1D} class.
A collection of quantile functions is assembled incrementally together
with their weights (the formula for calculating the weights must be supplied
by the user).
For calculating the interpolated density at point $x$,
the equation $x = q(y|{\bf a})$,
with $q(y|{\bf a})$ from Eq.~\ref{eq:quantileinterp1d},
is solved numerically for $y$. The density is then evaluated by
numerically differentiating the interpolated quantile function:
$p(x|{\bf a}) = \left( \frac{\partial q(y|{\bf a})}{\partial y} \right)^{-1}$.
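As a concrete illustration of the whole procedure (using the Python standard library rather than \cname{InterpolatedDistribution1D}): averaging the quantile functions of $N(0,1)$ and $N(4,1)$ with equal weights reproduces $N(2,1)$ exactly, so the interpolated density at $x = 2$ should be the normal density at its mode, $1/\sqrt{2\pi} \approx 0.3989$.

```python
from statistics import NormalDist

d1, d2 = NormalDist(0.0, 1.0), NormalDist(4.0, 1.0)
w1, w2 = 0.5, 0.5          # interpolation weights, summing to 1

def q(y):
    """Weighted average of the two quantile functions."""
    return w1 * d1.inv_cdf(y) + w2 * d2.inv_cdf(y)

def density(x, eps=1.0e-6):
    """Solve x = q(y) by bisection, then return 1 / q'(y)."""
    lo, hi = eps, 1.0 - eps
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if q(mid) < x else (lo, mid)
    y = 0.5 * (lo + hi)
    return 2.0 * eps / (q(y + eps) - q(y - eps))

p = density(2.0)           # about 0.3989
```

The bisection step solves $x = q(y)$ and the central difference approximates $\partial q/\partial y$, mirroring the numerical procedure described above.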
\subsection{Interpolation of Multivariate Densities}
\label{sec:interpolatemultivar}
The procedure described in the previous section can be
generalized to interpolate multivariate distributions.
Let $p({\bf x}|{\bf a})$ be a multivariate probability density
in a $d$-dimensional space $X$ of random variables ${\bf x} \in X$.
${\bf a}$ is a vector of parameters, and
$\int_{X} p({\bf x}|{\bf a}) d {\bf x} = 1$
for every ${\bf a}$\footnote{Compared to interpolation of
arbitrary functions, preserving this normalization is one of the
major complications in interpolating multivariate densities.}.
For a~``well-behaved'' density (Riemann-integrable, {\it etc}),
it is always possible to construct a one-to-one
mapping from the space $X$
into the unit $d$-dimensional cube, $U_d$, using a sequence of one-dimensional
conditional cumulative distribution functions. These functions are defined
as follows:
$$
\begin{array}{c}
F_{1}(x_1|x_2, x_3, \ldots, x_d, {\bf a}) \equiv \int_{-\infty}^{x_1} p(z_1, x_2, x_3, \ldots, x_d | {\bf a}) d z_1 /
\int_{-\infty}^{\infty} p(z_1, x_2, x_3, \ldots, x_d | {\bf a}) d z_1
\\
F_{2}(x_2 | x_3, \ldots, x_d, {\bf a}) \equiv \int_{-\infty}^{x_2} F_{1}(\infty | z_2, x_3, \ldots, x_d, {\bf a}) d z_2 /
\int_{-\infty}^{\infty} F_{1}(\infty | z_2, x_3, \ldots, x_d, {\bf a}) d z_2
\\
F_{3}(x_3 | \ldots, x_d, {\bf a}) \equiv \int_{-\infty}^{x_3} F_{2}(\infty | z_3, \ldots, x_d, {\bf a}) d z_3 /
\int_{-\infty}^{\infty} F_{2}(\infty | z_3, \ldots, x_d, {\bf a}) d z_3
\\
.\ .\ .\ .\ .\ .\ .\ .\ .\ .\ .\ .\ .\ .\ .\ .\ .\ .\ .\ .\ .\ .\ .\ .\ .\ .\ .\ .\ .\ .\ .\ .\ .\ .\ .\ .\ .\ .\ .\ .\ .\ .\ .\ .\ .\ .\ .
\\
F_{d}(x_d | {\bf a}) \equiv \int_{-\infty}^{x_d} F_{d-1}(\infty | z_d, {\bf a}) d z_d /
\int_{-\infty}^{\infty} F_{d-1}(\infty | z_d, {\bf a}) d z_d
\end{array}
$$
Naturally, $F_{k}(\infty | x_{k+1}, \ldots, x_d, {\bf a})$ is just the renormalized $p({\bf x}|{\bf a})$
in which the first $k$ components of ${\bf x}$ have been integrated out ({\it i.e.,} marginalized).
The mapping from ${\bf x} \in X$ into ${\bf y} \in U_d$ is defined by $y_i = F_i(x_i | \ldots), \ i = 1, \ldots, d$. In terms of conditional cumulative distribution functions,
$$
p({\bf x}|{\bf a}) = \prod_{i=1}^{d}{\frac{\partial F_i(x_i | \ldots)}{\partial x_i}}.
$$
This method is not unique and other mappings
from $X$ to $U_d$ are possible. What makes this particular construction useful
is that the inverse mapping, from $U_d$ into $X$, can be
easily constructed as well: we simply solve the equations
\begin{equation}
\label{eq:invertcondcdf}
y_i = F_i(x_i | \ldots)
\end{equation}
in the reverse order, starting from dimension $d$ and going back
to dimension $1$. Each equation in this sequence {\it has only one unknown}
and therefore it can be efficiently solved numerically
(and sometimes algebraically) by a variety
of standard root finding techniques. The solutions of these equations,
$x_i = q_i(y_i | x_{i+1}, \ldots, x_d, {\bf a}) \equiv F_i^{-1}(y_i | \ldots)$, are
the {\it conditional quantile functions} (CQFs). Note that
\begin{equation}
\label{eq:invderiv}
p({\bf x}|{\bf a}) = \left( \prod_{i=1}^{d}{\frac{\partial q_i(y_i | \ldots)}{\partial y_i}} \right)^{-1}.
\end{equation}
Now, if the CQFs are known for some parameter values
${\bf a}_1$ and ${\bf a}_2$,
interpolation towards ${\bf a} = (1 - \lambda) {\bf a}_1 + \lambda {\bf a}_2$ is made
in the same manner as described in Section~\ref{sec:interpolatebyquant}:
$x_i = (1 - \lambda) q_i(y_i | x_{i+1}, \ldots, x_d, {\bf a}_1) + \lambda q_i(y_i | x_{i+1}, \ldots, x_d, {\bf a}_2)$.
If the CQFs
are known on a~grid in the parameter space, we have to use
an appropriate interpolation technique (multilinear, multicubic, {\it etc})
in that space in order to assign
the weights to the CQFs at the nearby grid points. In general,
the interpolated CQFs are defined by
a weighted average
\begin{equation}
\label{eq:interpq}
q_i(y_i | x_{i+1}, \ldots, x_d, {\bf a}) = \sum_{j=1}^m w_j q_i(y_i | x_{i+1}, \ldots, x_d, {\bf a}_j),
\end{equation}
where the sum is performed over $m$ nearby parameter points,
weights $w_j$ are normalized by $\sum_{j=1}^m w_j = 1$, and their
exact values depend on the parameter grid chosen and the interpolation method
used.
In effect, it is the whole mapping from ${\bf y}$ into ${\bf x}$ that gets
interpolated by this method.
The CQF interpolation results look
very natural, but the process is rather CPU-intensive:
for each ${\bf x}$ we need to solve $d$ one-dimensional nonlinear equations
\begin{equation}
\label{eq:interpsolve}
x_i = q_i(y_i | x_{i+1}, \ldots, x_d, {\bf a}), \ \ \ \ i = d, \ldots, 1
\end{equation}
in order to determine ${\bf y}$ of the interpolated mapping.
In this process, each call to evaluate $q_i(y_i | x_{i+1}, \ldots, x_d, {\bf a})$
triggers $m$ calls to evaluate $q_i(y_i | x_{i+1}, \ldots, x_d, {\bf a}_j)$.
Depending on implementation details, each of these $m$ calls
may in turn trigger root finding in an equation
like Eq.~\ref{eq:invertcondcdf}.
Once ${\bf y}$ is found for the interpolated CQFs,
the density is determined by numerical
evaluation of Eq.~\ref{eq:invderiv},
in which $q_i(y_i | \ldots)$ are given by Eq.~\ref{eq:interpq}.
In practice, finite steps of the parameter grids will cause
dependence of the interpolation results on the order in which
conditional quantiles are evaluated. If choosing some
particular order is considered undesirable and increased CPU
loads are acceptable, all $d!$ possible permutations
should be averaged.
In the NPStat package, multivariate densities whose mapping
from the multidimensional unit cube into the density support
region is implemented via CQFs return ``true'' when
their virtual function \cname{mappedByQuantiles} is called.
User-developed implementations of
multivariate densities should follow this convention as well.
In particular, mapping of
the densities represented by the \cname{BinnedDensityND} class
(lookup tables on hyperrectangular
grids with multilinear interpolation) is performed by CQFs,
as well as mapping of all fully factorizable densities.
When the fidelity of the model is less critical or when the
correlation structure of the distribution is more stable w.r.t.
parameter changes than its location and scales, much faster
multivariate density interpolation can be performed by decomposing
the density into the copula and the marginals. The support
of the copula density $c({\bf y}({\bf x})|{\bf a})$ defined
in Eq.~\ref{eq:copuladef}
is always $U_d$ and it does not depend on the
parameter ${\bf a}$. This suggests that the marginals and the copula can
be interpolated separately, using quantile function interpolation
for the marginals and the standard weighted average interpolation
for the copula.
The NPStat package uses the same data structure to perform both
CQF-based and copula-based interpolation of multivariate densities.
Multilinear interpolation is supported on a rectangular parameter
grid (not necessarily equidistant).
The distributions at the grid points are collected together
using the \cname{GridInterpolatedDistribution} template,
while method-specific
differences are isolated inside a policy class which serves as
the \cname{GridInterpolatedDistribution} template parameter.
Currently, \cname{UnitMapInterpolationND} or
\cname{CopulaInterpolationND} classes can serve as such a~parameter
which results in CQF-based or copula-based interpolation, respectively.
\section{Nonparametric Density Estimation}
\label{sec:densityestimation}
The problem of estimating a population probability density function
from a finite sample drawn from that population is ubiquitous
in data analysis practice. It is often the case that
there are no substantial reasons for choosing a particular parametric
density model, but certain assumptions,
such as continuity
of the density together with some number of its derivatives or the absence
of narrow spatial features, can still be justified. There are
a~number of approaches by which assumptions of this kind can be introduced
into the statistical model. These approaches are collectively known
as ``nonparametric
density estimation'' methods~{\cite{ref:silverman, ref:izenmann, ref:kde}}. The NPStat
package provides an efficient implementation of one
such method, kernel density estimation (KDE), together with its extension
to polynomial density models and densities with bounded support.
This extension is called ``local orthogonal polynomial expansion'' (LOrPE).
\subsection{Kernel Density Estimation (KDE)}
\label{sec:kde}
Suppose we have an i.i.d.~sample of measurements $x_i$, $i = 0, 1,
..., N-1$ from a univariate probability density $p(x)$. The empirical
probability density function (EPDF) for this sample is defined by
\begin{equation}
\mbox{EPDF}(x) = \frac{1}{N} \sum_{i=0}^{N-1} \delta (x - x_i),
\label{eq:edf}
\end{equation}
where $\delta(x)$ is the Dirac delta function. $\mbox{EPDF}(x)$
can itself be considered an estimate of $p(x)$. However, this
estimate can be substantially improved if some additional information
about $p(x)$ is available. For example, we can often assume that
$p(x)$ is continuous together with its first few derivatives or
that it can have at most a few modes. In such cases a convolution
of $\mbox{EPDF}(x)$ with a kernel function, $K(x)$, often provides
a much better estimate of the population density. $K(x)$ itself is
usually chosen to be a symmetric continuous density with a location
and scale parameter, so that the resulting estimate looks
like
\begin{equation}
\hat{p}\sub{KDE}(x|h) = \frac{1}{N h} \sum_{i=0}^{N-1} K \left(\frac{x - x_i}{h}\right).
\label{eq:kde}
\end{equation}
In the context of
density estimation, the parameter $h$ is usually referred to as
the ``bandwidth''.
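Eq.~\ref{eq:kde} translates directly into code (a ``naive'' sketch with a Gaussian kernel, evaluating the sum point by point; not the DFFT-based NPStat implementation discussed below):

```python
import numpy as np

def kde_gauss(x, sample, h):
    """Fixed-bandwidth KDE with a Gaussian kernel:
    (1/(N h)) * sum_i K((x - x_i) / h)."""
    u = (np.asarray(x, dtype=float)[..., None] - np.asarray(sample)) / h
    return np.exp(-0.5 * u * u).sum(axis=-1) / (len(sample) * h * np.sqrt(2.0 * np.pi))

p0 = float(kde_gauss(0.0, [0.0], 1.0))    # a single point returns the kernel itself
xs = np.linspace(-10.0, 10.0, 2001)
total = kde_gauss(xs, [0.0, 1.0, -0.5], 0.7).sum() * 0.01   # integrates to about 1
```

The estimate integrates to 1 by construction, since each kernel term is itself a normalized density.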
Use of the Gaussian distribution or one of the distributions from the
symmetric beta family as $K(x)$ is very common. In fact, it is so common
that beta family kernels have their own names in the density estimation
literature. These names are listed in Table~\ref{table:betakernels}.
\begin{table}[h!]
\caption{One-dimensional kernels from the symmetric beta family.
All these kernels look like
${\cal{N}} \, (1 - x^2)^p \, I(-1 < x < 1)$, where ${\cal{N}}$
is the appropriate normalization factor. In the
NPStat package, their formulae are implemented by the
\cname{SymmetricBeta1D} function.}
\label{table:betakernels}
\begin{center}
\noindent\begin{tabular}{|c|c|} \hline
Kernel name & Power $p$ \\ \hline\hline
Uniform (also called ``boxcar'') & 0 \\ \hline
Epanechnikov & 1 \\ \hline
Biweight (also called ``quartic'') & 2 \\ \hline
Triweight & 3 \\ \hline
Quadweight & 4 \\ \hline
% Quintweight & 5 \\ \hline
\end{tabular}
\end{center}
\end{table}
In the limit $N \rightarrow \infty$ and with proper choice of $h$ so that
$h \rightarrow 0$ and $N h \rightarrow \infty$,
$\hat{p}\sub{KDE}(x)$ becomes a~consistent
estimate of $p(x)$ in terms of integrated squared error (ISE):
\begin{equation}
\mbox{ISE}(h) = \int_{-\infty}^{\infty} (\hat{p}\sub{KDE}(x|h) - p(x))^2 dx, \ \ \ \ \ \lim_{N \rightarrow \infty} \mbox{ISE}(h) = 0.
\end{equation}
For method analysis purposes, it is useful to understand
what happens when we reconstruct known densities $p(x)$.
Here, another measure of distance between the true
density and its estimator becomes indispensable,
the ``mean integrated squared error'' (MISE):
\begin{equation}
\mbox{MISE}(h) = E(\mbox{ISE}(h)) = E \left( \int_{-\infty}^{\infty} (\hat{p}\sub{KDE}(x|h) - p(x))^2 dx \right),
\label{eq:mise}
\end{equation}
where $E(...)$ stands for the expectation value over samples of $N$ points
drawn from $p(x)$.
While other distance measures can be defined
(and may be more relevant for your problem),
$\mbox{MISE}(h)$ is usually
the easiest to analyze
mathematically\footnote{Note that both ISE and MISE are not dimensionless and,
therefore, not invariant under scaling transformations.
It is OK to compare different $\hat{p}\sub{KDE}(x|h)$ with each other if
the underlying $p(x)$ is the same, but ISE or MISE comparison for different $p(x)$
requires certain care if meaningful results are to be obtained.}.
A typical goal of such an analysis consists in
finding the value of $h$ which minimizes $\mbox{MISE}(h)$ for
a~sample of given size, and a~significant amount of effort
has been devoted to such bandwidth optimization studies
(see, {\it e.g.,} Refs.~\cite{ref:bwopt, ref:loaderbw} for a review).
A large fraction of these studies employs a~simple MISE approximation, valid for large
values of $N$, known as ``AMISE'' (asymptotic MISE). This approximation
retains just the two leading terms: the ``bias'', which increases with
increasing bandwidth, and the ``variance'', which decreases with increasing bandwidth,
so that the bandwidth optimization procedure reduces to finding
the best bias-variance trade-off.
For subsequent discussion, it will be useful to introduce the concept
of kernel order. Let's define the functional
\begin{equation}
\mu_{j}(f) = \int_{-\infty}^{\infty} x^j f(x) dx
\label{eq:mufunct}
\end{equation}
which is the $j$-th moment of $f(x)$ about 0.
Then it is said that the kernel $K(x)$ is of order $m$ if
\begin{equation}
\mu_0(K) = 1, \ \mu_j(K) = 0 \ \mbox{for} \ j = 1, ..., m-1, \ \mbox{and} \ \mu_m(K) \ne 0.
\end{equation}
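The kernel order is easy to verify numerically. For instance, the Epanechnikov kernel has $\mu_0 = 1$, $\mu_1 = 0$, and $\mu_2 = 1/5$, so it is a kernel of order 2 (illustrative code, with $\mu_j$ as in Eq.~\ref{eq:mufunct}):

```python
import numpy as np

def kernel_moment(kernel, j, lo=-1.0, hi=1.0, n=200000):
    """mu_j(K): integral of x^j K(x) dx, computed by the midpoint rule."""
    dx = (hi - lo) / n
    x = lo + (np.arange(n) + 0.5) * dx
    return float(np.sum(x ** j * kernel(x)) * dx)

epan = lambda x: 0.75 * (1.0 - x * x)    # Epanechnikov kernel on [-1, 1]
moments = [kernel_moment(epan, j) for j in range(3)]   # about [1, 0, 0.2]
```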
It can be shown that, for one-dimensional
KDE, the rate of AMISE convergence to 0 is proportional
to $N^{-\frac{2 m}{2 m + 1}}$ (see, for example, section~2.8
of~\cite{ref:kernsmooth}). Therefore, kernels with high values
of $m$ should be preferred for large samples. At the same time,
only in case $m = 2$ the kernels can be everywhere non-negative
({\it i.e.,} bona fide densities).
When $m > 2$ (so called ``high-order'' kernels), in order to have
$\mu_2(K) = 0$, $K(x)$ must become negative somewhere.
Therefore, negative values of $\hat{p}\sub{KDE}(x|h)$ also
become possible, and a mechanism for dealing with this problem
must be specified. In NPStat, this problem is normally taken care
of by setting negative values of $\hat{p}\sub{KDE}(x|h)$ to 0
with subsequent renormalization of the density estimate so that its
integral is 1.
It is instructive to consider $p(x)$, $\mbox{EPDF}(x)$,
and $\hat{p}\sub{KDE}(x|h)$ in the frequency
domain\footnote{The Fourier transform of a
probability density is called the ``characteristic function'' of the distribution.}. There, the Fourier
transform of $K(x)$ acts as a low-pass filter which suppresses the sampling
noise present in $\mbox{EPDF}(x)$, so that the result, $\hat{p}\sub{KDE}(x|h)$,
matches the spectrum of $p(x)$ more closely. By Parseval's identity, this
leads to a reduction of the ISE. High-order kernels allow
for a sharper frequency cutoff in the filter. Fortunately, the precise shape
of the $p(x)$ frequency spectrum
is relatively
unimportant; it is only important that its high-frequency components
decay ``fast enough'', so that their suppression together with the
noise does not cause a significant distortion of $\hat{p}\sub{KDE}(x|h)$
in comparison with $p(x)$. Powerful automatic bandwidth selection rules
can be derived from this type of analysis if certain
assumptions about the $p(x)$ spectrum decay rate
at high frequencies are satisfied~\cite{ref:chiu2, ref:chiumultivar}.
With a few modifications, the ideas described above can be translated
to multivariate settings. The kernel function becomes a multivariate
density, and the bandwidth parameter becomes, in general, the bandwidth
matrix (for example, the covariance matrix in the case of multivariate
Gaussian kernel).
The NPStat package calculates $\hat{p}\sub{KDE}(x|h)$ on an equidistant
grid in one or more dimensions.
Initially, the data sample is accumulated into a finely
binned histogram which is then convolved with a kernel using the
Discrete Fast Fourier Transform (DFFT).
The number of histogram bins, $N_b$, should be selected
taking into account the following considerations:
\begin{thinlist}
\item The bins should be sufficiently small so that no ``interesting''
detail will be missed due to discretization of the density.
\item The bins should be sufficiently small so that the expected optimal
MISE is significantly larger than the ISE due to
density discretization. Detailed exposition of this
requirement can be found in Ref.~\cite{ref:discretizedkde}.
\item DFFT should be efficient for this number of bins. It is best to use
$N_b = 2^k$ bins in each dimension, where $k$ is a positive integer.
\end{thinlist}
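The discretize-then-convolve strategy can be sketched with NumPy's FFT (a toy illustration of the approach, not the NPStat engine; the convolution is circular, so the density must be negligible near the grid edges):

```python
import numpy as np

def binned_kde_fft(sample, h, xmin, xmax, nbins=1024):
    """Histogram the sample on an equidistant grid, then convolve the
    bin counts with a Gaussian kernel via FFT."""
    hist, edges = np.histogram(sample, bins=nbins, range=(xmin, xmax))
    dx = (xmax - xmin) / nbins
    centers = edges[:-1] + 0.5 * dx
    k = np.arange(nbins)
    dist = np.minimum(k, nbins - k) * dx    # circular distance to bin 0
    kern = np.exp(-0.5 * (dist / h) ** 2) / (h * np.sqrt(2.0 * np.pi))
    dens = np.fft.irfft(np.fft.rfft(hist) * np.fft.rfft(kern), nbins)
    return centers, dens / len(sample)

centers, dens = binned_kde_fft([0.0], 0.1, -1.0, 1.0, nbins=256)
mass = dens.sum() * (2.0 / 256)     # about 1
peak = dens.max()                   # about 1/(0.1*sqrt(2*pi))
```

A single observation simply reproduces the kernel centered on its bin, and the estimate integrates to 1 on the grid.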
The computational complexity of this method is
${\cal O}(N) + {\cal O}(N_b \ln N_b)$ which
is usually much better than the ${\cal O}(N \times N_b)$ complexity of
a ``naive'' KDE implementation. However, for large sample
dimensionalities $N_b$ can become very large, which limits the
applicability of this technique. There are other computational
methods (not yet in NPStat) which can work efficiently for
high-dimensional samples~\cite{ref:fastkde1, ref:fastkde2}.
The following NPStat functions and classes can be used to perform
KDE and to assist in bandwidth selection:
\cname{amiseOptimalBwGauss} (header file ``npstat/stat/amiseOptimalBandwidth.hh'') --- calculates AMISE-optimal bandwidth for
fixed-bandwidth KDE with Gaussian kernel as well as high-order kernels
derived from Gaussian. The following formula is implemented:
\begin{equation}
h\sub{AMISE} = \left( \frac{R(K) (m!)^2}{2 m \mu_{m}^2(K) R(p^{(m)}) N} \right)^{\frac{1}{2 m + 1}}.
\label{eq:amisebw}
\end{equation}
In this formula, $R(f)$ denotes the functional
$R(f) = \int_{-\infty}^{\infty} f^2(x) dx$ defined
for any square-integrable function $f$, $\mu_{m}(f)$
is the functional defined in Eq.~\ref{eq:mufunct},
$K$ is the kernel,
$m$ is the kernel order, $p^{(m)}$ is the $m$-th derivative of the
reconstructed density\footnote{Of course, in
practice the reconstructed density is usually unknown.
It is expected that a~look-alike density will be substituted.
}, and $N$ is the number of points in the sample.
The expected AMISE corresponding to this bandwidth is calculated from
\begin{equation}
\mbox{AMISE}(h\sub{AMISE}) = \frac{(2 m + 1) R(K)}{2 m \, h\sub{AMISE} \,N}.
\label{eq:bestamise}
\end{equation}
For the origin and further discussion
of these formulae consult, for example, section~2.8 in
Ref.~\cite{ref:kernsmooth}. It is assumed that high-order kernels are
generated according to Eq.~\ref{eq:effk}.
\cname{amiseOptimalBwSymbeta} (header file ``npstat/stat/amiseOptimalBandwidth.hh'')
--- calculates AMISE-optimal bandwidth
and corresponding expected AMISE according to
Eqs.~\ref{eq:amisebw} and~\ref{eq:bestamise}
for fixed-bandwidth KDE with kernels from symmetric beta family as well
as with high-order kernels derived from symmetric beta distributions.
\cname{amisePluginBwGauss} (header file ``npstat/stat/amiseOptimalBandwidth.hh'')
--- Gaussian $p(x)$ is substituted
in Eqs.~\ref{eq:amisebw} and~\ref{eq:bestamise} and the corresponding
quantities are found for the Gaussian kernel as well as high-order kernels
derived from Gaussian.
\cname{amisePluginBwSymbeta} (header file ``npstat/stat/amiseOptimalBandwidth.hh'')
--- Gaussian $p(x)$ is substituted
in Eqs.~\ref{eq:amisebw} and~\ref{eq:bestamise} and the corresponding
quantities are found for the kernels from the symmetric beta family as well
as high-order kernels derived from symmetric beta distributions.
\cname{miseOptimalBw} and \cname{gaussianMISE} methods of the
\cname{GaussianMixture1D} class --- these methods calculate
MISE-optimal bandwidth and the corresponding exact MISE for
Gaussian mixture densities estimated with the Gaussian kernel
(or high-order kernels derived from the Gaussian) according
to the formulae from Ref.~\cite{ref:bwgaussmix}.
\cname{ConstantBandwidthSmoother1D} --- This class implements fixed-bandwidth
KDE for one-dimensional samples of points using Gaussian kernels
or kernels from the symmetric beta family (including high-order
kernels which are generated internally). Boundary effects can be
alleviated by data mirroring. Kernel convolutions are performed
by DFFT after sample discretization.
\cname{ConstantBandwidthSmootherND} --- This class implements fixed-bandwidth
KDE for multivariate histograms. Arbitrary density implementations
which inherit from \cname{AbsDistributionND} can be used as weight
functions for generating high-order kernels. Boundary effects can be
alleviated by data mirroring.
\cname{JohnsonKDESmoother} --- This class constructs adaptive bandwidth KDE estimates for
one-dimensional samples. It operates in several steps:
\begin{thinlist}
\item The sample is discretized using centroid-preserving binning.
\item Population mean, $\mu$, standard deviation, $\sigma$,
skewness, $s$, and kurtosis, $k$, are
estimated by the \cname{arrayShape1D} function.
\item A distribution from the Johnson system with these
values of $\mu$, $\sigma$, $s$, and $k$ is used as a template
for the \cname{amiseOptimalBwSymbeta} (or \cname{amiseOptimalBwGauss})
function which calculates optimal constant bandwidth
according to Eq.~\ref{eq:amisebw}.
\item A pilot density estimate is built by KDE
(using \cname{LocalPolyFilter1D} class) with the bandwidth determined
in the previous step.
\item The sample is smoothed by the \cname{variableBandwidthSmooth1D}
function (described below) using this pilot estimate.
\end{thinlist}
This method achieves a better MISE
convergence rate without using high-order kernels (so that density
truncation below zero is unnecessary). Give this method serious
consideration if you are working with one-dimensional samples,
expect to reconstruct a unimodal distribution, and do not have to worry
about boundary effects.
\cname{KDECopulaSmoother} --- multivariate KDE in which extra care
is applied in order to ensure that the estimation result
is a bona fide copula ({\it i.e.,} that all of its marginals
are uniform).
This class should normally be used to smooth empirical copula densities
constructed by \cname{empiricalCopulaHisto} or
\cname{empiricalCopulaDensity}.
The bandwidth can be supplied by the user or
it can be chosen by cross-validation. In general,
\cname{LOrPECopulaSmoother} will produce better results
in this context, so \cname{KDECopulaSmoother} should
be used only in case the slower speed of \cname{LOrPECopulaSmoother}
is deemed unacceptable.
\cname{KDEFilterND} --- A collection of \cname{KDEFilterND} objects which utilize
a common workspace and DFFT engine can be employed to perform cross-validation
calculations and bandwidth scans. Such a collection is used, for example,
by the \cname{KDECopulaSmoother} class described above.
If you already know the bandwidth,
the \cname{ConstantBandwidthSmootherND} class
will likely be more convenient to use than this one.
\cname{variableBandwidthSmooth1D} --- implements one-dimensional
variable kernel adaptive KDE (which should not be confused
with {\it local kernel}, or {\it balloon}, method --- see section~2.10 in
Ref.~\cite{ref:kernsmooth} for further discussion). In general, this
approach consists of assigning a different bandwidth value to each sample
point. In the \cname{variableBandwidthSmooth1D} function, this assignment
is performed according to the formula
\begin{equation}
h_i = \frac{c}{{\hat \rho}^{\alpha}(x_i)},
\end{equation}
where ${\hat \rho(x)}$ is a~pilot density estimate constructed, for example,
by fixed-bandwidth KDE.
Boundary kernel adjustments are performed automatically.
The normalization constant $c$ is
determined so that the geometric mean
of $h_i$ equals a~user-provided bandwidth. There are
certain reasons to believe that the choice of power
parameter $\alpha = 1/2$ is a~particularly good one~\cite{ref:abramson}.
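The bandwidth assignment and its geometric-mean normalization can be sketched in a few lines. This is a minimal NumPy illustration with Gaussian kernels and a user-supplied pilot density callable, not the NPStat implementation (which operates on discretized samples):

```python
import numpy as np

def variable_bandwidth_kde(sample, pilot, h0, alpha=0.5, npoints=512):
    """Variable-kernel KDE: h_i = c / pilot(x_i)^alpha, with c chosen so
    that the geometric mean of the h_i equals the user bandwidth h0."""
    x = np.asarray(sample, dtype=float)
    raw = pilot(x) ** (-alpha)                 # h_i up to the constant c
    c = h0 / np.exp(np.mean(np.log(raw)))      # geometric-mean normalization
    h = c * raw
    grid = np.linspace(x.min() - 4 * h.max(), x.max() + 4 * h.max(), npoints)
    # one Gaussian kernel per sample point, each with its own bandwidth
    z = (grid[None, :] - x[:, None]) / h[:, None]
    dens = np.mean(np.exp(-0.5 * z**2) / (np.sqrt(2 * np.pi) * h[:, None]),
                   axis=0)
    return grid, dens, h
```

By construction, multiplying every $h_i$ by the same factor rescales the geometric mean by that factor, so the normalization reduces to a single logarithmic average.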
\subsection{Local Orthogonal Polynomial Expansion (LOrPE)}
\label{sec:lorpe}
Local Orthogonal Polynomial Expansion (LOrPE) can be viewed as
a convenient method
for creating kernels with desired properties (including high-order
kernels) and for eliminating the KDE boundary bias\footnote{If you
are familiar with orthogonal series density estimators (OSDE), you can
also view LOrPE as a~localized version of OSDE.}.
LOrPE amounts to constructing a truncated expansion of the EPDF
defined by Eq.~\ref{eq:edf}
into orthogonal polynomial series near each point $x\sub{fit}$
where we want to build the initial density estimate:
\begin{equation}
\label{eq:expansion}
\hat{f}\sub{LOrPE}(x|h) = \sum_{k=0}^{M}c_{k}(x\sub{fit}, h) P_{k}\left(\frac{x - \xf}{h}\right).
\end{equation}
The polynomials $P_{k}(x)$ are built to satisfy the normalization condition
\begin{equation}\label{eq:norm0}
\frac{1}{h} \int_{a}^{b} P_{j}\left(\frac{x - \xf}{h}\right)P_{k}\left(\frac{x - \xf}{h}\right) K\left(\frac{x - \xf}{h}\right)dx = \delta_{jk},
\end{equation}
which is equivalent to
\begin{equation}
\label{eq:norm}
\int_{(a - x\sub{fit})/h}^{(b - x\sub{fit})/h} P_{j}(y)P_{k}(y)K(y)dy = \delta_{jk},
\end{equation}
where $\delta_{jk}$ is the Kronecker delta,
$K(x)$ is a suitably chosen kernel function,
and $[a, b]$ is the support interval of the estimated density.
For commonly used kernels from
the beta family (Epanechnikov, biweight, triweight, {\it etc}.), condition (\ref{eq:norm})
generates the normalized Gegenbauer polynomials (up to a common multiplicative
constant) at points $\xf$ sufficiently deep inside the support
interval, provided $h$ is small enough to guarantee that
$(a-\xf)/h\leq -1$ and $(b-\xf)/h\geq 1$.
If $\xf$ is
sufficiently close to the boundaries of the density
support $[a,b]$ then the polynomial system will vary
depending on $\xf$, and the notation
$P_{k}(\cdot,\xf)$ would be more appropriate. However, in the subsequent
text this dependence on $\xf$ will be suppressed in order to simplify
the notation.
The expansion coefficients $c_{k}(x\sub{fit}, h)$
are determined by the usual scalar product of the expanded function with
$P_{k}$:
\begin{equation}\label{eq:convol}
c_{k}(x\sub{fit}, h) = \frac{1}{h} \int \mbox{EPDF}(x) P_{k}\left( \frac{x - x\sub{fit}}{h} \right)K\left( \frac{x - x\sub{fit}}{h} \right)dx,
\end{equation}
which, after substituting $\mbox{EPDF}(x)$ from Eq.~\ref{eq:edf}, leads to
\begin{equation}\label{eq:emp-ck}
c_{k}(x\sub{fit}, h) = \frac{1}{Nh}\sum_{i=0}^{N-1}P_{k}\left( \frac{x_i - x\sub{fit}}{h} \right) K\left( \frac{x_i - x\sub{fit}}{h} \right).
\end{equation}
Note that the coefficients $c_{k}(x\sub{fit}, h)$ calculated according to
Eq.~\ref{eq:convol} can be naturally
interpreted as localized expectation values of orthogonal polynomials
$P_{k}\left( \frac{x - x\sub{fit}}{h} \right)$ w.r.t. probability
density $\mbox{EPDF}(x)$ in which localization weights are given by
$\frac{1}{h} K\left( \frac{x - x\sub{fit}}{h} \right)$.
The density estimate at $x\sub{fit}$ is defined by
\begin{equation}
\hat{p}\sub{LOrPE}(x\sub{fit}|h) = \max\{0, \hat{f}\sub{LOrPE}(x\sub{fit}|h)\}.
\label{eq:trunc}
\end{equation}
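For a point well inside the support, Eqs.~\ref{eq:emp-ck} and~\ref{eq:expansion} can be evaluated directly. The sketch below uses the uniform kernel $K(y) = 1/2$ on $[-1, 1]$, for which the orthonormal polynomials are simply rescaled Legendre polynomials, $P_k(y) = \sqrt{2k+1}\,\mbox{Leg}_k(y)$; it handles interior points only and is not the grid-based NPStat implementation:

```python
import numpy as np
from numpy.polynomial import legendre

def lorpe_at_point(sample, x_fit, h, M):
    """LOrPE estimate, Eqs. (emp-ck) and (expansion) evaluated at x = x_fit,
    for the uniform kernel K(y) = 1/2 on [-1, 1] whose orthonormal
    polynomials are P_k(y) = sqrt(2k+1) Leg_k(y). Interior points only."""
    y = (np.asarray(sample, dtype=float) - x_fit) / h
    y = y[np.abs(y) <= 1.0]              # points inside the kernel support
    N = len(sample)
    estimate = 0.0
    for k in range(M + 1):
        coef = np.zeros(k + 1)
        coef[k] = 1.0                    # selects the degree-k Legendre poly
        norm = np.sqrt(2 * k + 1)
        # Eq. (emp-ck): c_k = (1/(N h)) sum_i P_k(y_i) K(y_i)
        c_k = np.sum(norm * legendre.legval(y, coef) * 0.5) / (N * h)
        estimate += c_k * norm * legendre.legval(0.0, coef)   # times P_k(0)
    return estimate
```

With $M = 0$ this reduces to a simple boxcar (histogram-like) estimate; higher $M$ adds the polynomial correction terms.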
In general, LOrPE does not produce a bona fide density (in this respect it is similar to the
orthogonal series estimator), and after calculating
the density for all $x\sub{fit}$ values one has to perform the overall renormalization.
Equation~\ref{eq:expansion} can be usefully generalized as follows:
\begin{equation}\label{eq:lorpe}
\hat{f}\sub{LOrPE}(x|h) = \sum_{k=0}^{\infty} g(k) c_{k}(x\sub{fit}, h) P_{k}\left( \frac{x - x\sub{fit}}{h} \right).
\end{equation}
Here, $g(k)$ is a ``taper function''. Normally, $g(0) = 1$ and
there is an integer $M$ such that $g(k) = 0$ for any $k > M$. The
taper function suppresses
high order terms gradually instead of using a
sharp cutoff at $M$.
When evaluated at points $x\sub{fit}$
which are sufficiently far away from the density support
boundaries and if $K(x)$ is an even function, Eq.~\ref{eq:lorpe}
is equivalent to a~kernel density estimator with the effective
kernel
\begin{equation}\label{eq:effk}
K\sub{eff}(x) = K(x) \sum_{j=0}^{\infty} g(2 j) P_{2 j}(0) P_{2 j}(x).
\end{equation}
Moreover, if $g(k)$ is a step function,
{\it i.e.,}~$g(k) = 1$ for all $k \le M$ and $g(k) = 0$ for all $k > M$,
it can be shown that the effective kernel is of order $M+1$ if $M$
is odd and $M+2$ if $M$ is
even~\cite{ref:placida}\footnote{For such kernels the sum in Eq.~\ref{eq:effk}
can be
reduced to a simple algebraic form via the Christoffel-Darboux identity.
However, the resulting formula is not easy to evaluate in a numerically
stable manner near $x = 0$.}.
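The high-order property of the effective kernel is easy to verify numerically: build the polynomials of Eq.~\ref{eq:norm} by a discretized Gram-Schmidt procedure and check that $K\sub{eff}$ has unit mass and a vanishing second moment. This is a self-contained sketch under the stated assumptions (even kernel, point far from the boundaries, step-function taper), not NPStat code:

```python
import numpy as np

def orthonormal_polys(kernel, grid, degree):
    """Discretized Gram-Schmidt construction of polynomials satisfying
    Eq. (norm): sum_i P_j(x_i) P_k(x_i) K(x_i) dx = delta_jk."""
    dx = grid[1] - grid[0]
    w = kernel(grid) * dx
    polys = []
    for k in range(degree + 1):
        v = grid**k
        for p in polys:
            v = v - np.sum(v * p * w) * p       # remove lower-degree parts
        polys.append(v / np.sqrt(np.sum(v * v * w)))
    return np.array(polys)

def effective_kernel(kernel, grid, M):
    """K_eff(x) = K(x) sum_j P_j(0) P_j(x), Eq. (effk) with a step taper.
    Odd-order terms vanish automatically for an even kernel."""
    P = orthonormal_polys(kernel, grid, M)
    i0 = np.argmin(np.abs(grid))        # grid point at (or nearest to) x = 0
    return kernel(grid) * (P[:, i0] @ P)
```

For the Epanechnikov kernel with $M = 3$ the resulting effective kernel is of order 4, so its second moment vanishes while its mass stays equal to 1.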
A slightly different modification of LOrPE is based on the following
identity:
\begin{equation}\label{eq:deltasum}
\frac{1}{h} \sum_{j=0}^{\infty} P_{j}\left(\frac{x - x_i}{h}\right)P_{j}\left(0\right) K\left(\frac{x - x_i}{h}\right) = \delta(x - x_i).
\end{equation}
Substituting this into Eq.~\ref{eq:edf}, we obtain
\begin{equation}
\mbox{EPDF}(x) = \frac{1}{N h} \sum_{i=0}^{N-1} \sum_{j=0}^{\infty} P_{j}\left(\frac{x - x_i}{h}\right)P_{j}\left(0\right) K\left(\frac{x - x_i}{h}\right).
\end{equation}
The modified density estimate is obtained from this expansion by introducing a taper:
\begin{equation}
\hat{\hat{f}}\sub{LOrPE}(x\sub{fit}|h) = \frac{1}{N h} \sum_{i=0}^{N-1} \sum_{j=0}^{\infty} g(j) P_{j}\left(\frac{x\sub{fit} - x_i}{h}\right)P_{j}\left(0\right) K\left(\frac{x\sub{fit} - x_i}{h}\right)
\label{eq:lorpe2}
\end{equation}
and then truncation is handled just like in Eq.~\ref{eq:trunc}.
For even kernels and points $x\sub{fit}$ sufficiently far away from the density support
boundaries, Eq.~\ref{eq:lorpe2} is equivalent to Eq.~\ref{eq:lorpe}
evaluated at $x = x\sub{fit}$. However, this is no longer true near the boundaries.
Perhaps the easiest way to think about this is that, in Eq.~\ref{eq:lorpe2},
an effective kernel is placed at the location of each data point
(and, in general, shapes of these effective kernels
depend on the data point location $x_i$).
All these kernels are then summed to obtain the density estimates
at all $x\sub{fit}$.
On the other hand, in Eq.~\ref{eq:lorpe} the effective kernel is placed
at the location of each point at which the density estimate is made (so
that the effective kernel shape depends on $x\sub{fit}$).
This kernel generates the weights for each data point which are summed to
obtain the estimate. The difference between these two approaches
is usually ignored for KDE
(simple KDE is unable to deal with boundary bias anyway),
but for LOrPE substantially different results can be obtained near
the boundaries.
It is not obvious {\it a priori} which density estimate is better:
$\hat{\hat{f}}\sub{LOrPE}$ from Eq.~\ref{eq:lorpe2}
or $\hat{f}\sub{LOrPE}$ from Eq.~\ref{eq:lorpe},
although some preliminary experimentation with simple distributions
does indicate that
$\hat{f}\sub{LOrPE}$ typically results in smaller MISE.
The integral of the $\hat{\hat{f}}\sub{LOrPE}$ estimate
on the complete density support interval is automatically 1
which is not true for $\hat{f}\sub{LOrPE}$. On the other
hand, $\hat{f}\sub{LOrPE}$ admits an appealing interpretation
in terms of the local density expansion~\ref{eq:lorpe} in which
the localized expectation values of the orthogonal polynomials
$P_k$ are matched to their observed values
in the data sample (this also leads to automatic matching
of localized distribution moments about $x\sub{fit}$).
One-dimensional KDE with fixed kernel $K(x)$ has only one important parameter
which regulates the amount of smoothing:
bandwidth $h$. LOrPE has two such parameters: bandwidth $h$ and the highest
polynomial order $M$ (or, in general, the shape of the taper function).
It is intuitively obvious that polynomial modeling
should result in a smaller bias than KDE for densities
with several (at least $M$) continuous
derivatives, and that a~proper balance of $h$ and $M$ should
result in a better estimator overall.
LOrPE calculations remain essentially unchanged
in multivariate settings: the only difference is switching to multivariate
orthogonal polynomial systems.
Even though LOrPE is equivalent to KDE far away from the density support
boundaries, LOrPE does not suffer from the boundary bias because
Eq.~\ref{eq:norm} automatically adjusts the shape of
orthogonal polynomials near the boundary. This makes LOrPE
applicable in a~wider set of problems than KDE.
In addition to just making better estimates of densities with
a~sharp cutoff at the boundary, LOrPE fixes the main
problem with some existing KDE-based methods which are rarely
used in practice due to their severe degradation from
the boundary bias. Examples of such methods include transformation
kernel density estimation
(see, for example, section 2.10.3 of~\cite{ref:kernsmooth})
in which the transformation target is the uniform distribution,
as well as separate estimation
of the marginals and the copula for multivariate densities.
Unfortunately, LOrPE improvements over KDE do not come without
a price in terms of the algorithmic complexity of the method.
The density estimate can no longer be represented as a simple
convolution of the sample EPDF and a kernel. Because of this,
DFFT-based calculations are no longer sufficient.
In the LOrPE implementation within NPStat, simple algorithms
are used instead which perform pointwise convolutions. Their
computational complexity scales as ${\cal O}(N) + {\cal O}(N_b N_s)$, where
$N_b$ is the number of bins in the sample discretization
histogram and $N_s$ is the number of bins of the same length
(area, volume, {\it etc}) inside the kernel support. For
large bandwidth values (or for kernels with infinite support)
this essentially becomes ${\cal O}(N) + {\cal O}(N_b^2)$ which
can be significantly slower than the KDE implementation based on DFFT.
The following NPStat classes can be used to perform LOrPE:
\cname{LocalPolyFilter1D} --- LOrPE for one-dimensional samples.
A numerical Gram-Schmidt procedure is used
to build polynomials defined by Eq.~\ref{eq:norm0}
on an equidistant grid. A linear filter which combines
formulae~\ref{eq:convol} and~\ref{eq:lorpe} is then
constructed for each grid point $\xf$ from the density
support region (the same ``central'' filter is used for
all $\xf$ points far away from the support boundaries).
The \cname{filter} method of the class can then be used to
build density estimates defined by Eq.~\ref{eq:lorpe} with $x = \xf$
from sample histograms. Alternatively, the \cname{convolve} method
can be used to make estimates according to Eq.~\ref{eq:lorpe2}.
If necessary, subsequent truncation of the reconstructed
densities below 0 together with renormalization should be performed
by the user.
\cname{WeightTableFilter1DBuilder}
 (header file ``npstat/stat/Filter1DBuilders.hh'') ---
 Helper class designed to work with \cname{LocalPolyFilter1D}.
 This helper constructs linear filters out of arbitrary scanned weights
 utilizing orthogonal polynomials.

\cname{StretchingFilter1DBuilder}
 (header file ``npstat/stat/Filter1DBuilders.hh'') ---
 Another helper class for \cname{LocalPolyFilter1D}. Can be used to
 build filters with increased bandwidth (and thus decreased variance)
 at the density support boundaries.

\cname{PolyFilterCollection1D} --- A collection of \cname{LocalPolyFilter1D}
objects which can be used, for example, in bandwidth scans or in
cross-validation calculations.
\cname{LocalPolyFilterND} --- similar to \cname{LocalPolyFilter1D} but
intended for estimating multivariate densities.
\cname{SequentialPolyFilterND} --- similar to \cname{LocalPolyFilterND}
but each dimension is processed sequentially using 1-d filtering.
Employs a collection of \cname{LocalPolyFilter1D} objects, one for
each dataset dimension.
\cname{LOrPECopulaSmoother} --- multivariate LOrPE in which extra care
is applied in order to ensure that the estimation result
is a bona fide copula ({\it i.e.,} that all of its marginals
are uniform).
This class should normally be used to smooth empirical copula densities
constructed by \cname{empiricalCopulaHisto} or
\cname{empiricalCopulaDensity}.
The bandwidth can be supplied by the user or
it can be chosen by cross-validation. Less reliable but faster
calculations of this type can be performed with the
\cname{KDECopulaSmoother} class described in Section~\ref{sec:kde}.
\cname{SequentialCopulaSmoother} --- similar to \cname{LOrPECopulaSmoother}
but each dimension is processed sequentially using 1-d filtering.
\cname{NonparametricCompositeBuilder} --- a high-level API for estimating
multivariate densities by applying KDE or LOrPE separately to
each marginal and to the copula. This class builds
\cname{CompositeDistributionND} objects from collections of sample points.
\subsection {Density Estimation with Bernstein Polynomials}
\label{sec:bernstein}
Density representation by Bernstein polynomial series
is an alternative
approach which can be used to alleviate the boundary bias problem of KDE.
Bernstein polynomials are defined as follows:
\begin{equation}
b_{m,n}(x) = C_n^m x^m (1 - x)^{n - m},
\label{eq:bpoly}
\end{equation}
where $m = 0, 1, ..., n$ and $C_n^m$ are the binomial coefficients:
\begin{equation}
C_n^m = \frac{n!}{m! (n-m)!}.
\end{equation}
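Two elementary properties of these polynomials are used later in this subsection: each $b_{m,n}$ integrates to $1/(n+1)$ on $[0, 1]$, and together they form a partition of unity. Both can be checked directly with a minimal sketch of Eq.~\ref{eq:bpoly} for integer $m$ and $n$:

```python
from math import comb

def bernstein(m, n, x):
    """b_{m,n}(x) = C_n^m x^m (1 - x)^(n - m), Eq. (bpoly), integer m, n."""
    return comb(n, m) * x**m * (1.0 - x)**(n - m)
```

For non-integer degrees one would replace `comb` with the corresponding ratio of gamma functions, as described in the text.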
In the density estimation context, Bernstein polynomials are often
generalized to non-integer values of $n$ and $m$ in which case
$C_n^m$ is replaced by
$\frac{\Gamma(n + 1)}{\Gamma(m + 1) \Gamma(n - m +1)}$. Up to
normalization, this representation is equivalent to the
beta distribution with parameters $\alpha = m + 1$ and $\beta = n - m + 1$.
For notational simplicity, I will use the term ``Bernstein polynomials''
even if $n$ and $m$ are not integer.
There are two substantially different variations of this density estimation
method. In the first scheme~\cite{ref:betakern},
Bernstein polynomials are used
as variable-shape kernels in a KDE-like formula:
\begin{equation}
\hat{f}\sub{B}(x|n) = \frac{n + 1}{N} \sum_{i=0}^{N-1} b_{m(x), n}(x_i),
\label{eq:betakernels}
\end{equation}
where it is assumed
that the reconstructed density is supported on the [0, 1] interval
(naturally, this interval can be shifted and scaled as needed).
The requirement of asymptotic estimator consistency does not
fix $m(x)$ uniquely, and this mapping can be chosen in a~variety
of ways. In NPStat, the following relationship is implemented:
\begin{equation}
m(x) = \left\{ \begin{array}{ll}
   c & \mbox{ if } x (n - 2 s) + s \leq c \\
   x (n - 2 s) + s & \mbox{ if } c < x (n - 2 s) + s < n - c \\
   n - c & \mbox{ if } x (n - 2 s) + s \geq n - c
  \end{array} \right.
\end{equation}
This formula reproduces the simple mapping $m(x) = x n$ considered in
Ref.~\cite{ref:betakern} in case $s = 0$ and $c = 0$. The offset parameter
$s$ plays the role of the effective Bernstein polynomial degree used
at $x = 0$ and regulates the amount of boundary bias. Meaningful
values of $s$ lie between $-1$ and 0. As $s \rightarrow -1$, the mean
of the generalized Bernstein polynomial kernels tends to $x$, so that
the estimator
becomes uniformly unbiased for linear density functions.
At the same time, the width of
the kernel tends to 0 at the boundary, which leads to an increase in the estimator
variance. As it makes little sense to use kernels whose width
is smaller than the discretization bin size, the cutoff parameter $c$
was introduced. This cutoff effectively limits the kernel width from below
in a manner which preserves asymptotic consistency of the estimator
(the useful range of $c$ values is also $[-1, 0]$).
In addition to appropriate selection of the main
bandwidth parameter $n$, proper choice of the parameters $s$ and $c$
can significantly improve estimator convergence at the boundary.
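The clamped mapping above is straightforward to transcribe. Below is a plain-Python sketch of the formula (the function name `m_of_x` is hypothetical); setting $s = c = 0$ recovers the simple mapping $m(x) = x n$:

```python
def m_of_x(x, n, s=0.0, c=0.0):
    """Clamped degree mapping: m = x (n - 2 s) + s, limited to [c, n - c].
    Useful ranges of both s and c are [-1, 0]."""
    t = x * (n - 2.0 * s) + s
    return min(max(t, c), n - c)
```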
In the second variation of this density estimation technique~\cite{ref:babu},
the polynomials are chosen
based on the location of the observed points:
\begin{equation}
\hat{\hat{f}}\sub{B}(x|n) = \frac{n + 1}{N} \sum_{i=0}^{N-1} b_{m(x_i), n}(x).
\label{eq:bern2}
\end{equation}
For any $m$ and $n$,
$\int_0^1 b_{m,n}(x) dx = \frac{1}{n + 1}$, so this particular estimate
is a bona fide density. For reasons discussed later
in this subsection,
it can be advantageous to keep $m$ and $n$ integer in this approach.
As in the case of the $x$-dependent kernel shape, there is
some amount of freedom in the $m(x_i)$ assignment. For asymptotic
consistency we must require that $m(x_i)/n \rightarrow x_i$ as
$n \rightarrow \infty$. However, such
assignments are not unique. One can choose, for example,
$m(x_i) = \lfloor x_i (n + 1) \rfloor$ as in Ref.~\cite{ref:babu}
(the symbol $\lfloor \cdot \rfloor$ stands for the ``floor''
function), but it can also be useful to assign more than one polynomial
to $x_i$. The following scheme is implemented in NPStat in addition to
the $m(x_i)$ assignment just mentioned. First, an
integer $k$ is found such that $k + 0.5 \leq x_i (n + 1) < k + 1.5$.
Then, if $0 \leq k < n$,
\begin{equation}
m(x_i) = \left\{ \begin{array}{ll}
   k & \mbox{ with weight } k + 1.5 - x_i (n + 1) \\
   k + 1 & \mbox{ with weight } x_i (n + 1) - k - 0.5
  \end{array} \right.
\end{equation}
If $k < 0$ then $m(x_i) = 0$ with weight 1, and if $k \geq n$ then $m(x_i) = n$
with weight~1.
The polynomial weighting schemes actually implemented in the code are
slightly more complicated, as they take into account data binning.
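The two-polynomial assignment just described can be sketched as follows (without the binning refinements; the function name `m_weights` is hypothetical):

```python
import math

def m_weights(x, n):
    """Weighted degree assignment: find the integer k with
    k + 0.5 <= x (n + 1) < k + 1.5, then split unit weight between
    degrees k and k + 1, clamping at the ends of [0, n]."""
    t = x * (n + 1)
    k = math.floor(t - 0.5)
    if k < 0:
        return {0: 1.0}
    if k >= n:
        return {n: 1.0}
    return {k: k + 1.5 - t, k + 1: t - k - 0.5}
```

The weights are non-negative by construction and always sum to 1, which is what preserves the bona fide density property of Eq.~\ref{eq:bern2}.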

With integer values of $m$ and $n$, density estimates constructed according
to Eq.~\ref{eq:bern2} (or its weighted version just described) have the important
property of being positive doubly stochastic. This means that not only
$\int_0^1 \hat{\hat{f}}\sub{B}(x|n) dx = 1$,
but also that a sum of an arbitrary number of separate $\hat{\hat{f}}\sub{B}(x|n)$
estimators will be flat in $x$
as long as the set of $x_i$ values used to build all these estimators
is itself flat. If the $x_i$ values are flat
between 0 and 1, then the assigned $m$ values will be flat
between 0 and $n$ (inclusive). Double stochasticity then
follows directly from the partition
of unity property of Bernstein polynomials: $\sum_{m=0}^n b_{m,n}(x) = \sum_{m=0}^n C_n^m x^m (1 - x)^{n - m} = [x + (1 - x)]^n = 1$.

A collection of positive doubly stochastic estimators can be used for copula
filtering by sequentially applying these estimators in each dimension.
If the initial data set processed by this sequence represents a copula density
(for example, in case it is an empirical copula density histogram), the result
is also guaranteed to be a copula density.

In NPStat, any density estimator
implemented via the \cname{LocalPolyFilter1D} class
can be turned into the closest (in some sense) doubly stochastic estimator by
calling the \cname{doublyStochasticFilter} method of that class.
Non-negative estimators will be converted into non-negative
doubly stochastic estimators
using an iterative procedure similar to the one described in~\cite{ref:sinkhorn},
while filters with negative entries will be converted into generalized doubly
stochastic filters according to the method described in~\cite{ref:khoury}.
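For non-negative filters, the flavor of such an iterative procedure can be illustrated with the classic Sinkhorn-Knopp alternating normalization. This is a generic sketch of the idea, not the actual \cname{doublyStochasticFilter} algorithm:

```python
import numpy as np

def sinkhorn_knopp(A, n_iter=500):
    """Alternately normalize the rows and columns of a strictly positive
    matrix; the iteration converges to a doubly stochastic matrix
    (all row sums and all column sums equal to 1)."""
    A = np.array(A, dtype=float)
    for _ in range(n_iter):
        A /= A.sum(axis=1, keepdims=True)    # unit row sums
        A /= A.sum(axis=0, keepdims=True)    # unit column sums
    return A
```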

The following facilities are provided by NPStat for estimating
densities with Bernstein polynomials and beta distribution kernels:

\cname{BetaFilter1DBuilder} --- Constructs linear filters for
the \cname{LocalPolyFilter1D} class according to Eq.~\ref{eq:betakernels}.

\cname{BernsteinFilter1DBuilder} --- Constructs linear filters for
\cname{LocalPolyFilter1D} according to Eq.~\ref{eq:bern2}. These
filters are intended for use with the \cname{convolve} method
of the \cname{LocalPolyFilter1D} class rather than the \cname{filter} method.

\cname{betaKernelsBandwidth} --- This function estimates the optimal
bandwidth (order $n$ of the generalized Bernstein polynomial)
for Eq.~\ref{eq:betakernels}
according to the AMISE calculation
presented in~\cite{ref:betakern}. This bandwidth estimate should not be taken
very seriously for realistic sample sizes, as the finite sample performance
of Bernstein polynomial methods is not well understood.
\subsection {Using Cross-Validation for Choosing the Bandwidth}
\label{sec:crossval}
Cross-validation is a technique for adaptive bandwidth selection
applicable to both KDE and LOrPE. Two types of cross-validation
are supported by NPStat: least squares and pseudo-likelihood.
The least squares cross-validation is based on the following idea.
The MISE from Eq.~\ref{eq:mise} can be written as
\begin{eqnarray*}
\mbox{MISE}(h) & = & E \left( \int_{-\infty}^{\infty} (\hat{p}(x|h) - p(x))^2 dx \right) \\ & = & E \left( \int_{-\infty}^{\infty} \hat{p}^2(x|h) dx \right) - 2 E \left( \int_{-\infty}^{\infty} \hat{p}(x|h) p(x) dx \right) +
\int_{-\infty}^{\infty} p^2(x) dx.
\label{eq:lsqcv}
\end{eqnarray*}
The last term, $\int_{-\infty}^{\infty} p^2(x) dx$, does not depend on $h$,
so minimization of MISE is equivalent to minimization of $B(h) \equiv E \left( \int_{-\infty}^{\infty} \hat{p}^2(x|h) dx - 2 \int_{-\infty}^{\infty} \hat{p}(x|h) p(x) dx \right).$ Of course, $p(x)$ itself is unknown. However, it can be
shown (as in section 3.4.3 of~\cite{ref:silverman})
that an unbiased estimator of $B(h)$ can be constructed as
\begin{equation}
\mbox{LSCV}(h) = \int_{-\infty}^{\infty} \hat{p}^2(x|h) dx - \frac{2}{N} \sum_{i=0}^{N-1} \hat{p}_{-1,i}(x_i, h),
\label{eq:lscv}
\end{equation}
where $\hat{p}_{-1,i}(x, h)$ is a ``leaving-one-out'' density estimator
to which the point at $x_i$ does not contribute. For example, in the
case of KDE this estimator is defined by
\begin{equation}
\hat{p}_{-1,i}(x|h) = \frac{1}{(N-1) \, h} \sum_{\stackrel{j=0}{j \ne i}}^{N-1} K \left(\frac{x - x_j}{h}\right).
\end{equation}
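For the Gaussian kernel, the first term of Eq.~\ref{eq:lscv} has a closed form, because the product of two Gaussian kernels of width $h$ integrates to a Gaussian of width $\sqrt{2}\,h$ evaluated at $x_i - x_j$. A minimal sketch under that assumption:

```python
import numpy as np

def lscv(sample, h):
    """LSCV(h), Eq. (lscv), for fixed-bandwidth Gaussian-kernel KDE."""
    x = np.asarray(sample, dtype=float)
    N = len(x)
    d = x[:, None] - x[None, :]
    # int p_hat^2 dx = (1/N^2) sum_ij Normal(x_i - x_j; 0, 2 h^2)
    term1 = np.sum(np.exp(-d * d / (4 * h * h))) / (2 * h * np.sqrt(np.pi) * N * N)
    # leave-one-out density estimates at the sample points
    K = np.exp(-d * d / (2 * h * h)) / (h * np.sqrt(2 * np.pi))
    loo = (K.sum(axis=1) - np.diag(K)) / (N - 1)
    return term1 - 2.0 * np.mean(loo)
```

In practice one scans $h$ over a range suggested by plug-in methods, as discussed below, rather than trusting a single local minimum.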
Minimization of $\mbox{LSCV}(h)$ can lead to a reasonable bandwidth estimate,
$h\sub{LSCV}$. However, as $h\sub{LSCV}$ is itself a random quantity,
its convergence towards the bandwidth that optimizes MISE, $h\sub{MISE}$,
is known to be rather slow. Moreover, $\mbox{LSCV}(h)$ can have
multiple minima, so its minimization is best carried out by simply
scanning $h$ within a certain range in the proximity of some value
$h^{*}$ suggested by plug-in methods. For more information on the
issues related to the least squares cross-validation see
Refs.~\cite{ref:silverman, ref:kernsmooth}.
The pseudo-likelihood cross-validation (also sometimes called
likelihood cross-validation) is based on maximizing the ``leaving-one-out''
likelihood:
\begin{equation}
\mbox{PLCV}(h) = \prod_{i=0}^{N-1} \hat{p}_{-1,i}(x_i, h)
\end{equation}
The criterion of maximum $\mbox{PLCV}(h)$
can be derived by minimizing an~approximate
Kullback-Leibler distance between the density and its
estimate~\cite{ref:silverman}.
Maximizing $\mbox{PLCV}(h)$ is only appropriate in certain
situations. In particular, whenever $\hat{p}_{-1,i}(x_i, h)$
becomes 0 even for a~single point, this criterion fails
to produce meaningful results. Its use is also problematic
for densities with infinite support due to the strong
influence fluctuations in the distribution tails
exert on $\mbox{PLCV}(h)$. Because of these problems, NPStat
implements a ``regularized'' version of $\mbox{PLCV}(h)$
defined by
\begin{equation}
\mbox{RPLCV}(h) = \prod_{i=0}^{N-1} \max \left(\hat{p}_{-1,i}(x_i, h),\, \frac{\hat{p}\sub{self,$i$}(x_i, h)}{N^{\alpha}}\right),
\label{eq:rplcv}
\end{equation}
where $\alpha$ is the regularization parameter chosen by the user
($\alpha = 1/2$ usually works reasonably well) and
$\hat{p}\sub{self,$i$}(x, h)$ is the contribution of the data point at $x_i$
into the original density estimator. For KDE, this contribution is
\begin{equation}
\hat{p}\sub{self,$i$}(x|h) = \frac{1}{N h} K \left(\frac{x - x_i}{h}\right).
\end{equation}
If the bandwidth is fixed, $\hat{p}\sub{self,$i$}(x_i|h) = K(0)/(Nh)$
for every point $x_i$.
$\hat{p}\sub{self,$i$}(x_i, h)$ changes from one point to another for
LOrPE and for variable-bandwidth KDE.
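For fixed-bandwidth Gaussian KDE, Eq.~\ref{eq:rplcv} can be sketched directly; the log of $\mbox{RPLCV}(h)$ is computed instead of the product for numerical stability. This is an illustrative implementation, not the NPStat (discretized) one:

```python
import numpy as np

def log_rplcv(sample, h, alpha=0.5):
    """log RPLCV(h), Eq. (rplcv), for fixed-bandwidth Gaussian KDE. Each
    leave-one-out density is floored at p_self,i / N^alpha, where
    p_self,i = K(0)/(N h) is the point's own contribution."""
    x = np.asarray(sample, dtype=float)
    N = len(x)
    d = x[:, None] - x[None, :]
    K = np.exp(-d * d / (2 * h * h)) / (h * np.sqrt(2 * np.pi))
    loo = (K.sum(axis=1) - np.diag(K)) / (N - 1)   # leave-one-out estimates
    floor = np.diag(K) / N / N**alpha              # regularization floor
    return np.sum(np.log(np.maximum(loo, floor)))
```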
Cross-validation in the NPStat package is implemented for discretized
KDE and LOrPE density estimators.
It is assumed that the optimal bandwidth corresponds
to the maximum of some quantity, as in the case of $\mbox{RPLCV}(h)$.
All classes which perform cross-validation for univariate densities
inherit from the abstract base class \cname{AbsBandwidthCV1D}. For
multivariate densities, the corresponding base class is \cname{AbsBandwidthCVND}
(both of these base classes are declared in the header file
``npstat/stat/AbsBandwidthCV.hh''). The following concrete classes
can be used:
\cname{BandwidthCVLeastSquares1D} --- implements Eq.~\ref{eq:lscv} for
KDE and LOrPE in 1-d.
\cname{BandwidthCVLeastSquaresND} --- implements Eq.~\ref{eq:lscv} for
multivariate KDE and LOrPE.
\cname{BandwidthCVPseudoLogli1D} --- implements Eq.~\ref{eq:rplcv} for
KDE and LOrPE in 1-d.
\cname{BandwidthCVPseudoLogliND} --- implements Eq.~\ref{eq:rplcv} for
multivariate KDE and LOrPE.
\noindent The cross-validation classes are used internally by such high-level
classes as \cname{KDECopulaSmoother}, \cname{LOrPECopulaSmoother}, and
\cname{SequentialCopulaSmoother}.
\subsection {The Nearest Neighbor Method}
Using NPStat tools, a simple density estimation algorithm similar to
the $k$-nearest neighbor method~\cite{ref:silverman}
can be implemented for one-dimensional samples as follows:
\begin{thinlist}
\item Discretize the data using a finely binned histogram.
\item Convert this histogram into a distribution by constructing a \cname{BinnedDensity1D} object.
\item For any point $x$ at which a density estimate is desired,
calculate the corresponding cumulative distribution value $y = F(x)$.
\item For some interval $\Delta$, $0 < \Delta < 1$, estimate the density
at $x$ by
\begin{equation}
\hat{p}\sub{NN}(x) = \frac{\Delta}{q(y + \Delta/2)\, - \,q(y - \Delta/2)},
\end{equation}
where $q(y)$ is the quantile function: $q(F(x)) = x$.
This formula assumes $y - \Delta/2 \ge 0$ and $y + \Delta/2 \le 1$.
If $y + \Delta/2 > 1$ then the $\hat{p}\sub{NN}(x)$ denominator should
be replaced by $q(1)\, - \,q(1 - \Delta)$, and if $y - \Delta/2 < 0$
then the denominator should become $q(\Delta)\, - \,q(0)$.
\end{thinlist}
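The steps above can be sketched with the empirical CDF and quantile function standing in for the \cname{BinnedDensity1D} machinery (a simplified illustration, not NPStat code):

```python
import numpy as np

def nn_density(sample, x, delta=0.1):
    """Nearest-neighbor-style estimate:
    p(x) = delta / (q(y + delta/2) - q(y - delta/2)), with y = F(x),
    including the boundary adjustments described in the text."""
    s = np.sort(np.asarray(sample, dtype=float))
    y = np.searchsorted(s, x) / len(s)       # empirical CDF value F(x)
    lo, hi = y - delta / 2, y + delta / 2
    if hi > 1.0:                             # upper-boundary branch
        lo, hi = 1.0 - delta, 1.0
    elif lo < 0.0:                           # lower-boundary branch
        lo, hi = 0.0, delta
    return delta / (np.quantile(s, hi) - np.quantile(s, lo))
```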
In this approach, the parameter $\Delta$ plays the same role as the $k/N$ ratio
in the standard $k$-nearest neighbor method.
For best results, $\Delta$ should scale with the number
of sample points as $N^{-1/5}$,
and the optimal constant of proportionality depends on
the estimated density itself~\cite{ref:silverman}.
The $k$-nearest neighbor method (and its modification just described)
is not recommended for estimating
densities with infinite support as it leads to a diverging
density integral.
For multivariate samples, a similar estimate can be constructed with
the help of \cname{HistoNDCdf} class. Its method \cname{coveringBox}
can be used to find the smallest $d$-dimensional box with the given center
and fixed proportions which encloses the desired sample fraction.
\section{Nonparametric Regression}
\label{sec:npregression}
``Regression'' refers to a form of data analysis
in which the behavior of the dependent variable, called ``response'',
is deduced as a function of the independent variable, called ``predictor'',
from a sample of observations.
The response values are considered random ({\it e.g.}, contaminated by
noise) while the predictor values are usually assumed to be deterministic.
The analysis purpose is thus to determine the location parameter
of the response distribution
(mean, median, mode, {\it etc}) as a function of the predictor.
In the NPStat algorithms, the response is always assumed to be a
univariate quantity while the predictor can be either univariate or
multivariate. In the discussion below, predictor will be denoted
by ${\bf x}$, response by $y$, $\mu({\bf x})$ will be used
to describe the response location function, and $\hat{\mu}({\bf x})$
will denote an~estimator of $\mu({\bf x})$.
``Nonparametric regression'' refers to a form of regression analysis
in which no global parametric model is postulated for $\mu({\bf x})$.
Instead, for every ${\bf x}\sub{fit}$, $\mu({\bf x})$ is described
in the vicinity of ${\bf x}\sub{fit}$ by a relatively
simple model which is fitted using sample points located nearby.
Further discussion of $\mu({\bf x})$ estimation depends critically
on the assumptions which can be made about the distribution of
response values.
\subsection{Local Least Squares}
With the additional assumption of Gaussian error distribution ({\it i.e.,}
$y_i = \mu({\bf x}_i) + \epsilon_i$, where $\epsilon_i$ are normally distributed
with mean 0 and standard deviation $\sigma_i$), the
model fitting can be efficiently performed by the method of local
least squares. In this method, $\hat{\mu}({\bf x})$ is found by
minimizing the quantity:
\begin{equation}
\chi^2({\bf x}\sub{fit}, h) = \sum_{i=0}^{N-1} \left(\frac{y_i - \hat{\mu}({\bf x}_i|{\bf x}\sub{fit},h)}{\sigma_i}\right)^2 K_h({\bf x}_i - {\bf x}\sub{fit}).
\label{eq:localleastsq}
\end{equation}
Here, $h$ refers to one or more parameters which determine
the extent of the kernel $K_h({\bf x})$.
In NPStat, $\hat{\mu}({\bf x}|{\bf x}\sub{fit},h)$ is usually decomposed
into orthogonal polynomials. In the case of univariate predictor,
\begin{equation}
\hat{\mu}(x|x\sub{fit},h) = \sum_{k=0}^{M} \hat{a}_k(x\sub{fit},h) P_k\left(\frac{x - x\sub{fit}}{h}\right),
\label{eq:regmodel}
\end{equation}
where polynomials $P_k(x)$ are subject to normalization condition~\ref{eq:norm0}.
The expansion coefficients $\hat{a}_k(x\sub{fit},h)$ are determined from
the equations $\partial \chi^2(x\sub{fit},h)/ \partial \hat{a}_k = 0$,
which lead to $M + 1$ simultaneous equations for $k = 0, 1, ..., M$:
\begin{equation}
\sum_{i=0}^{N-1} \frac{1}{\sigma_i^2} \left( y_i - \sum_{j=0}^{M} \hat{a}_j(x\sub{fit},h) P_j\left(\frac{x_i - x\sub{fit}}{h}\right) \right) K\left(\frac{x_i - x\sub{fit}}{h}\right) P_k\left(\frac{x_i - x\sub{fit}}{h}\right) = 0.
\label{eq:regsystem}
\end{equation}
If the predictor values $x_i$ are specified on a regular grid of points
then the discretized version of Eq.~\ref{eq:norm0} is just
\begin{equation}
\frac{h_g}{h} \sum_{i=0}^{N-1} P_j\left(\frac{x_i - x\sub{fit}}{h} \right) K\left(\frac{x_i - x\sub{fit}}{h}\right) P_k\left(\frac{x_i - x\sub{fit}}{h}\right) = \delta_{jk},
\end{equation}
where $h_g$ is the distance between any two adjacent values of $x_i$.
This leads to a particularly simple solution for $\hat{a}_k(x\sub{fit},h)$
if the model can be assumed at least locally homoscedastic
({\it i.e.,} if all $\sigma_i$ are the same in the vicinity of $x\sub{fit}$):
\begin{equation}
\hat{a}_k(x\sub{fit},h) = \frac{h_g}{h} \sum_{i=0}^{N-1} y_i K\left(\frac{x_i - x\sub{fit}}{h}\right) P_k\left(\frac{x_i - x\sub{fit}}{h}\right).
\label{eq:regconvol}
\end{equation}
Substituting this into Eq.~\ref{eq:regmodel}, one gets
\begin{equation}
\hat{\mu}(x\sub{fit} | h) = \frac{h_g}{h} \sum_{i=0}^{N-1} \sum_{k=0}^{M} y_i K\left(\frac{x_i - x\sub{fit}}{h}\right) P_k\left(\frac{x_i - x\sub{fit}}{h}\right) P_k(0).
\label{eq:effreg}
\end{equation}
If $K(x)$ is an even function and $x\sub{fit}$ is far away from
the boundaries of the interval on which the regression is performed,
Eq.~\ref{eq:effreg} is equivalent to the well-known Nadaraya-Watson estimator,
\begin{equation}
\hat{\mu}_{NW}(x\sub{fit} | h) = \frac{\sum_{i=0}^{N-1} y_i K\sub{eff}\left(\frac{x_i - x\sub{fit}}{h}\right)}{\sum_{i=0}^{N-1} K\sub{eff}\left(\frac{x_i - x\sub{fit}}{h}\right)},
\end{equation}
with the effective kernel given by
\begin{equation}
K\sub{eff}(x) = \sum_{k=0}^{M} P_{k}(0) P_{k}(x) K(x).
\end{equation}
Just as in the case of LOrPE, a taper function can be introduced
for this kernel, which leads to Eq.~\ref{eq:effk}.
If the predictor values $x_i$ are arbitrary or if the model cannot
be considered homoscedastic even locally, there is no
simple formula which solves the linear system~\ref{eq:regsystem}.
In this case the equations must be solved numerically.
To perform this calculation, NPStat calls appropriate routines
from LAPACK~\cite{ref:lapack}.
% Modeling the response by orthogonal polynomial series
% (as opposed to monomial expansion) helps to prevent ill-conditioning
% of the system and significantly improves the numerical stability
% of the solutions.
Generalization of local least squares methods to multivariate predictors
is straightforward: one simply switches to multivariate kernels
and polynomial systems.
The following NPStat classes can be used to perform local least
squares regression of locally homoscedastic polynomial models
on regular grids:
\cname{LocalPolyFilter1D}, \cname{LocalPolyFilterND}, and \cname{SequentialPolyFilterND} --- these
classes have already been mentioned in Section~\ref{sec:lorpe}.
It turns out that LOrPE of discretized data essentially amounts to
local least squares regression on histogram bin contents. To see that,
compare Eqs.~\ref{eq:regconvol} and~\ref{eq:convol}.
Up to an overall normalization
constant, Eq.~\ref{eq:regconvol} is just a discretized version of Eq.~\ref{eq:convol}.
In fact, all three ``PolyFilter'' classes actually perform local least squares regression
which, in the signal analysis terminology, is a linear filtering procedure. If the result
is to be treated as a density, it has to be truncated below zero and renormalized by the user.
\cname{QuadraticOrthoPolyND} --- this class supports a finer interface to the
local least squares regression functionality on a grid than \cname{LocalPolyFilterND},
but only for polynomials up to second degree. In addition to the response
itself, this class can be used to calculate the gradient
and the Hessian of the local response surface defined by
Eq.~\ref{eq:regmodel}\footnote{This can be useful, for example,
for summarizing properties of log-likelihoods defined on grids
in the space of estimated parameters for some parametric
statistical models.}.
The predictor/response data can be
provided by a callback method of some user class,
which is sometimes more convenient
than using just a~grid of points.
\cname{LocalQuadraticLeastSquaresND} --- this class fits local
least squares regression models for arbitrary multivariate predictor values
(no longer required to be on a grid).
The models can be heteroscedastic, as in the most general case of Eq.~\ref{eq:regsystem}.
The polynomials can be at most quadratic. Calculation of the gradient
and the Hessian of the local response surface is supported.
\subsection{Local Logistic Regression}
\label{sec:locallog}
For regressing binary response variables,
NPStat implements a method known as ``local quadratic logistic regression''
(LQLR). This method is a trivial extension of the local linear
logistic regression originally proposed in~\cite{ref:localreg}.
In this type of analysis, ``response'' is the probability of
success in a Bernoulli trial, estimated as a function of one or more
predictors. Due to the manner in which it is often used, this probability
will be called ``efficiency'' for the remainder of this section.
In the LQLR model, the efficiency dependence on ${\bf x}$ is represented by
\begin{equation}
\epsilon({\bf x}) = \frac{1}{1 + e^{-P({\bf x})}},
\end{equation}
where $P({\bf x})$ is a multivariate quadratic polynomial whose
coefficients are determined at each predictor value ${\bf x}\sub{fit}$
by maximizing the local log-likelihood:
\begin{equation}
L({\bf x}\sub{fit}, h) = \sum_{i=0}^{N-1} K_h({\bf x}_{i} - {\bf x}\sub{fit}) \left[y_i \ln (\hat{\epsilon}({\bf x}_{i})) + (1 - y_i) \ln (1 - \hat{\epsilon}({\bf x}_{i})) \right].
\label{eq:lqlrlogli}
\end{equation}
Here,
$K_h({\bf x}_{i} - {\bf x}\sub{fit})$ is a~suitable localizing kernel which decays to 0
when ${\bf x}_{i}$ is far from ${\bf x}\sub{fit}$, and $y_i$ are the observed values
of the Bernoulli trial: 1 if the point ``passes'' and 0 if it ``fails''.
The local log-likelihood~\ref{eq:lqlrlogli} is very similar to the one
implemented in the Locfit package~\cite{ref:locfit}; the only difference
is that orthogonal polynomials are used in NPStat to construct
the $P({\bf x})$ series instead of monomials.
Setting partial derivatives of $L({\bf x}\sub{fit}, h)$ with respect to
polynomial coefficients to 0 results in a system of nonlinear
equations for these coefficients. Solving such a system of equations
does not appear to be any easier than dealing with the original log-likelihood
optimization problem by applying, say, the Levenberg-Marquardt
algorithm~\cite{ref:lvm}.
NPStat includes facilities for efficient calculation of the LQLR
log-likelihood together with
its gradient with respect to $P({\bf x})$ expansion coefficients.
This code does not rely on any specific optimization solver,
and can be easily interfaced to a~number of different external
optimization tools. The relevant classes are:
\cname{LogisticRegressionOnKDTree} (header file ``npstat/stat/LocalLogisticRegression.hh'') --- this class calculates the log-likelihood
from Eq.~\ref{eq:lqlrlogli} under the assumption that the kernel $K_h({\bf x})$
has finite support. In this case iterating over all $N$ points in the sample
becomes rather inefficient: there is no reason to cycle over values of
${\bf x}_{i}$ far away from ${\bf x}\sub{fit}$ because
$K_h({\bf x}_{i} - {\bf x}\sub{fit})$ is identically zero for all such points.
To automatically restrict the iterated range, the predictor values
are arranged into a space-partitioning data structure known
as $k$-$d$~tree~\cite{ref:kdtree} which is implemented in NPStat with the
\cname{KDTree} class (header file ``npstat/nm/KDTree.hh'').
\cname{LogisticRegressionOnGrid} (header file ``npstat/stat/LocalLogisticRegression.hh'') --- this class calculates the log-likelihood
from Eq.~\ref{eq:lqlrlogli} under the assumption that the predictor values
are histogrammed. Two identically binned histograms must be available: the one
with all values of $y_i$ (``the denominator'') and the one which collects
only those ${\bf x}_{i}$ for which $y_i = 1$ (``the numerator''). Naturally,
only the bins sufficiently close to ${\bf x}\sub{fit}$ are
processed when the LQLR log-likelihood is evaluated for these histograms.
The ``interfaces'' directory of the NPStat package includes two
high-level driver functions for fitting LQLR response
surfaces (header file
``npstat/interfaces/minuitLocalRegression.hh'').
These functions employ a general-purpose
optimization package Minuit~\cite{ref:minuit} for maximizing
the log-likelihood. The names of the functions are
\cname{minuitUnbinnedLogisticRegression}
(intended for use with \cname{LogisticRegressionOnKDTree})
and \cname{minuitLogisticRegressionOnGrid}
(for use with \cname{LogisticRegressionOnGrid}).
To use these functions, the Minuit
package has to be compiled and linked together
with the user code which provides the data
and calls the functions.
\subsection{Local Quantile Regression}
\label{sec:localqreg}
The method of least squares allows us to solve the
problem of response mean determination in the regression
context. By recasting calculation of the sample
mean as a minimization problem, we have gained the
ability to condition the mean
on the value of the predictor. In a~similar manner,
the method of least absolute deviations can be used
to determine conditional median.
In the method of least squares,
the expression $S(f) = \sum_{i} f(y_i - \hat{\mu}({\bf x_i}))$
is minimized, with $f(t) = t^2$. The method of least absolute
deviations differs only by setting $f(t) = |t|$.
Moreover, just like the problem of determination
of response mean can be localized
by introducing a kernel in the predictor space
(resulting in local least squares, as in Eq.~\ref{eq:localleastsq}),
the problem of response median determination
can be subjected to the same localization treatment.
As a method of determination of response location,
local median regression is extremely robust (insensitive to outliers).
Not only the median but also an arbitrary
distribution quantile can be determined in this manner.
The corresponding function to use is
\begin{equation}
f_q(t) = q \,t \,I(t \ge 0) - (1 - q) \,t \,I(t < 0),
\end{equation}
where $q$ is the cumulative distribution value of interest, $0 < q < 1$.
You can easily convince yourself of the validity of this statement
as follows. For a sample of points $\{y_0, \ldots, y_{N-1}\}$,
define $t = y - y_q$, where $y_q$ is a parameter on which $S(f_q)$ depends.
The condition for the minimum of $S(f_q)$, $\frac{d S(f_q)}{d y_q} = 0$,
is then equivalent to $\frac{d S(f_q)}{d t} = 0$.
As $\frac{d f_q(t)}{d t} = q \,I(t > 0) - (1 - q) \,I(t < 0)$, the minimum
of $S(f_q)$ is reached when
\begin{equation}
q \sum_{i=0}^{N-1} I(y_i > y_q) - (1 - q) \sum_{i=0}^{N-1} I(y_i < y_q) = 0.
\label{eq:qmin}
\end{equation}
This equation is solved when the number of sample points for which
$y_i > y_q$ is $(1 - q) N$ and the number of sample points for which
$y_i < y_q$ is $q N$. But this is precisely the definition of
the sample quantile which, in the limit $N \rightarrow \infty$, becomes
the population quantile of interest.
Unfortunately, solving Eq.~\ref{eq:qmin} numerically
in the regression context is usually significantly more
challenging than solving the corresponding $\chi^2$ minimization
problem. For realistic finite samples, $\frac{d S(f_q)}{d y_q}$ is not
a continuous function, and Eq.~\ref{eq:qmin} can have multiple solutions.
This basically rules out the use of standard gradient-based methods
for $S(f_q)$ minimization.
The following NPStat classes facilitate the solution of the local
quantile regression problem:
\cname{QuantileRegression1D} --- calculates the expression to
be minimized (also called the ``loss function'') for a single univariate
predictor value $x\sub{fit}$. This expression looks as follows:
\begin{equation}
S\sub{LQR}(x\sub{fit}) = \sum_{i=0}^{N-1} f_q(y_i - \hat{y}_q(x_i|x\sub{fit})) w_i,
\end{equation}
where weights $w_i$ are provided by the user (for example, these
weights can be calculated as $w_i = K((x_i - x\sub{fit})/h)$, but more sophisticated
weighting schemes can be applied as well). The quantile dependence on the
predictor is modeled by
\begin{equation}
\hat{y}_q(x|x\sub{fit}) = \sum_{k=0}^{M} \hat{a}_k(x\sub{fit}) P_k\left(\frac{x - x\sub{fit}}{h}\right),
\label{eq:quantpred}
\end{equation}
where $P_k(x)$ are either Legendre or Gegenbauer polynomials. The use of Legendre
polynomials is appropriate for a global fit of the regression curve, while
Gegenbauer polynomials are intended for use in combination with symmetric beta
kernels that generate localizing weights~$w_i$. The expansion coefficients
$\hat{a}_k(x\sub{fit})$ are to be determined by minimizing $S\sub{LQR}(x\sub{fit})$.
\cname{QuantileRegressionOnKDTree} (header file ``npstat/stat/LocalQuantileRegression.hh'')
--- calculates the quantile regression
loss function for a multivariate predictor (the class method
which returns it is called \cname{linearLoss}):
\begin{equation}
S\sub{LQR}({\bf x}\sub{fit}, h) = \sum_{i=0}^{N-1} f_q(y_i - \hat{y}_q({\bf x}_i|{\bf x}\sub{fit})) K_h({\bf x}_i - {\bf x}\sub{fit}).
\label{eq:quantregnd}
\end{equation}
The quantile dependence on the predictor is modeled by an expansion similar to
Eq.~\ref{eq:quantpred} in which multivariate orthogonal polynomials
(up to second order) are generated by $K_h({\bf x})$ used as the weight
function. The expansion coefficients are to be determined for each
${\bf x}\sub{fit}$ separately by minimizing
$S\sub{LQR}({\bf x}\sub{fit}, h)$. The predictor values are arranged into
a~$k$-$d$~tree structure for reasons similar to those mentioned when
the \cname{LogisticRegressionOnKDTree} class was described.
\cname{QuantileRegressionOnHisto} (header file ``npstat/stat/LocalQuantileRegression.hh'')
--- calculates loss function~\ref{eq:quantregnd}
under the assumption that the response is histogrammed on a regular
grid of predictor values (so that the input histogram dimensionality is larger
by one than the dimensionality of the predictor variable).
\cname{CensoredQuantileRegressionOnKDTree} (header ``npstat/stat/CensoredQuantileRegression.hh'')
--- this class constructs an
appropriate quantile regression loss function in case some of the response
values are unavailable due to censoring ({\it i.e.,} there is a cutoff
from above or from below on the response values).
For example, this situation can occur in modeling of jet response of a
particle detector when jets with visible energy below a certain cutoff
are not reconstructed due to limitations in the clustering procedure.
It is assumed that the censoring efficiency ({\it i.e.,} the fraction
of points not affected by the cutoff) can be determined as a~function
of the predictor by other techniques, like the local logistic regression
described in section~\ref{sec:locallog}. It is also assumed
that the cutoff value is known as a~function of the predictor, and that
the presence of this cutoff is the only reason for inefficiency.
The appropriate loss function in this case is
\begin{equation}
S\sub{CLQR}({\bf x}\sub{fit}, h) = \sum_{i=0}^{N\sub{p}-1} \left[f_q(y_i - \hat{y}_q({\bf x}_i|{\bf x}\sub{fit})) + \frac{1 - \epsilon_i}{\epsilon_i} g_q(\epsilon_i, y_{\mbox{\scriptsize cut},i} - \hat{y}_q({\bf x}_i|{\bf x}\sub{fit}))\right] K_h({\bf x}_i - {\bf x}\sub{fit}),
\label{eq:clqr}
\end{equation}
where the summation is performed only over the $N\sub{p}$ points
in the sample surviving the cutoff, and $\epsilon_i$ is the censoring
efficiency for the given ${\bf x}_i$. The function
$g_q(\epsilon, t)$ is defined differently for right-censored (R-C) samples
in which surviving points are below the cutoff and left-censored (L-C) samples
in which surviving points are above the cutoff:
\begin{equation}
g_{q,\mbox{\scriptsize R-C}}(\epsilon, t) = I(q < \epsilon) f_q(t) + I(q \ge \epsilon) \left(\frac{q - \epsilon}{1 - \epsilon} f_q(t) + \frac{1 - q}{1 - \epsilon} f_q(+\infty) \right),
\label{eq:clqrrc}
\end{equation}
\begin{equation}
g_{q,\mbox{\scriptsize L-C}}(\epsilon, t) = I(q > 1 - \epsilon) f_q(t) + I(q \le 1 - \epsilon) \left( \frac{1 - \epsilon - q}{1 - \epsilon} f_q(t) + \frac{q}{1 - \epsilon} f_q(-\infty) \right).
\label{eq:clqrlc}
\end{equation}
Formulae~\ref{eq:clqr}, \ref{eq:clqrrc}, and~\ref{eq:clqrlc} were inspired
by Ref.~\cite{ref:localquantreg}. They can be understood as follows. Consider, for example,
a right-censored sample. If this sample were not censored, the value of the
response cumulative distribution function at $y_{\mbox{\scriptsize cut},i}$
would be equal to $\epsilon_i$.
For each point with response $y_i$ below the cutoff,
there are $(1 - \epsilon_i)/\epsilon_i$ unobserved points above the cutoff (and $1/\epsilon_i$ points total).
Before localization, points below
the cutoff contribute the usual amount $f_q(y_i - \hat{y}_q({\bf x}_i|{\bf x}\sub{fit}))$
into $S\sub{CLQR}({\bf x}\sub{fit}, h)$ (compare with
Eq.~\ref{eq:quantregnd}). To the points above cutoff, we assign
the value of response which equals either
$y_{\mbox{\scriptsize cut},i}$ or $+\infty$, in such a manner that the estimate
$\hat{y}_q({\bf x}_i|{\bf x}\sub{fit})$ is ``pushed'' in the
right direction and by the right amount when it crosses $y_{\mbox{\scriptsize cut},i}$.
If the estimated quantile is less than the efficiency, all unobserved points
are assigned the response value $y_{\mbox{\scriptsize cut},i}$
(this corresponds to the term $I(q < \epsilon) f_q(t)$ in Eq.~\ref{eq:clqrrc}). This is
because we know that the correct value of $\hat{y}_q({\bf x}_i|{\bf x}\sub{fit})$
should be below $y_{\mbox{\scriptsize cut},i}$, so the penalty for placing
$\hat{y}_q({\bf x}_i|{\bf x}\sub{fit})$ above the cutoff is generated
by all points, including the unobserved ones. If the chosen quantile is
larger than the efficiency, the only thing we know is that the correct value of
$\hat{y}_q({\bf x}_i|{\bf x}\sub{fit})$ should be inside the interval
$(y_{\mbox{\scriptsize cut},i}, +\infty)$. There is no reason to prefer
any particular value from this interval, so the overall contribution of
sample point $i$ for which $\hat{y}_q({\bf x}_i|{\bf x}\sub{fit}) \in (y_{\mbox{\scriptsize cut},i}, +\infty)$
into $S\sub{CLQR}({\bf x}\sub{fit}, h)$ must not depend on
$\hat{y}_q({\bf x}_i|{\bf x}\sub{fit})$\footnote{If most points ${\bf x}_i$
are like that for some ${\bf x}\sub{fit}$ value, the fit becomes
unreliable. Consider increasing the bandwidth of the localization
kernel or avoid such situations altogether. You should not expect
to obtain good results for high (low) $q$ values and all possible
${\bf x}\sub{fit}$ in a right (left)-censored sample.}.
We do, however, want to prevent $\hat{y}_q({\bf x}_i|{\bf x}\sub{fit})$ from leaving
this interval. This is achieved by placing so many points at $y_{\mbox{\scriptsize cut},i}$
that the fraction of sample points (including both observed and unobserved ones)
at or below $y_{\mbox{\scriptsize cut},i}$ is exactly $q$, and this is precisely
what the term proportional to $I(q \ge \epsilon)$ does in Eq.~\ref{eq:clqrrc}.
Similar reasoning applied to a left-censored sample leads to Eq.~\ref{eq:clqrlc}.
Naturally, in the computer program, $-\infty$ and $+\infty$ in Eqs.~\ref{eq:clqrrc} and~\ref{eq:clqrlc}
should be replaced by suitable user-provided numbers which are known to be below and above,
respectively, all possible response values. In order to avoid deterioration
in $S\sub{CLQR}({\bf x}\sub{fit}, h)$ numerical precision, these numbers should not be
very different from the minimum and maximum observed response.
\cname{CensoredQuantileRegressionOnHisto} (header ``npstat/stat/CensoredQuantileRegression.hh'')
--- this class calculates
the loss function~\ref{eq:clqr}
under the assumption that all information about response, efficiency, and cutoffs is
provided on a~regular grid in the space of predictor values.
The ``interfaces'' directory of the NPStat package includes several high-level
driver functions for performing local quantile regression. These driver functions
use the simplex minimization method of the Minuit package to perform local fitting of the
quantile curve expansion coefficients.
The \cname{minuitLocalQuantileRegression1D} function uses the \cname{QuantileRegression1D}
class internally to perform local quantile regression with one-dimensional predictors.
This driver function can be used, for example, to construct Neyman belts from
numerical simulations of some statistical estimator.
A similar function, \cname{weightedLocalQuantileRegression1D}, can be used
to perform local quantile regression with one-dimensional predictors when
the points are weighted.
The \cname{minuitQuantileRegression} driver function can use one
of the \cname{QuantileRegressionOnKDTree}, \cname{QuantileRegressionOnHisto},
\cname{CensoredQuantileRegressionOnKDTree}, or
\cname{CensoredQuantileRegressionOnHisto} classes
(all of which inherit from a common base)
to perform local quantile regression with multivariate predictors.
The function \cname{minuitQuantileRegressionIncrBW} has similar
functionality but it can also automatically increase the localization kernel
bandwidth so that not less than a certain predefined fraction of the whole
sample participates in the local quantile determination
for each ${\bf x}\sub{fit}$.
\subsection{Iterative Local Least Trimmed Squares}
The NPStat package includes an implementation of an iterative
local least trimmed
squares (ILLTS) algorithm applicable when predictor/response
values are supplied on a regular grid of points in a multivariate
predictor space (think image denoising). The algorithm operation
consists of the following steps:
\begin{enumerate}
\item Systems of orthogonal polynomials up to user-defined degree $M$
are constructed using weight functions $K_{h,-j}({\bf x})$.
These weight functions are defined using symmetric
finite support kernels in which the point in the kernel
center together with $j - 1$ other points are set to 0.
Imagine, for example, a $5 \times 5$ grid in two dimensions.
Start with the uniform $K_h({\bf x})$ kernel which is 1 at every
grid point. The $K_{h,-1}({\bf x})$ weight function
is produced from $K_h({\bf x})$ by setting the central grid
cell to 0. The 24 weight functions of type $K_{h,-2}({\bf x})$
are produced from $K_{h,-1}({\bf x})$ by setting one other
grid cell to 0, in addition to the central one. The 276
weight functions of type $K_{h,-3}({\bf x})$ are produced
from $K_{h,-1}({\bf x})$ by setting to 0 two of the 24 remaining
cells, and so on.
\item Local least squares regression is performed using all
polynomial systems generated by $K_{h,-j}({\bf x})$ weights
for all possible positions of the kernel inside the predictor
grid (sliding window)\footnote{Edge effects are taken into
account by constructing special polynomial systems
which use boundary kernels similar to
$K_{h,-j}({\bf x})$ but with different placement of zeros.}.
For each kernel position,
we find the polynomial system which produces the best
$\chi^2$ calculated over grid points for
which the weight function is not 0. For this polynomial
system, we determine the value
$\Delta = |y_c - y_{c,\mbox{\scriptsize fit}}|$ for the
kernel center, where $y_c$ is the response value in
the data sample and $y_{c,\mbox{\scriptsize fit}}$ is
the response value produced by the local least squares fit.
\item The position of the kernel is found for which
$\Delta$ is the largest in the whole data sample.
\item The response for the position found
in the previous step is adjusted by setting it
to the value fitted at the kernel center.
\item $\Delta$ is recalculated for all kernel positions affected by
the adjustment performed in the previous step.
\item The previous three steps are repeated until some stopping
criterion is satisfied. For example, the requirement that
the largest $\Delta$ in the grid becomes small (below a certain
cutoff) can serve as such a stopping criterion.
\end{enumerate}
The ILLTS algorithm works best if the fraction of outliers in the
sample is relatively small and the response errors are homoscedastic.
ILLTS is expected to be more efficient (in the statistical sense) than local
quantile regression. If the spectrum of response errors is not known
in advance, it becomes very instructive to plot the history
of $\Delta$ values for which response adjustments were performed.
This plot often exhibits two characteristic ``knees'' which
correspond to the suppression of outliers and suppression of ``normal''
response noise. The procedure can be stopped somewhere
between these knees and followed up by normal local
least squares on the adjusted sample, perhaps utilizing
different bandwidth.
Unfortunately, the computational complexity of the ILLTS
algorithm increases exponentially
with $j$, so only very small values of $j$ are
practical. The inability to use high values of $j$ can
be partially compensated for by choosing smaller bandwidth
values and performing more iteration cycles. More cycles
result in larger {\it effective} bandwidth --- think, for example,
what happens when you perform local least squares multiple
times. For certain kernels like Gaussian or Cauchy
({\it i.e.,} stable distributions) and polynomial degree $M = 0$ (local
constant fit),
this is exactly equivalent to choosing larger bandwidth.
For other types of kernels not only the effective bandwidth increases
with the number of passes
but also the effective kernel shape gets modified\footnote{ILLTS adds
the dimension of time (adjustment cycle number) to the
solution of the robust regression problem. It would be interesting to explore
this, for example, by introducing time-dependent bandwidth or
by studying the connection between the ILLTS updating scheme and
the numerous updating schemes employed in solving
partial differential equations.}.
The top-level API function for running the ILLTS algorithm is called
\cname{griddedRobustRegression}. The $K_{h,-1}({\bf x})$ weights
can be used by supplying an object of \cname{WeightedLTSLoss} type
as its loss calculator argument, and $K_{h,-2}({\bf x})$ weights
are used by choosing \cname{TwoPointsLTSLoss} instead. A simple
stopping criterion based on the $\Delta$ value, local least trimmed
squares $\chi^2$, or the number of adjustment cycles can be specified
with an object of \cname{GriddedRobustRegressionStop} type.
The \cname{griddedRobustRegression} implementation
is rather general, and can accept user-developed loss calculators and
stopping criteria.
\subsection{Organizing Regression Results}
It is usually desirable to calculate the regression
surface on a reasonably fine predictor grid and save the
result of this calculation for subsequent
fast lookup. A general interface for such a lookup is provided
by the \cname{AbsMultivariateFunctor} class
(header file ``npstat/nm/AbsMultivariateFunctor.hh''). This class
can be used to represent results of both parametric and
nonparametric fits.
Persistent classes \cname{StorableInterpolationFunctor} and
\cname{StorableHistoNDFunctor} derived from \cname{AbsMultivariateFunctor}
are designed to represent
nonparametric regression results. Both of these classes
assume that the regression was
performed on a (hyper)rectangular grid of predictor
points. \cname{StorableInterpolationFunctor}
supports multilinear interpolation and extrapolation of results,
with flexible extrapolation (constant or linear) for each
dimension beyond the grid boundaries. This class essentially
combines the \cname{AbsMultivariateFunctor} interface with
the functionality of the \cname{LinInterpolatedTableND} class
discussed in more detail in Section~\ref{sec:utilities}.
The class \cname{StorableHistoNDFunctor} allows for constant,
multilinear, and multicubic interpolation\footnote{Multicubic
interpolation is supported for uniform grids only.}
inside the grid boundaries, and for constant extrapolation outside
the boundaries (the extrapolated response value is set to its value at
the closest boundary point). Both \cname{StorableInterpolationFunctor}
and \cname{StorableHistoNDFunctor} support arbitrary transformations
of the response variable via a user-provided functor. If a
transformation was initially applied to the
response values in order to simplify subsequent modeling,
this is a good place to perform the inverse.
\section{Pseudo- and Quasi-Random Numbers}
\label{sec:randomgen}
The C++11 Standard~\cite{ref:cpp11} defines an API for generating
pseudo-random numbers. Unfortunately, this API
suffers from a disconnect between modeling of statistical
distributions (including densities, cumulative distributions, {\it etc})
and generation of random numbers. Moreover,
quasi-random numbers~\cite{ref:Niederreiter} useful for
a large variety of simulation and data analysis purposes
are not represented by the Standard,
and there is no meaningful support for generating genuinely
multivariate random sequences.
NPStat adopts a different approach towards generation of pseudo-random,
quasi-random,
and non-random sequences ---
the one that is more appropriate in the context
of a~statistical
analysis package. A small number of high-quality generators is
implemented for
producing such sequences on a unit $d$-dimensional cube.
All such generators inherit from the same abstract base class
\cname{AbsRandomGenerator} (header file ``npstat/rng/AbsRandomGenerator.hh'').
Generators developed or ported by users can be seamlessly
added as well. Conversion of uniformly distributed sequences into other
types of distributions and to different support regions is performed
by the classes that represent statistical distributions --- in particular,
by those classes which inherit from \cname{AbsDistribution1D} and
\cname{AbsDistributionND} bases. Both of these bases have a virtual
method \cname{random} which takes a sequence generator instance as an
input and produces correspondingly distributed numbers (random or not)
on output.
By default, transformation of sequences is performed by the \cname{quantile}
method for one-dimensional distributions and by the \cname{unitMap}
method for multivariate ones. However, it is expected that the
\cname{random} method itself will be overridden by the derived classes
when it is easier, for example, to make
a random sequence with the desired properties by the acceptance-rejection
technique. The following functions and classes are implemented:
\cname{MersenneTwister} (header file ``npstat/rng/MersenneTwister.hh'')
--- generates pseudo-random numbers using the
Mersenne Twister algorithm~\cite{ref:mercenne}.
\cname{SobolGenerator} (header file ``npstat/rng/SobolGenerator.hh'')
--- generates Sobol quasi-random
sequences~\cite{ref:sobol}.
\cname{HOSobolGenerator} (header file ``npstat/rng/HOSobolGenerator.hh'')
--- generates higher order scrambled Sobol
sequences~\cite{ref:hosobol}.
\cname{RandomSequenceRepeater} (header file ``npstat/rng/RandomSequenceRepeater.hh'')
--- this class can be used to produce
multiple repetitions of sequences created by other generators
whenever an instance of \cname{AbsRandomGenerator} is needed.
The whole
sequence is simply remembered (which can take a significant
amount of memory
for large sequences) and extended as necessary
by calling the original generator.
\cname{WrappedRandomGen} (header file ``npstat/rng/AbsRandomGenerator.hh'')
--- a simple adaptor class for ``old style''
random generator functions like ``drand48()''. Implements
\cname{AbsRandomGenerator} interface.
\cname{CPP11RandomGen} (header file ``npstat/rng/CPP11RandomGen.hh'')
--- a simple adaptor class for pseudo-random
generator engines defined in the C++11 standard. Implements
\cname{AbsRandomGenerator} interface.
\cname{EquidistantSampler1D} (header file ``npstat/rng/EquidistantSampler1D.hh'')
--- generates a sequence of equidistant
points, similar to bin centers of a~histogram with axis limits at 0 and 1.
\cname{RegularSampler1D} (header file ``npstat/rng/RegularSampler1D.hh'')
--- generates a sequence of points by
splitting the $[0, 1]$ interval in half, then splitting all resulting
subintervals in half, {\it etc}. The points returned are the split locations.
Useful for generating $2^{k} - 1$ points when the integer
$k$ is not known in advance.
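The splitting order can be illustrated with the following minimal sketch (a hypothetical reimplementation of the idea, not the actual \cname{RegularSampler1D} code): level $k$ adds the odd multiples of $1/2^k$, so that after $k$ levels all $2^k - 1$ split locations of $[0, 1]$ have been produced.

```cpp
#include <vector>

// Sketch of the interval-splitting order: level k adds the odd
// multiples of 1/2^k, so nLevels levels yield 2^nLevels - 1 points.
std::vector<double> regularSampleSketch(const unsigned nLevels)
{
    std::vector<double> points;
    for (unsigned level = 1; level <= nLevels; ++level)
    {
        const double step = 1.0/(1UL << level);
        for (unsigned long odd = 1UL; odd < (1UL << level); odd += 2UL)
            points.push_back(odd*step);
    }
    return points;
}
```

For example, two levels produce the points 1/2, 1/4, 3/4 in that order.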
\cname{convertToSphericalRandom} (header file ``npstat/rng/convertToSphericalRandom.hh'')
--- converts a multivariate random
number from a unit $d$-dimensional cube into a random direction
in $d$ dimensions and one additional random number between 0 and 1.
Useful for generating random numbers according to spherically
symmetrical distributions.
\section{Algorithms Related to Combinatorics}
The NPStat package implements several functions and algorithms related to
permutations of integers and combinatorics:
\cname{factorial} (header file ``npstat/rng/permutation.hh'')
--- this function returns an exact factorial if the
result does not exceed the largest unsigned long (up to $12!$ on 32-bit
systems and up to $20!$ on 64-bit ones).
\cname{ldfactorial} (header file ``npstat/rng/permutation.hh'')
--- this function returns an approximate
factorial up to $1754!$ as a long double.
\cname{logfactorial} (header file ``npstat/rng/permutation.hh'')
--- natural logarithm of a factorial, up to $\ln((2^{32} - 1)!)$,
as a long double.
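The exact and logarithmic factorial behaviors described above can be sketched as follows (hypothetical code, not the npstat implementation; the log-factorial is emulated here with the standard log-gamma function):

```cpp
#include <cmath>
#include <limits>
#include <stdexcept>

// Exact factorial as long as the result fits into an unsigned long;
// otherwise an exception is thrown instead of silently overflowing.
unsigned long exactFactorial(const unsigned n)
{
    unsigned long result = 1UL;
    for (unsigned i = 2U; i <= n; ++i)
    {
        if (result > std::numeric_limits<unsigned long>::max()/i)
            throw std::overflow_error("factorial overflows unsigned long");
        result *= i;
    }
    return result;
}

// Log-factorial via the log-gamma function: ln(n!) = lgamma(n + 1)
long double logFactorialSketch(const unsigned long n)
{
    return std::lgamma(static_cast<long double>(n) + 1.0L);
}
```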
\cname{binomialCoefficient} (header file ``npstat/nm/binomialCoefficient.hh'')
--- calculates the binomial coefficients
$C_M^N = \frac{N!}{M! (N-M)!}$ using an algorithm which avoids
overflows.
\cname{orderedPermutation} (header file ``npstat/rng/permutation.hh'')
--- this function can be used to iterate
over permutations of numbers $\{0, 1, ..., N-1\}$ in a systematic way.
It generates a unique permutation of such numbers given a~non-negative input
integer below $N!$.
\cname{permutationNumber} (header file ``npstat/rng/permutation.hh'')
--- inverse of \cname{orderedPermutation}:
maps a permutation of the numbers $\{0, 1, ..., N-1\}$ into a unique
non-negative integer below $N!$.
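A mapping of this kind can be sketched with the factorial number system (``Lehmer code''). The code below is hypothetical and only illustrates the unranking direction; the ordering used by the actual npstat functions may differ.

```cpp
#include <vector>

// Map a non-negative integer below n! to a unique permutation of
// {0, ..., n-1}: each factorial-base digit of "rank" picks the next
// item from the pool of remaining numbers.
std::vector<unsigned> unrankPermutation(unsigned long rank, const unsigned n)
{
    std::vector<unsigned> pool(n);
    for (unsigned i = 0; i < n; ++i)
        pool[i] = i;
    std::vector<unsigned long> fact(n, 1UL);   // fact[i] = i!
    for (unsigned i = 1; i < n; ++i)
        fact[i] = fact[i - 1]*i;
    std::vector<unsigned> perm;
    perm.reserve(n);
    for (unsigned i = n; i > 0; --i)
    {
        const unsigned idx = static_cast<unsigned>(rank/fact[i - 1]);
        rank %= fact[i - 1];
        perm.push_back(pool[idx]);
        pool.erase(pool.begin() + idx);
    }
    return perm;
}
```

Rank 0 yields the identity permutation and rank $N! - 1$ the fully reversed one.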
\cname{randomPermutation} (header file ``npstat/rng/permutation.hh'')
--- generates random permutations of the numbers
$\{0, 1, ..., N-1\}$ in which every permutation
is equally probable.
\cname{NMCombinationSequencer} (header file ``npstat/stat/NMCombinationSequencer.hh'')
--- this class iterates over all
possible choices of $j_1, ..., j_M$ from
$N$ possible values for each $j_k$ in such a way that all $j_1, ..., j_M$
are distinct and appear in the sequence in increasing order,
with the last index changing most often. Naturally, the total number of
all such choices equals the number of ways to pick $M$ distinct
items out of $N$: the binomial coefficient $C_M^N$.
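The iteration order can be sketched as follows (hypothetical code illustrating lexicographic stepping only; the \cname{NMCombinationSequencer} interface itself is different):

```cpp
#include <vector>

// Advance a strictly increasing index vector j (an M-subset of
// {0, ..., N-1}) to the next combination in lexicographic order,
// with the last index changing most often. Returns false when j
// was already the last combination.
bool nextCombination(std::vector<unsigned>& j, const unsigned N)
{
    const unsigned M = static_cast<unsigned>(j.size());
    for (unsigned pos = M; pos > 0; --pos)
    {
        const unsigned i = pos - 1U;
        // j[i] can grow as long as room remains for the indices after it
        if (j[i] + 1U <= N - (M - i))
        {
            ++j[i];
            for (unsigned k = i + 1U; k < M; ++k)
                j[k] = j[k - 1U] + 1U;
            return true;
        }
    }
    return false;
}
```

Starting from $\{0, 1\}$ with $N = 5$, repeated calls enumerate all $C_2^5 = 10$ pairs, ending at $\{3, 4\}$.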
\section{Numerical Analysis Utilities}
\label{sec:utilities}
The NPStat package includes a menagerie of numerical analysis utilities
designed, primarily, to support the statistical calculations described
in the previous sections. They are placed in the ``nm'' directory
of the package. A number of these utilities can be used as
stand-alone tools. If the corresponding header file
is not mentioned explicitly in the descriptions below,
it is ``npstat/nm/NNNN.hh'', where NNNN stands
for the actual name of the class or function.
\cname{ConvolutionEngine1D} and \cname{ConvolutionEngineND} --- These
classes encapsulate
the NPStat interface to the FFTW package~\cite{ref:fftw}.
They can be used to perform FFT-based convolutions of one-dimensional
and multivariate functions, respectively.
\cname{EquidistantInLinearSpace} (header file ``npstat/nm/EquidistantSequence.hh'')
--- A sequence of equidistant points.
For use with algorithms that take a vector of points as one of
their parameters.
\cname{EquidistantInLogSpace} (header file ``npstat/nm/EquidistantSequence.hh'')
--- A sequence of points whose logarithms are equidistant.
\cname{findRootInLogSpace} --- templated numerical equation solving
for 1-$d$ functions (or for 1-$d$ subspaces of multivariate functions)
using interval division. It is assumed that the solution can be
represented as a product of some object ({\it e.g.,} a vector)
by a positive real number,
and that number is then searched for.
\cname{GaussHermiteQuadrature} and \cname{GaussLegendreQuadrature}
--- templated Gauss-Hermite and Gauss-Legendre quadratures for
one-dimensional functions. Internally, calculations
are performed in long double precision. Of course,
lower precision functions can be integrated as well, with
corresponding reduction in the precision of the result.
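The technique can be illustrated with a minimal 3-point Gauss-Legendre rule (a sketch only; the npstat \cname{GaussLegendreQuadrature} class supports many more points and performs internal sums in long double). A 3-point rule integrates polynomials up to degree 5 exactly.

```cpp
#include <cmath>

// 3-point Gauss-Legendre quadrature on [a, b]: the standard nodes and
// weights for [-1, 1] are mapped affinely onto the integration interval.
template <class Functor>
double gaussLegendre3(const Functor& f, const double a, const double b)
{
    const double x[3] = {-std::sqrt(0.6), 0.0, std::sqrt(0.6)};
    const double w[3] = {5.0/9.0, 8.0/9.0, 5.0/9.0};
    const double mid = 0.5*(a + b);
    const double half = 0.5*(b - a);
    double sum = 0.0;
    for (int i = 0; i < 3; ++i)
        sum += w[i]*f(mid + half*x[i]);
    return half*sum;
}
```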
\cname{rectangleIntegralCenterAndSize} (header file ``npstat/nm/rectangleQuadrature.hh'')
--- Gauss-Legendre cubatures on
rectangular and hyperrectangular domains using tensor product
integration\footnote{``Tensor product integration'' simply means
that the locations at which the function is evaluated and
corresponding weights are determined by sequential application
of Gauss-Legendre quadratures in each dimension.}.
\cname{goldenSectionSearchInLogSpace} (header file ``npstat/nm/goldenSectionSearch.hh'')
--- templated numerical search
for a minimum of 1-$d$ functions (or for 1-$d$ subspaces of multivariate
functions) using the golden section method.
It is assumed that the minimum can be
represented as a product of some object ({\it e.g.,} a vector)
by a positive constant, and that constant is then searched for.
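The golden section method itself can be sketched as follows (hypothetical code for a plain 1-$d$ minimization; \cname{goldenSectionSearchInLogSpace} instead works on the logarithm of a positive scale factor, and this straightforward variant re-evaluates the function at each step rather than reusing one evaluation):

```cpp
#include <cmath>

// Golden section minimization of a unimodal function on [a, b]:
// the bracket shrinks by the factor 1/phi at every iteration.
template <class Functor>
double goldenSectionMinimum(const Functor& f, double a, double b,
                            const double tol)
{
    const double invPhi = (std::sqrt(5.0) - 1.0)/2.0;
    while (b - a > tol)
    {
        const double c = b - invPhi*(b - a);
        const double d = a + invPhi*(b - a);
        if (f(c) < f(d))
            b = d;    // the minimum is inside [a, d]
        else
            a = c;    // the minimum is inside [c, b]
    }
    return 0.5*(a + b);
}
```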
\cname{GridAxis} --- This class can be used to define an axis of
a rectangular grid with non-uniform spacing of points. The complementary
class \cname{UniformAxis} works more efficiently for representing
equidistant points, while the class \cname{DualAxis} can be
used to represent both uniform and non-uniform grids.
\cname{interpolate\_linear}, \cname{interpolate\_quadratic},
\cname{interpolate\_cubic} (these three functions are declared in the
header file ``npstat/nm/interpolate.hh'')
--- linear, quadratic, and cubic
polynomials with given values at two, three, and four equidistant
points, respectively.
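The quadratic case can be sketched as follows (a hypothetical illustration of the idea; the actual \cname{interpolate\_quadratic} signature differs), with $x$ measured in units of the point spacing from the middle point:

```cpp
// Quadratic polynomial through three values at equidistant points
// x = -1, 0, 1, built from central first and second differences.
double interpolateQuadraticSketch(const double fm1, const double f0,
                                  const double fp1, const double x)
{
    const double d1 = 0.5*(fp1 - fm1);      // central first difference
    const double d2 = fp1 - 2.0*f0 + fm1;   // central second difference
    return f0 + x*d1 + 0.5*x*x*d2;
}
```

Feeding it the values of $x^2$ at $-1, 0, 1$ reproduces $x^2$ at any $x$, since the interpolant is exact for quadratics.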
\cname{LinInterpolatedTable1D} --- persistent one-dimensional lookup table
with linear interpolation between the tabulated values. Useful
for representing arbitrary one-dimensional functions in case
the full numerical precision is not required. If the table is
monotonic, the inverse table can be constructed automatically.
\cname{LinInterpolatedTableND} --- persistent multidimensional lookup table
with multilinear interpolation between the tabulated values,
as in Eq.~\ref{eq:multilinear}. Extrapolation beyond
the grid boundaries is supported as well. This class is useful
for representing arbitrary functions in case the full numerical precision
is not required. \cname{GridAxis}, \cname{UniformAxis}, or \cname{DualAxis}
class (or user-developed
classes with similar sets of methods) can be used
to define grid point locations.
Note that simple location-based lookup of stored values
(without interpolation)
can be trivially performed with the \cname{closestBin} method of the
\cname{HistoND} class. Lookup of histogram bin values with interpolation
can be performed by the \cname{interpolateHistoND} function
(header file ``npstat/stat/interpolateHistoND.hh'').
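In two dimensions, multilinear interpolation inside a single grid cell reduces to the familiar bilinear formula, sketched below (hypothetical code; the real class handles arbitrary dimensionality and the axis objects described above):

```cpp
// Bilinear interpolation inside one grid cell with corner values
// z00 = z(x0, y0), z10 = z(x1, y0), z01 = z(x0, y1), z11 = z(x1, y1).
// tx and ty in [0, 1] are the fractional positions within the cell.
double bilinearSketch(const double z00, const double z10,
                      const double z01, const double z11,
                      const double tx, const double ty)
{
    return z00*(1.0 - tx)*(1.0 - ty) + z10*tx*(1.0 - ty)
         + z01*(1.0 - tx)*ty + z11*tx*ty;
}
```

At the cell corners the formula returns the tabulated values exactly, and at the cell center it returns their average.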
\cname{LinearMapper1d}, \cname{LogMapper1d} --- linear and log-linear
transformations in 1-$d$ as functors
(with ``double operator()(const double\& x) const'' method defined).
The \cname{CircularMapper1d} class works similarly to \cname{LinearMapper1d}
in situations with circular data topologies.
\cname{Matrix} --- A templated matrix class. Useful for standard matrix
manipulations, solving linear systems, finding eigenvalues and
eigenvectors of symmetric matrices, {\it etc.} Encapsulates
the NPStat interface to LAPACK~\cite{ref:lapack}.
\cname{findPeak3by3}, \cname{findPeak5by5} (header file ``npstat/nm/findPeak2D.hh'') ---
utilities which facilitate peak finding for two-dimensional surfaces
which can be contaminated by small amounts of noise ({\it e.g.}, from
round-off errors). The user can fit a 2-$d$ quadratic polynomial
inside a $3 \times 3$
or $5 \times 5$ window by least squares\footnote{A fast method
utilizing discrete orthogonal polynomial expansion is used internally.}
and check whether that polynomial has
an extremum inside the window.
Initially intended for studying 2-$d$ log-likelihoods using sliding
windows.
\cname{solveQuadratic}, \cname{solveCubic} (header file ``npstat/nm/MathUtils.hh'')
--- solutions of quadratic
and cubic equations, respectively, by numerically sound
methods\footnote{The textbook formula
$x_{1,2} = \frac{-b \pm \sqrt{b^2 - 4 a c}}{2 a}$
for the roots of quadratic
equation $a x^2 + b x + c =0$
is also a~prime example of a numerical analysis pitfall.
A minimal modification,
$x_1 = -\frac{b + (I(b \ge 0) - I(b < 0))\sqrt{b^2 - 4 a c}}{2 a}$,
$x_2 = \frac{c}{a x_1}$, avoids the subtractive
cancellation problem in the $x_{1,2}$ numerator.}.
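The footnote's recipe can be sketched as follows (hypothetical code; the actual \cname{solveQuadratic} signature and conventions differ). The degenerate case $b = c = 0$ is handled separately to avoid dividing by zero.

```cpp
#include <cmath>

// Numerically stable real roots of a x^2 + b x + c = 0 (a != 0).
// Returns false when the discriminant is negative.
bool stableQuadraticSketch(const double a, const double b, const double c,
                           double* x1, double* x2)
{
    const double disc = b*b - 4.0*a*c;
    if (disc < 0.0)
        return false;
    if (b == 0.0 && c == 0.0)
    {
        *x1 = 0.0;
        *x2 = 0.0;
        return true;
    }
    // Add sqrt(disc) with the sign of b so that no subtractive
    // cancellation occurs in the numerator
    const double q = -0.5*(b + (b >= 0.0 ? 1.0 : -1.0)*std::sqrt(disc));
    *x1 = q/a;
    *x2 = c/q;
    return true;
}
```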
\cname{ndUnitSphereVolume} (header file ``npstat/nm/MathUtils.hh'')
--- volume of the $n$-dimensional unit sphere.
\cname{polyAndDeriv} (header file ``npstat/nm/MathUtils.hh'')
--- monomial series $\sum_{k=0}^M c_k x^k$
and its derivative with respect to $x$, templated on the type
of coefficients $c_k$.
\cname{polySeriesSum}, \cname{legendreSeriesSum},
\cname{gegenbauerSeriesSum}, \cname{chebyshevSeriesSum} (all of these
functions are declared in the header file ``npstat/nm/MathUtils.hh'')
--- templated
series of one-dimensional monomials, Legendre polynomials, Gegenbauer
polynomials, and Chebyshev polynomials, respectively. Numerically
sound recursive formulae are used to generate the polynomials.
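As an illustration of the recursive approach, a Legendre series can be summed as follows (a minimal sketch using the standard three-term recurrence $(k + 1) P_{k+1}(x) = (2k + 1)\, x P_k(x) - k P_{k-1}(x)$, not the templated npstat code):

```cpp
#include <cstddef>
#include <vector>

// Sum of a Legendre series c_0 P_0(x) + ... + c_n P_n(x), generating
// the polynomials on the fly via the three-term recurrence.
double legendreSeriesSketch(const std::vector<double>& c, const double x)
{
    if (c.empty())
        return 0.0;
    double pPrev = 1.0;             // P_0(x)
    double sum = c[0]*pPrev;
    if (c.size() < 2)
        return sum;
    double p = x;                   // P_1(x)
    sum += c[1]*p;
    for (std::size_t k = 1; k + 1 < c.size(); ++k)
    {
        const double pNext = ((2.0*k + 1.0)*x*p - k*pPrev)/(k + 1.0);
        sum += c[k + 1]*pNext;
        pPrev = p;
        p = pNext;
    }
    return sum;
}
```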
\cname{hermiteSeriesSumProb}, \cname{hermiteSeriesSumPhys} (header
file ``npstat/nm/MathUtils.hh'') --- templated series of
``probabilist'' and ``physicist'' Hermite polynomials, respectively.
\cname{chebyshevSeriesCoeffs} (header
file ``npstat/nm/MathUtils.hh'') --- utility for approximating
mathematical functions with Chebyshev polynomials.
\cname{OrthoPoly1D}, \cname{OrthoPolyND} --- uni- and multivariate
orthogonal polynomials on equidistant rectangular grids with
arbitrary weight functions. In addition to producing
the polynomials themselves, these classes can
be used to calculate polynomial series,
polynomial series expansion coefficients for gridded
functions, and polynomial filters defined by Eq.~\ref{eq:effk}
(but normalized so that the sum of filter coefficients is 1).
The NPStat package also includes implementations of various
special functions needed in statistical calculations (incomplete
gamma function and its inverse, incomplete beta function, {\it etc}).
These functions are declared in the header file
``npstat/nm/SpecialFunctions.hh''.
\newpage
\addcontentsline{toc}{section}{References}
\begin{thebibliography}{99}
\bibitem{ref:minuit}
Minuit2 Minimization Package,
\href{http://www.cern.ch/minuit/}{http://www.cern.ch/minuit/}
\bibitem{ref:geners}
Geners --- Generic Serialization for C++,
\href{http://geners.hepforge.org/}{http://geners.hepforge.org/}
\bibitem{ref:cpp11} ISO/IEC Standard 14882:2011
{\it Programming Language C++} (2011).
\bibitem{ref:linearbinning}
M.C.~Jones and H.W.~Lotwick, ``On the Errors Involved in Computing the Empirical Characteristic Function'', {\it Journal of Statistical Computation and Simulation} {\bf 17}, 133 (1983).
\bibitem{ref:sampleskewkurt}
D.N. Joanes and C.A. Gill, ``Comparing measures of sample skewness
and kurtosis'', {\it The Statistician} {\bf 47}, 183 (1998).
\bibitem{ref:johnson}
N.L. Johnson, ``Systems of Frequency Curves Generated by Methods of Translation'', {\it Biometrika} {\bf 36}, 149 (1949).
\bibitem{ref:johnsonbook}
W.P. Elderton and N.L. Johnson, ``Systems of Frequency Curves'',
Cambridge University Press, 1969.
\bibitem{ref:hahnshap}
G.J. Hahn and S.S. Shapiro, ``Statistical Models in Engineering'',
Wiley, 1994.
\bibitem{ref:draper}
J. Draper, ``Properties of Distributions Resulting from Certain Simple Transformations of the Normal Distribution'', {\it Biometrika} {\bf 39}, 290 (1952).
\bibitem{ref:hill}
I.D. Hill, R. Hill, and R. L. Holder,
``Algorithm AS 99: Fitting Johnson Curves by Moments'',
{\it Applied Statistics} {\bf 25}, 180 (1976).
\bibitem{ref:handcock}
M.S. Handcock and M. Morris, ``Relative Distribution Methods'',
{\it Sociological Methodology} {\bf 28}, 53 (1998).
\bibitem{ref:kdetransform}
L. Yang and J.S. Marron, ``Iterated Transformation-Kernel Density Estimation'',
{\it Journal of the American Statistical Association} {\bf 94}, 580 (1999).
\bibitem{ref:thas}
O. Thas, ``Comparing Distributions'', Springer Series in Statistics, 2009.
\bibitem{ref:smoothtest}
J. Neyman, ``'Smooth Test' for Goodness of Fit'', {\it Skandinavisk Aktuarietidskrift} {\bf 20}, 150~(1937).
\bibitem{ref:copulabook}
R.B.~Nelsen, ``An Introduction to Copulas'', 2\supers{nd} Ed.,
Springer Series in Statistics (2006).
\bibitem{ref:eightqueens}
\href{http://en.wikipedia.org/wiki/Eight_queens_puzzle}{``Eight queens puzzle'', Wikipedia.}
\bibitem{ref:eightrooks}
\href{http://en.wikipedia.org/wiki/Rook_polynomial}{``Rook polynomial'', Wikipedia.}
\bibitem{ref:copulaentropy}
J.~Ma and Z.~Sun, ``Mutual information is copula entropy'',
\href{http://arxiv.org/abs/0808.0845}{arXiv:0808.0845v1} (2008).
\bibitem{ref:read}
A.L.~Read, ``Linear interpolation of histograms'',
{\it Nucl. Instr. Meth.} {\bf A 425}, 357 (1999).
\bibitem{ref:silverman} B.W.~Silverman, ``Density Estimation for Statistics
and Data Analysis'', Chapman \& Hall, 1986.
\bibitem{ref:izenmann} A.J.~Izenman, ``Recent Developments in Nonparametric Density Estimation'', {\it Journal of the American Statistical Association} {\bf 86}, 205 (1991).
\bibitem{ref:kde} D.W.~Scott,
``Multivariate Density Estimation: Theory, Practice, and Visualization'',
Wiley, 1992.
\bibitem{ref:bwopt} M.C.~Jones, J.S.~Marron and S.J.~Sheather,
``A Brief Survey of Bandwidth Selection for Density Estimation'',
{\it Journal of the American Statistical Association} {\bf 91}, 401 (1996).
\bibitem{ref:loaderbw} C.R.~Loader,
``Bandwidth Selection: Classical or Plug-in?'',
{\it The Annals of Statistics} {\bf 27}, 415 (1999).
\bibitem{ref:kernsmooth} M.P.~Wand and M.C.~Jones,
``Kernel Smoothing'', Chapman \& Hall, 1995.
\bibitem{ref:chiu2} S.-T. Chiu,
``An Automatic Bandwidth Selector for Kernel Density Estimation'',
{\it Biometrika} {\bf 79}, 771 (1992).
\bibitem{ref:chiumultivar} T.-J. Wu and M.-H. Tsai,
``Root n bandwidths selectors in multivariate kernel density estimation'',
{\it Probab. Theory Relat. Fields} {\bf 129}, 537 (2004).
\bibitem{ref:discretizedkde} M.C.~Jones,
``Discretized and Interpolated Kernel Density Estimates'',
{\it Journal of the American Statistical Association} {\bf 84}, 733 (1989).
\bibitem{ref:fastkde1} C.~Yang, R.~Duraiswami, N.A.~Gumerov and L.~Davis,
``Improved Fast Gauss Transform and Efficient Kernel Density Estimation'',
in {\it Proceedings of the Ninth IEEE International
Conference on Computer Vision}, 2003.
\bibitem{ref:fastkde2} D.~Lee, A.G.~Gray, and A.W. Moore,
``Dual-Tree Fast Gauss Transforms'',
\href{http://arxiv.org/abs/1102.2878}{arXiv:1102.2878v1} (2011).
\bibitem{ref:bwgaussmix} J.S. Marron and M.P. Wand,
``Exact Mean Integrated Squared Error'', {\it The Annals Of Statistics}
{\bf 20}, 712 (1992).
\bibitem{ref:abramson} I.S.~Abramson, ``On Bandwidth Variation in Kernel
Estimates --- A Square Root Law'', {\it The Annals Of Statistics}
{\bf 10}, 1217 (1982).
\bibitem{ref:placida} P.D.A. Dassanayake,
``Local Orthogonal Polynomial Expansion for Density Estimation'',
M.S. Thesis, Texas Tech University, 2012.
\bibitem{ref:betakern} S.X. Chen, ``Beta kernel estimators for
density functions'', {\it Computational Statistics \& Data Analysis}
{\bf 31}, 131 (1999).
\bibitem{ref:babu} G.J. Babu, A.J. Canty, and Y.P. Chaubey,
``Application of Bernstein Polynomials for smooth estimation
of a distribution and density function'', {\it Journal of Statistical
Planning and Inference} {\bf 105}, 377 (2002).
\bibitem{ref:sinkhorn} R. Sinkhorn and P. Knopp,
``Concerning Nonnegative Matrices and Doubly Stochastic Matrices'',
{\it Pacific Journal of Mathematics} {\bf 21}, 343 (1967).
\bibitem{ref:khoury} R. Khoury, ``Closest Matrices in the Space of
Generalized Doubly Stochastic Matrices'', {\it Journal of
Mathematical Analysis and Applications} {\bf 222}, 562 (1998).
\bibitem{ref:lapack}
LAPACK --- Linear Algebra PACKage,
\href{http://www.netlib.org/lapack/}{http://www.netlib.org/lapack/}
\bibitem{ref:localreg} T.J.~Hastie, ``Non-parametric Logistic Regression'',
SLAC PUB-3160, June 1983.
\bibitem{ref:locfit}
CRAN - Package locfit,
\href{http://cran.r-project.org/web/packages/locfit/}
{http://cran.r-project.org/web/packages/locfit/}
\bibitem{ref:lvm} K.~Levenberg, ``A Method for the Solution
of Certain Non-Linear Problems in Least Squares'',
{\it Quarterly of Applied Mathematics} {\bf 2}, 164 (1944).
D.~Marquardt, ``An Algorithm for Least-Squares Estimation of Nonlinear Parameters'', {\it SIAM Journal on Applied Mathematics} {\bf 11}, 431 (1963).
\bibitem{ref:kdtree} J.L.~Bentley, ``Multidimensional
Binary Search Trees Used for Associative Searching'',
{\it Communications of the ACM} {\bf 18}, 509 (1975).
\bibitem{ref:localquantreg} H.J. Wang and L. Wang,
``Locally Weighted Censored Quantile Regression'',
{\it Journal of the American Statistical Association} {\bf 104}, 487 (2009).
\bibitem{ref:Niederreiter}
H. Niederreiter, ``Random Number Generation and Quasi-Monte
Carlo Methods'', Society for Industrial and Applied Mathematics (SIAM),
Philadelphia (1992).
\bibitem{ref:mercenne} M. Matsumoto and T. Nishimura, ``Mersenne Twister:
A 623-Dimensionally Equidistributed Uniform Pseudo-Random Number Generator'',
{\it ACM Transactions on Modeling and Computer Simulation} {\bf 8}, 3 (1998).
\bibitem{ref:sobol} I. Sobol and Y.L. Levitan,
``The Production of Points Uniformly Distributed in a Multidimensional Cube'',
Tech. Rep. 40, Institute of Applied Mathematics,
USSR Academy of Sciences, 1976 (in Russian).
\bibitem{ref:hosobol} J. Dick, ``Higher order scrambled digital nets achieve the optimal rate of the root mean square error for smooth integrands'',
\href{http://arxiv.org/abs/1007.0842}{arXiv:1007.0842} (2010).
\bibitem{ref:fftw}
FFTW (Fastest Fourier Transform in the West),
\href{http://www.fftw.org/}{http://www.fftw.org/}
\end{thebibliography}
\newpage
\addcontentsline{toc}{section}{Functions and Classes}
\printindex
\end{document}
Index: trunk/npstat/stat/bernsteinOptimalBandwidth.icc
===================================================================
--- trunk/npstat/stat/bernsteinOptimalBandwidth.icc (revision 101)
+++ trunk/npstat/stat/bernsteinOptimalBandwidth.icc (revision 102)
@@ -1,90 +0,0 @@
-#include <cmath>
-#include <cassert>
-#include <stdexcept>
-#include <cfloat>
-
-#include "npstat/nm/LinearMapper1d.hh"
-#include "npstat/nm/definiteIntegrals.hh"
-
-namespace npstat {
- template<typename Real>
- double bernsteinOptimalBandwidth(
- const double npoints, const Real* fvalues, const unsigned long nValues,
- const bool returnB2Star, double* expectedAmise)
- {
- assert(fvalues);
- if (npoints <= 0.0) throw std::invalid_argument(
- "In npstat::amiseOptimalOptimalBandwidth: "
- "number of data points must be positive");
- if (nValues < 3UL) throw std::invalid_argument(
- "In npstat::amiseOptimalOptimalBandwidth: "
- "insufficient number of function scan points");
-
- const double binwidth = 1.0/nValues;
- const double bwsquared = binwidth*binwidth;
- const unsigned long nVMinus1 = nValues - 1UL;
-
- long double firstterm = 0.0L;
- {
- const LinearMapper1d m(0.5*binwidth, fvalues[0],
- 1.5*binwidth, fvalues[1]);
- firstterm += definiteIntegral_1(m.a(), m.b(), 0.0, 0.5*binwidth);
- }
- for (unsigned long i=0; i<nVMinus1; ++i)
- {
- const double xmin = (i + 0.5)*binwidth;
- const double xmax = xmin + binwidth;
- const LinearMapper1d m(xmin, fvalues[i], xmax, fvalues[i+1UL]);
- firstterm += definiteIntegral_1(m.a(), m.b(), xmin, xmax);
- }
- {
- const LinearMapper1d m(1.0 - 1.5*binwidth, fvalues[nValues-2UL],
- 1.0 - 0.5*binwidth, fvalues[nValues-1UL]);
- firstterm += definiteIntegral_1(m.a(), m.b(), 1.0-0.5*binwidth, 1.0);
- }
- firstterm /= (2.0*sqrt(M_PI));
-
- long double secondterm = 0.0L;
- for (unsigned long i=0; i<nValues; ++i)
- {
- const double x = (i + 0.5)*binwidth;
-
- double deri1 = 0.0;
- if (i == 0UL)
- deri1 = (fvalues[1] - fvalues[0])/binwidth;
- else if (i == nVMinus1)
- deri1 = (fvalues[i] - fvalues[i - 1UL])/binwidth;
- else
- deri1 = (fvalues[i + 1UL] - fvalues[i - 1UL])/2.0/binwidth;
-
- unsigned long ic = i;
- if (ic == 0UL)
- ++ic;
- else if (ic == nVMinus1)
- --ic;
- const double deri2 = ((fvalues[ic+1]-fvalues[ic]) +
- (fvalues[ic-1]-fvalues[ic]))/bwsquared;
-
- double tmp = 0.5*x*(1.0 - x)*deri2;
- if (!returnB2Star)
- tmp += (1.0 - 2.0*x)*deri1;
- secondterm += tmp*tmp;
- }
- secondterm *= binwidth;
-
- double bstar = DBL_MAX;
- if (secondterm)
- bstar = pow(firstterm/secondterm/4.0/npoints, 0.4);
-
- if (expectedAmise)
- {
- if (secondterm)
- *expectedAmise = bstar*bstar*secondterm +
- firstterm/npoints/sqrt(bstar);
- else
- *expectedAmise = 0.0;
- }
-
- return bstar;
- }
-}
Index: trunk/npstat/stat/bernsteinOptimalBandwidth.hh
===================================================================
--- trunk/npstat/stat/bernsteinOptimalBandwidth.hh (revision 101)
+++ trunk/npstat/stat/bernsteinOptimalBandwidth.hh (revision 102)
@@ -1,52 +0,0 @@
-#ifndef NPSTAT_BERNSTEINOPTIMALBANDWIDTH_HH_
-#define NPSTAT_BERNSTEINOPTIMALBANDWIDTH_HH_
-
-/*!
-// \file bernsteinOptimalBandwidth.hh
-//
-// \brief Optimal bandwidth for density estimation with Bernstein polynomials
-//
-// The formulae implemented in this code come from the paper by
-// S.X. Chen, "Beta kernel estimators for density functions",
-// Computational Statistics & Data Analysis 31, pp. 131-145 (1999).
-// Note that printed versions of these formulae contain algebraic
-// mistakes. The formulae have instead been rederived starting from
-// equations 4.2 and 4.3 in the paper.
-//
-// Author: I. Volobouev
-//
-// June 2013
-*/
-
-namespace npstat {
- /**
- // AMISE optimal bandwidth for density estimation by beta functions
- // (Bernstein polynomials). The arguments are as follows:
- //
- // npoints -- Number of points in the data sample.
- //
- // fvalues -- Array of scanned values of the reference density.
- // It is assumed that the density is scanned at the
- // bin centers on the [0, 1] interval.
- //
- // nValues -- Number of elements in the array "fvalues".
- //
- // returnB2Star -- If "true", the function will return b_2* from Chen's
- // paper (and corresponding AMISE), otherwise it will
- // return b_1* (using corrected algebra).
- //
- // expectedAmise -- If this argument is provided, it will be filled
- // with the expected AMISE value.
- //
- // The (generalized) Bernstein polynomial degree is simply the inverse
- // of the bandwidth.
- */
- template<typename Real>
- double bernsteinOptimalBandwidth(double npoints, const Real* fvalues,
- unsigned long nValues, bool returnB2Star,
- double* expectedAmise = 0);
-}
-
-#include "npstat/stat/bernsteinOptimalBandwidth.icc"
-
-#endif // NPSTAT_BERNSTEINOPTIMALBANDWIDTH_HH_
Index: trunk/npstat/stat/Makefile.in
===================================================================
--- trunk/npstat/stat/Makefile.in (revision 101)
+++ trunk/npstat/stat/Makefile.in (revision 102)
@@ -1,883 +1,883 @@
# Makefile.in generated by automake 1.12.2 from Makefile.am.
# @configure_input@
# Copyright (C) 1994-2012 Free Software Foundation, Inc.
# This Makefile.in is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE.
@SET_MAKE@
VPATH = @srcdir@
am__make_dryrun = \
{ \
am__dry=no; \
case $$MAKEFLAGS in \
*\\[\ \ ]*) \
echo 'am--echo: ; @echo "AM" OK' | $(MAKE) -f - 2>/dev/null \
| grep '^AM OK$$' >/dev/null || am__dry=yes;; \
*) \
for am__flg in $$MAKEFLAGS; do \
case $$am__flg in \
*=*|--*) ;; \
*n*) am__dry=yes; break;; \
esac; \
done;; \
esac; \
test $$am__dry = yes; \
}
pkgdatadir = $(datadir)/@PACKAGE@
pkgincludedir = $(includedir)/@PACKAGE@
pkglibdir = $(libdir)/@PACKAGE@
pkglibexecdir = $(libexecdir)/@PACKAGE@
am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd
install_sh_DATA = $(install_sh) -c -m 644
install_sh_PROGRAM = $(install_sh) -c
install_sh_SCRIPT = $(install_sh) -c
INSTALL_HEADER = $(INSTALL_DATA)
transform = $(program_transform_name)
NORMAL_INSTALL = :
PRE_INSTALL = :
POST_INSTALL = :
NORMAL_UNINSTALL = :
PRE_UNINSTALL = :
POST_UNINSTALL = :
build_triplet = @build@
host_triplet = @host@
subdir = npstat/stat
DIST_COMMON = $(include_HEADERS) $(srcdir)/Makefile.am \
$(srcdir)/Makefile.in $(top_srcdir)/depcomp
ACLOCAL_M4 = $(top_srcdir)/aclocal.m4
am__aclocal_m4_deps = $(top_srcdir)/m4/libtool.m4 \
$(top_srcdir)/m4/ltoptions.m4 $(top_srcdir)/m4/ltsugar.m4 \
$(top_srcdir)/m4/ltversion.m4 $(top_srcdir)/m4/lt~obsolete.m4 \
$(top_srcdir)/configure.ac
am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \
$(ACLOCAL_M4)
mkinstalldirs = $(install_sh) -d
CONFIG_CLEAN_FILES =
CONFIG_CLEAN_VPATH_FILES =
LTLIBRARIES = $(noinst_LTLIBRARIES)
libstat_la_LIBADD =
am_libstat_la_OBJECTS = AbsCopulaSmoother.lo AbsDistribution1D.lo \
AbsDistributionND.lo amiseOptimalBandwidth.lo \
CompositeDistribution1D.lo CompositeDistributionND.lo \
CopulaInterpolationND.lo Copulas.lo Distribution1DFactory.lo \
Distribution1DReader.lo DistributionNDReader.lo \
Distributions1D.lo DistributionsND.lo \
CrossCovarianceAccumulator.lo fitSbParameters.lo \
StatAccumulatorArr.lo HistoAxis.lo \
InterpolatedDistribution1D.lo JohnsonCurves.lo \
JohnsonKDESmoother.lo LocalPolyFilter1D.lo \
logLikelihoodPeak.lo PolyFilterCollection1D.lo \
SbMomentsBigGamma.lo SbMomentsCalculator.lo ScalableGaussND.lo \
SequentialCopulaSmoother.lo SequentialPolyFilterND.lo \
StatAccumulator.lo UnitMapInterpolationND.lo \
WeightedStatAccumulator.lo AbsNtuple.lo \
QuadraticOrthoPolyND.lo NMCombinationSequencer.lo \
Filter1DBuilders.lo StatAccumulatorPair.lo GridRandomizer.lo \
ConstantBandwidthSmoother1D.lo GaussianMixture1D.lo \
HistoNDCdf.lo scanDensityAsWeight.lo NUHistoAxis.lo \
distributionReadError.lo WeightedStatAccumulatorPair.lo \
convertAxis.lo ProductSymmetricBetaNDCdf.lo DualHistoAxis.lo \
BinSummary.lo StorableMultivariateFunctor.lo \
StorableMultivariateFunctorReader.lo \
TruncatedDistribution1D.lo neymanPearsonWindow1D.lo \
QuantileTable1D.lo LOrPEMarginalSmoother.lo \
LeftCensoredDistribution.lo RightCensoredDistribution.lo \
AbsDiscreteDistribution1D.lo DiscreteDistribution1DReader.lo \
DiscreteDistributions1D.lo BernsteinFilter1DBuilder.lo \
BetaFilter1DBuilder.lo
libstat_la_OBJECTS = $(am_libstat_la_OBJECTS)
DEFAULT_INCLUDES = -I.@am__isrc@
depcomp = $(SHELL) $(top_srcdir)/depcomp
am__depfiles_maybe = depfiles
am__mv = mv -f
CXXCOMPILE = $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) \
$(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS)
LTCXXCOMPILE = $(LIBTOOL) --tag=CXX $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) \
--mode=compile $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) \
$(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS)
CXXLD = $(CXX)
CXXLINK = $(LIBTOOL) --tag=CXX $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) \
--mode=link $(CXXLD) $(AM_CXXFLAGS) $(CXXFLAGS) $(AM_LDFLAGS) \
$(LDFLAGS) -o $@
SOURCES = $(libstat_la_SOURCES)
DIST_SOURCES = $(libstat_la_SOURCES)
am__can_run_installinfo = \
case $$AM_UPDATE_INFO_DIR in \
n|no|NO) false;; \
*) (install-info --version) >/dev/null 2>&1;; \
esac
am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`;
am__vpath_adj = case $$p in \
$(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \
*) f=$$p;; \
esac;
am__strip_dir = f=`echo $$p | sed -e 's|^.*/||'`;
am__install_max = 40
am__nobase_strip_setup = \
srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*|]/\\\\&/g'`
am__nobase_strip = \
for p in $$list; do echo "$$p"; done | sed -e "s|$$srcdirstrip/||"
am__nobase_list = $(am__nobase_strip_setup); \
for p in $$list; do echo "$$p $$p"; done | \
sed "s| $$srcdirstrip/| |;"' / .*\//!s/ .*/ ./; s,\( .*\)/[^/]*$$,\1,' | \
$(AWK) 'BEGIN { files["."] = "" } { files[$$2] = files[$$2] " " $$1; \
if (++n[$$2] == $(am__install_max)) \
{ print $$2, files[$$2]; n[$$2] = 0; files[$$2] = "" } } \
END { for (dir in files) print dir, files[dir] }'
am__base_list = \
sed '$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;s/\n/ /g' | \
sed '$$!N;$$!N;$$!N;$$!N;s/\n/ /g'
am__uninstall_files_from_dir = { \
test -z "$$files" \
|| { test ! -d "$$dir" && test ! -f "$$dir" && test ! -r "$$dir"; } \
|| { echo " ( cd '$$dir' && rm -f" $$files ")"; \
$(am__cd) "$$dir" && rm -f $$files; }; \
}
am__installdirs = "$(DESTDIR)$(includedir)"
HEADERS = $(include_HEADERS)
ETAGS = etags
CTAGS = ctags
DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST)
ACLOCAL = @ACLOCAL@
AMTAR = @AMTAR@
AR = @AR@
AUTOCONF = @AUTOCONF@
AUTOHEADER = @AUTOHEADER@
AUTOMAKE = @AUTOMAKE@
AWK = @AWK@
CC = @CC@
CCDEPMODE = @CCDEPMODE@
CFLAGS = @CFLAGS@
CPP = @CPP@
CPPFLAGS = @CPPFLAGS@
CXX = @CXX@
CXXCPP = @CXXCPP@
CXXDEPMODE = @CXXDEPMODE@
CXXFLAGS = @CXXFLAGS@
CYGPATH_W = @CYGPATH_W@
DEFS = @DEFS@
DEPDIR = @DEPDIR@
DEPS_CFLAGS = @DEPS_CFLAGS@
DEPS_LIBS = @DEPS_LIBS@
DLLTOOL = @DLLTOOL@
DSYMUTIL = @DSYMUTIL@
DUMPBIN = @DUMPBIN@
ECHO_C = @ECHO_C@
ECHO_N = @ECHO_N@
ECHO_T = @ECHO_T@
EGREP = @EGREP@
EXEEXT = @EXEEXT@
F77 = @F77@
FFLAGS = @FFLAGS@
FGREP = @FGREP@
FLIBS = @FLIBS@
GREP = @GREP@
INSTALL = @INSTALL@
INSTALL_DATA = @INSTALL_DATA@
INSTALL_PROGRAM = @INSTALL_PROGRAM@
INSTALL_SCRIPT = @INSTALL_SCRIPT@
INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@
LD = @LD@
LDFLAGS = @LDFLAGS@
LIBOBJS = @LIBOBJS@
LIBS = @LIBS@
LIBTOOL = @LIBTOOL@
LIPO = @LIPO@
LN_S = @LN_S@
LTLIBOBJS = @LTLIBOBJS@
MAKEINFO = @MAKEINFO@
MANIFEST_TOOL = @MANIFEST_TOOL@
MKDIR_P = @MKDIR_P@
NM = @NM@
NMEDIT = @NMEDIT@
OBJDUMP = @OBJDUMP@
OBJEXT = @OBJEXT@
OTOOL = @OTOOL@
OTOOL64 = @OTOOL64@
PACKAGE = @PACKAGE@
PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@
PACKAGE_NAME = @PACKAGE_NAME@
PACKAGE_STRING = @PACKAGE_STRING@
PACKAGE_TARNAME = @PACKAGE_TARNAME@
PACKAGE_URL = @PACKAGE_URL@
PACKAGE_VERSION = @PACKAGE_VERSION@
PATH_SEPARATOR = @PATH_SEPARATOR@
PKG_CONFIG = @PKG_CONFIG@
PKG_CONFIG_LIBDIR = @PKG_CONFIG_LIBDIR@
PKG_CONFIG_PATH = @PKG_CONFIG_PATH@
RANLIB = @RANLIB@
SED = @SED@
SET_MAKE = @SET_MAKE@
SHELL = @SHELL@
STRIP = @STRIP@
VERSION = @VERSION@
abs_builddir = @abs_builddir@
abs_srcdir = @abs_srcdir@
abs_top_builddir = @abs_top_builddir@
abs_top_srcdir = @abs_top_srcdir@
ac_ct_AR = @ac_ct_AR@
ac_ct_CC = @ac_ct_CC@
ac_ct_CXX = @ac_ct_CXX@
ac_ct_DUMPBIN = @ac_ct_DUMPBIN@
ac_ct_F77 = @ac_ct_F77@
am__include = @am__include@
am__leading_dot = @am__leading_dot@
am__quote = @am__quote@
am__tar = @am__tar@
am__untar = @am__untar@
bindir = @bindir@
build = @build@
build_alias = @build_alias@
build_cpu = @build_cpu@
build_os = @build_os@
build_vendor = @build_vendor@
builddir = @builddir@
datadir = @datadir@
datarootdir = @datarootdir@
docdir = @docdir@
dvidir = @dvidir@
exec_prefix = @exec_prefix@
host = @host@
host_alias = @host_alias@
host_cpu = @host_cpu@
host_os = @host_os@
host_vendor = @host_vendor@
htmldir = @htmldir@
includedir = ${prefix}/include/npstat/stat
infodir = @infodir@
install_sh = @install_sh@
libdir = @libdir@
libexecdir = @libexecdir@
localedir = @localedir@
localstatedir = @localstatedir@
mandir = @mandir@
mkdir_p = @mkdir_p@
oldincludedir = @oldincludedir@
pdfdir = @pdfdir@
prefix = @prefix@
program_transform_name = @program_transform_name@
psdir = @psdir@
sbindir = @sbindir@
sharedstatedir = @sharedstatedir@
srcdir = @srcdir@
sysconfdir = @sysconfdir@
target_alias = @target_alias@
top_build_prefix = @top_build_prefix@
top_builddir = @top_builddir@
top_srcdir = @top_srcdir@
AM_CPPFLAGS = $(DEPS_CFLAGS)
noinst_LTLIBRARIES = libstat.la
libstat_la_SOURCES = AbsCopulaSmoother.cc AbsDistribution1D.cc \
AbsDistributionND.cc amiseOptimalBandwidth.cc CompositeDistribution1D.cc \
CompositeDistributionND.cc CopulaInterpolationND.cc Copulas.cc \
Distribution1DFactory.cc Distribution1DReader.cc DistributionNDReader.cc \
Distributions1D.cc DistributionsND.cc CrossCovarianceAccumulator.cc \
fitSbParameters.cc StatAccumulatorArr.cc HistoAxis.cc \
InterpolatedDistribution1D.cc JohnsonCurves.cc JohnsonKDESmoother.cc \
LocalPolyFilter1D.cc logLikelihoodPeak.cc PolyFilterCollection1D.cc \
SbMomentsBigGamma.cc SbMomentsCalculator.cc ScalableGaussND.cc \
SequentialCopulaSmoother.cc SequentialPolyFilterND.cc StatAccumulator.cc \
UnitMapInterpolationND.cc WeightedStatAccumulator.cc AbsNtuple.cc \
QuadraticOrthoPolyND.cc NMCombinationSequencer.cc Filter1DBuilders.cc \
StatAccumulatorPair.cc GridRandomizer.cc ConstantBandwidthSmoother1D.cc \
GaussianMixture1D.cc HistoNDCdf.cc scanDensityAsWeight.cc NUHistoAxis.cc \
distributionReadError.cc WeightedStatAccumulatorPair.cc convertAxis.cc \
ProductSymmetricBetaNDCdf.cc DualHistoAxis.cc BinSummary.cc \
StorableMultivariateFunctor.cc StorableMultivariateFunctorReader.cc \
TruncatedDistribution1D.cc neymanPearsonWindow1D.cc QuantileTable1D.cc \
LOrPEMarginalSmoother.cc LeftCensoredDistribution.cc \
RightCensoredDistribution.cc AbsDiscreteDistribution1D.cc \
DiscreteDistribution1DReader.cc DiscreteDistributions1D.cc \
BernsteinFilter1DBuilder.cc BetaFilter1DBuilder.cc
INCLUDES = -I@top_srcdir@/
include_HEADERS = AbsBandwidthCV.hh \
AbsCompositeDistroBuilder.hh \
AbsCompositeDistroBuilder.icc \
AbsCopulaSmoother.hh \
AbsCopulaSmoother.icc \
AbsDiscreteDistribution1D.hh \
AbsDistribution1D.hh \
AbsDistributionND.hh \
AbsDistributionND.icc \
AbsFilter1DBuilder.hh \
AbsGridInterpolatedDistribution.hh \
AbsLossCalculator.hh \
AbsMarginalSmoother.hh \
AbsMarginalSmoother.icc \
AbsNtuple.hh \
AbsNtuple.icc \
AbsPolyFilter1D.hh \
AbsPolyFilterND.hh \
amiseOptimalBandwidth.hh \
amiseOptimalBandwidth.icc \
ArchivedNtuple.hh \
ArchivedNtuple.icc \
ArrayProjectors.hh \
ArrayProjectors.icc \
arrayStats.hh \
arrayStats.icc \
BandwidthCVLeastSquares1D.hh \
BandwidthCVLeastSquares1D.icc \
BandwidthCVLeastSquaresND.hh \
BandwidthCVLeastSquaresND.icc \
BandwidthCVPseudoLogli1D.hh \
BandwidthCVPseudoLogli1D.icc \
BandwidthCVPseudoLogliND.hh \
BandwidthCVPseudoLogliND.icc \
BernsteinFilter1DBuilder.hh \
- bernsteinOptimalBandwidth.hh \
- bernsteinOptimalBandwidth.icc \
+ betaKernelsBandwidth.hh \
+ betaKernelsBandwidth.icc \
BetaFilter1DBuilder.hh \
BinSummary.hh \
BinSummary.icc \
CensoredQuantileRegression.hh \
CensoredQuantileRegression.icc \
Column.hh \
Column.icc \
CompositeDistribution1D.hh \
CompositeDistributionND.hh \
CompositeDistributionND.icc \
CompositeDistros1D.hh \
ConstantBandwidthSmoother1D.hh \
ConstantBandwidthSmootherND.hh \
ConstantBandwidthSmootherND.icc \
convertAxis.hh \
CopulaInterpolationND.hh \
Copulas.hh \
CrossCovarianceAccumulator.hh \
CrossCovarianceAccumulator.icc \
CVCopulaSmoother.hh \
CVCopulaSmoother.icc \
DensityScanND.hh \
DiscreteDistribution1DReader.hh \
DiscreteDistributions1D.hh \
Distribution1DFactory.hh \
Distribution1DReader.hh \
DistributionNDReader.hh \
Distributions1D.hh \
Distributions1D.icc \
DistributionsND.hh \
DistributionsND.icc \
distributionReadError.hh \
DualHistoAxis.hh \
empiricalCopula.hh \
empiricalCopulaHisto.hh \
empiricalCopulaHisto.icc \
empiricalCopula.icc \
Filter1DBuilders.hh \
FitUtils.hh \
FitUtils.icc \
GaussianMixture1D.hh \
griddedRobustRegression.hh \
griddedRobustRegression.icc \
GriddedRobustRegressionStop.hh \
GridInterpolatedDistribution.hh \
GridInterpolatedDistribution.icc \
GridRandomizer.hh \
HistoAxis.hh \
HistoND.hh \
HistoND.icc \
HistoNDCdf.hh \
HistoNDFunctorInstances.hh \
histoStats.hh \
histoStats.icc \
InMemoryNtuple.hh \
InMemoryNtuple.icc \
InterpolatedDistribution1D.hh \
interpolateHistoND.hh \
interpolateHistoND.icc \
InterpolationFunctorInstances.hh \
JohnsonCurves.hh \
JohnsonKDESmoother.hh \
KDECopulaSmoother.hh \
KDECopulaSmoother.icc \
KDEFilterND.hh \
KDEFilterND.icc \
kendallsTau.hh \
kendallsTau.icc \
LeftCensoredDistribution.hh \
LocalLogisticRegression.hh \
LocalLogisticRegression.icc \
LocalPolyFilter1D.hh \
LocalPolyFilter1D.icc \
LocalPolyFilterND.hh \
LocalPolyFilterND.icc \
LocalQuadraticLeastSquaresND.hh \
LocalQuadraticLeastSquaresND.icc \
LocalQuantileRegression.hh \
LocalQuantileRegression.icc \
logLikelihoodPeak.hh \
LOrPECopulaSmoother.hh \
LOrPECopulaSmoother.icc \
LOrPEMarginalSmoother.hh \
lorpeSmooth1D.hh \
lorpeSmooth1D.icc \
mergeTwoHistos.hh \
mergeTwoHistos.icc \
mirrorWeight.hh \
MultivariateSumAccumulator.hh \
MultivariateSumsqAccumulator.hh \
MultivariateSumsqAccumulator.icc \
MultivariateWeightedSumAccumulator.hh \
MultivariateWeightedSumsqAccumulator.hh \
MultivariateWeightedSumsqAccumulator.icc \
neymanPearsonWindow1D.hh \
NMCombinationSequencer.hh \
NonparametricCompositeBuilder.hh \
NonparametricCompositeBuilder.icc \
NtHistoFill.hh \
NtNtupleFill.hh \
NtRectangularCut.hh \
NtRectangularCut.icc \
NtupleBuffer.hh \
NtupleBuffer.icc \
NtupleRecordTypes.hh \
NtupleRecordTypesFwd.hh \
NtupleReference.hh \
NUHistoAxis.hh \
OrderedPointND.hh \
OrderedPointND.icc \
PolyFilterCollection1D.hh \
PolyFilterCollection1D.icc \
ProductSymmetricBetaNDCdf.hh \
QuadraticOrthoPolyND.hh \
QuadraticOrthoPolyND.icc \
QuantileRegression1D.hh \
QuantileRegression1D.icc \
QuantileTable1D.hh \
RightCensoredDistribution.hh \
SampleAccumulator.hh \
SampleAccumulator.icc \
SbMomentsCalculator.hh \
ScalableGaussND.hh \
scanDensityAsWeight.hh \
SequentialCopulaSmoother.hh \
SequentialPolyFilterND.hh \
SequentialPolyFilterND.icc \
spearmansRho.hh \
spearmansRho.icc \
StatAccumulator.hh \
StatAccumulatorArr.hh \
StatAccumulatorPair.hh \
StatUtils.hh \
StatUtils.icc \
StorableHistoNDFunctor.hh \
StorableHistoNDFunctor.icc \
StorableInterpolationFunctor.hh \
StorableInterpolationFunctor.icc \
StorableMultivariateFunctor.hh \
StorableMultivariateFunctorReader.hh \
TruncatedDistribution1D.hh \
TwoPointsLTSLoss.hh \
TwoPointsLTSLoss.icc \
UnitMapInterpolationND.hh \
variableBandwidthSmooth1D.hh \
variableBandwidthSmooth1D.icc \
WeightedLTSLoss.hh \
WeightedLTSLoss.icc \
WeightedSampleAccumulator.hh \
WeightedSampleAccumulator.icc \
WeightedStatAccumulator.hh \
WeightedStatAccumulatorPair.hh
EXTRA_DIST = 00README.txt npstat_doxy.hh
all: all-am
.SUFFIXES:
.SUFFIXES: .cc .lo .o .obj
$(srcdir)/Makefile.in: $(srcdir)/Makefile.am $(am__configure_deps)
	@for dep in $?; do \
	  case '$(am__configure_deps)' in \
	    *$$dep*) \
	      ( cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh ) \
	        && { if test -f $@; then exit 0; else break; fi; }; \
	      exit 1;; \
	  esac; \
	done; \
	echo ' cd $(top_srcdir) && $(AUTOMAKE) --foreign npstat/stat/Makefile'; \
	$(am__cd) $(top_srcdir) && \
	  $(AUTOMAKE) --foreign npstat/stat/Makefile
.PRECIOUS: Makefile
Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status
	@case '$?' in \
	  *config.status*) \
	    cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \
	  *) \
	    echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \
	    cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \
	esac;
$(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES)
	cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
$(top_srcdir)/configure: $(am__configure_deps)
	cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
$(ACLOCAL_M4): $(am__aclocal_m4_deps)
	cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
$(am__aclocal_m4_deps):
clean-noinstLTLIBRARIES:
	-test -z "$(noinst_LTLIBRARIES)" || rm -f $(noinst_LTLIBRARIES)
	@list='$(noinst_LTLIBRARIES)'; \
	locs=`for p in $$list; do echo $$p; done | \
	      sed 's|^[^/]*$$|.|; s|/[^/]*$$||; s|$$|/so_locations|' | \
	      sort -u`; \
	test -z "$$locs" || { \
	  echo rm -f $${locs}; \
	  rm -f $${locs}; \
	}
libstat.la: $(libstat_la_OBJECTS) $(libstat_la_DEPENDENCIES) $(EXTRA_libstat_la_DEPENDENCIES)
	$(CXXLINK) $(libstat_la_OBJECTS) $(libstat_la_LIBADD) $(LIBS)
mostlyclean-compile:
	-rm -f *.$(OBJEXT)
distclean-compile:
	-rm -f *.tab.c
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/AbsCopulaSmoother.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/AbsDiscreteDistribution1D.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/AbsDistribution1D.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/AbsDistributionND.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/AbsNtuple.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/BernsteinFilter1DBuilder.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/BetaFilter1DBuilder.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/BinSummary.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/CompositeDistribution1D.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/CompositeDistributionND.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/ConstantBandwidthSmoother1D.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/CopulaInterpolationND.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/Copulas.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/CrossCovarianceAccumulator.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/DiscreteDistribution1DReader.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/DiscreteDistributions1D.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/Distribution1DFactory.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/Distribution1DReader.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/DistributionNDReader.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/Distributions1D.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/DistributionsND.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/DualHistoAxis.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/Filter1DBuilders.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/GaussianMixture1D.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/GridRandomizer.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/HistoAxis.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/HistoNDCdf.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/InterpolatedDistribution1D.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/JohnsonCurves.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/JohnsonKDESmoother.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/LOrPEMarginalSmoother.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/LeftCensoredDistribution.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/LocalPolyFilter1D.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/NMCombinationSequencer.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/NUHistoAxis.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/PolyFilterCollection1D.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/ProductSymmetricBetaNDCdf.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/QuadraticOrthoPolyND.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/QuantileTable1D.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/RightCensoredDistribution.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/SbMomentsBigGamma.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/SbMomentsCalculator.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/ScalableGaussND.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/SequentialCopulaSmoother.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/SequentialPolyFilterND.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/StatAccumulator.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/StatAccumulatorArr.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/StatAccumulatorPair.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/StorableMultivariateFunctor.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/StorableMultivariateFunctorReader.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/TruncatedDistribution1D.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/UnitMapInterpolationND.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/WeightedStatAccumulator.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/WeightedStatAccumulatorPair.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/amiseOptimalBandwidth.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/convertAxis.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/distributionReadError.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/fitSbParameters.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/logLikelihoodPeak.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/neymanPearsonWindow1D.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/scanDensityAsWeight.Plo@am__quote@
.cc.o:
@am__fastdepCXX_TRUE@	$(CXXCOMPILE) -MT $@ -MD -MP -MF $(DEPDIR)/$*.Tpo -c -o $@ $<
@am__fastdepCXX_TRUE@	$(am__mv) $(DEPDIR)/$*.Tpo $(DEPDIR)/$*.Po
@AMDEP_TRUE@@am__fastdepCXX_FALSE@	source='$<' object='$@' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCXX_FALSE@	DEPDIR=$(DEPDIR) $(CXXDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCXX_FALSE@	$(CXXCOMPILE) -c -o $@ $<
.cc.obj:
@am__fastdepCXX_TRUE@	$(CXXCOMPILE) -MT $@ -MD -MP -MF $(DEPDIR)/$*.Tpo -c -o $@ `$(CYGPATH_W) '$<'`
@am__fastdepCXX_TRUE@	$(am__mv) $(DEPDIR)/$*.Tpo $(DEPDIR)/$*.Po
@AMDEP_TRUE@@am__fastdepCXX_FALSE@	source='$<' object='$@' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCXX_FALSE@	DEPDIR=$(DEPDIR) $(CXXDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCXX_FALSE@	$(CXXCOMPILE) -c -o $@ `$(CYGPATH_W) '$<'`
.cc.lo:
@am__fastdepCXX_TRUE@	$(LTCXXCOMPILE) -MT $@ -MD -MP -MF $(DEPDIR)/$*.Tpo -c -o $@ $<
@am__fastdepCXX_TRUE@	$(am__mv) $(DEPDIR)/$*.Tpo $(DEPDIR)/$*.Plo
@AMDEP_TRUE@@am__fastdepCXX_FALSE@	source='$<' object='$@' libtool=yes @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCXX_FALSE@	DEPDIR=$(DEPDIR) $(CXXDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCXX_FALSE@	$(LTCXXCOMPILE) -c -o $@ $<
mostlyclean-libtool:
	-rm -f *.lo
clean-libtool:
	-rm -rf .libs _libs
install-includeHEADERS: $(include_HEADERS)
	@$(NORMAL_INSTALL)
	@list='$(include_HEADERS)'; test -n "$(includedir)" || list=; \
	if test -n "$$list"; then \
	  echo " $(MKDIR_P) '$(DESTDIR)$(includedir)'"; \
	  $(MKDIR_P) "$(DESTDIR)$(includedir)" || exit 1; \
	fi; \
	for p in $$list; do \
	  if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \
	  echo "$$d$$p"; \
	done | $(am__base_list) | \
	while read files; do \
	  echo " $(INSTALL_HEADER) $$files '$(DESTDIR)$(includedir)'"; \
	  $(INSTALL_HEADER) $$files "$(DESTDIR)$(includedir)" || exit $$?; \
	done
uninstall-includeHEADERS:
	@$(NORMAL_UNINSTALL)
	@list='$(include_HEADERS)'; test -n "$(includedir)" || list=; \
	files=`for p in $$list; do echo $$p; done | sed -e 's|^.*/||'`; \
	dir='$(DESTDIR)$(includedir)'; $(am__uninstall_files_from_dir)
ID: $(HEADERS) $(SOURCES) $(LISP) $(TAGS_FILES)
	list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \
	unique=`for i in $$list; do \
	    if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \
	  done | \
	  $(AWK) '{ files[$$0] = 1; nonempty = 1; } \
	      END { if (nonempty) { for (i in files) print i; }; }'`; \
	mkid -fID $$unique
tags: TAGS
TAGS: $(HEADERS) $(SOURCES) $(TAGS_DEPENDENCIES) \
		$(TAGS_FILES) $(LISP)
	set x; \
	here=`pwd`; \
	list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \
	unique=`for i in $$list; do \
	    if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \
	  done | \
	  $(AWK) '{ files[$$0] = 1; nonempty = 1; } \
	      END { if (nonempty) { for (i in files) print i; }; }'`; \
	shift; \
	if test -z "$(ETAGS_ARGS)$$*$$unique"; then :; else \
	  test -n "$$unique" || unique=$$empty_fix; \
	  if test $$# -gt 0; then \
	    $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \
	      "$$@" $$unique; \
	  else \
	    $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \
	      $$unique; \
	  fi; \
	fi
ctags: CTAGS
CTAGS: $(HEADERS) $(SOURCES) $(TAGS_DEPENDENCIES) \
		$(TAGS_FILES) $(LISP)
	list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \
	unique=`for i in $$list; do \
	    if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \
	  done | \
	  $(AWK) '{ files[$$0] = 1; nonempty = 1; } \
	      END { if (nonempty) { for (i in files) print i; }; }'`; \
	test -z "$(CTAGS_ARGS)$$unique" \
	  || $(CTAGS) $(CTAGSFLAGS) $(AM_CTAGSFLAGS) $(CTAGS_ARGS) \
	     $$unique
GTAGS:
	here=`$(am__cd) $(top_builddir) && pwd` \
	  && $(am__cd) $(top_srcdir) \
	  && gtags -i $(GTAGS_ARGS) "$$here"
cscopelist: $(HEADERS) $(SOURCES) $(LISP)
	list='$(SOURCES) $(HEADERS) $(LISP)'; \
	case "$(srcdir)" in \
	  [\\/]* | ?:[\\/]*) sdir="$(srcdir)" ;; \
	  *) sdir=$(subdir)/$(srcdir) ;; \
	esac; \
	for i in $$list; do \
	  if test -f "$$i"; then \
	    echo "$(subdir)/$$i"; \
	  else \
	    echo "$$sdir/$$i"; \
	  fi; \
	done >> $(top_builddir)/cscope.files
distclean-tags:
	-rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags
distdir: $(DISTFILES)
	@srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
	topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
	list='$(DISTFILES)'; \
	  dist_files=`for file in $$list; do echo $$file; done | \
	  sed -e "s|^$$srcdirstrip/||;t" \
	      -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \
	case $$dist_files in \
	  */*) $(MKDIR_P) `echo "$$dist_files" | \
			   sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \
			   sort -u` ;; \
	esac; \
	for file in $$dist_files; do \
	  if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \
	  if test -d $$d/$$file; then \
	    dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \
	    if test -d "$(distdir)/$$file"; then \
	      find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \
	    fi; \
	    if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \
	      cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \
	      find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \
	    fi; \
	    cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \
	  else \
	    test -f "$(distdir)/$$file" \
	    || cp -p $$d/$$file "$(distdir)/$$file" \
	    || exit 1; \
	  fi; \
	done
check-am: all-am
check: check-am
all-am: Makefile $(LTLIBRARIES) $(HEADERS)
installdirs:
	for dir in "$(DESTDIR)$(includedir)"; do \
	  test -z "$$dir" || $(MKDIR_P) "$$dir"; \
	done
install: install-am
install-exec: install-exec-am
install-data: install-data-am
uninstall: uninstall-am
install-am: all-am
	@$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am
installcheck: installcheck-am
install-strip:
	if test -z '$(STRIP)'; then \
	  $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
	    install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
	      install; \
	else \
	  $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
	    install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
	      "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \
	fi
mostlyclean-generic:
clean-generic:
distclean-generic:
	-test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES)
	-test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES)
maintainer-clean-generic:
	@echo "This command is intended for maintainers to use"
	@echo "it deletes files that may require special tools to rebuild."
clean: clean-am
clean-am: clean-generic clean-libtool clean-noinstLTLIBRARIES \
	mostlyclean-am
distclean: distclean-am
	-rm -rf ./$(DEPDIR)
	-rm -f Makefile
distclean-am: clean-am distclean-compile distclean-generic \
	distclean-tags
dvi: dvi-am
dvi-am:
html: html-am
html-am:
info: info-am
info-am:
install-data-am: install-includeHEADERS
install-dvi: install-dvi-am
install-dvi-am:
install-exec-am:
install-html: install-html-am
install-html-am:
install-info: install-info-am
install-info-am:
install-man:
install-pdf: install-pdf-am
install-pdf-am:
install-ps: install-ps-am
install-ps-am:
installcheck-am:
maintainer-clean: maintainer-clean-am
	-rm -rf ./$(DEPDIR)
	-rm -f Makefile
maintainer-clean-am: distclean-am maintainer-clean-generic
mostlyclean: mostlyclean-am
mostlyclean-am: mostlyclean-compile mostlyclean-generic \
	mostlyclean-libtool
pdf: pdf-am
pdf-am:
ps: ps-am
ps-am:
uninstall-am: uninstall-includeHEADERS
.MAKE: install-am install-strip
.PHONY: CTAGS GTAGS all all-am check check-am clean clean-generic \
	clean-libtool clean-noinstLTLIBRARIES cscopelist ctags \
	distclean distclean-compile distclean-generic \
	distclean-libtool distclean-tags distdir dvi dvi-am html \
	html-am info info-am install install-am install-data \
	install-data-am install-dvi install-dvi-am install-exec \
	install-exec-am install-html install-html-am \
	install-includeHEADERS install-info install-info-am \
	install-man install-pdf install-pdf-am install-ps \
	install-ps-am install-strip installcheck installcheck-am \
	installdirs maintainer-clean maintainer-clean-generic \
	mostlyclean mostlyclean-compile mostlyclean-generic \
	mostlyclean-libtool pdf pdf-am ps ps-am tags uninstall \
	uninstall-am uninstall-includeHEADERS
# Tell versions [3.59,3.63) of GNU make to not export all variables.
# Otherwise a system limit (for SysV at least) may be exceeded.
.NOEXPORT:
Index: trunk/npstat/stat/Makefile
===================================================================
--- trunk/npstat/stat/Makefile (revision 101)
+++ trunk/npstat/stat/Makefile (revision 102)
@@ -1,883 +1,883 @@
# Makefile.in generated by automake 1.12.2 from Makefile.am.
# npstat/stat/Makefile. Generated from Makefile.in by configure.
# Copyright (C) 1994-2012 Free Software Foundation, Inc.
# This Makefile.in is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE.
am__make_dryrun = \
{ \
am__dry=no; \
case $$MAKEFLAGS in \
*\\[\ \ ]*) \
echo 'am--echo: ; @echo "AM" OK' | $(MAKE) -f - 2>/dev/null \
| grep '^AM OK$$' >/dev/null || am__dry=yes;; \
*) \
for am__flg in $$MAKEFLAGS; do \
case $$am__flg in \
*=*|--*) ;; \
*n*) am__dry=yes; break;; \
esac; \
done;; \
esac; \
test $$am__dry = yes; \
}
pkgdatadir = $(datadir)/npstat
pkgincludedir = $(includedir)/npstat
pkglibdir = $(libdir)/npstat
pkglibexecdir = $(libexecdir)/npstat
am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd
install_sh_DATA = $(install_sh) -c -m 644
install_sh_PROGRAM = $(install_sh) -c
install_sh_SCRIPT = $(install_sh) -c
INSTALL_HEADER = $(INSTALL_DATA)
transform = $(program_transform_name)
NORMAL_INSTALL = :
PRE_INSTALL = :
POST_INSTALL = :
NORMAL_UNINSTALL = :
PRE_UNINSTALL = :
POST_UNINSTALL = :
build_triplet = x86_64-unknown-linux-gnu
host_triplet = x86_64-unknown-linux-gnu
subdir = npstat/stat
DIST_COMMON = $(include_HEADERS) $(srcdir)/Makefile.am \
$(srcdir)/Makefile.in $(top_srcdir)/depcomp
ACLOCAL_M4 = $(top_srcdir)/aclocal.m4
am__aclocal_m4_deps = $(top_srcdir)/m4/libtool.m4 \
$(top_srcdir)/m4/ltoptions.m4 $(top_srcdir)/m4/ltsugar.m4 \
$(top_srcdir)/m4/ltversion.m4 $(top_srcdir)/m4/lt~obsolete.m4 \
$(top_srcdir)/configure.ac
am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \
$(ACLOCAL_M4)
mkinstalldirs = $(install_sh) -d
CONFIG_CLEAN_FILES =
CONFIG_CLEAN_VPATH_FILES =
LTLIBRARIES = $(noinst_LTLIBRARIES)
libstat_la_LIBADD =
am_libstat_la_OBJECTS = AbsCopulaSmoother.lo AbsDistribution1D.lo \
AbsDistributionND.lo amiseOptimalBandwidth.lo \
CompositeDistribution1D.lo CompositeDistributionND.lo \
CopulaInterpolationND.lo Copulas.lo Distribution1DFactory.lo \
Distribution1DReader.lo DistributionNDReader.lo \
Distributions1D.lo DistributionsND.lo \
CrossCovarianceAccumulator.lo fitSbParameters.lo \
StatAccumulatorArr.lo HistoAxis.lo \
InterpolatedDistribution1D.lo JohnsonCurves.lo \
JohnsonKDESmoother.lo LocalPolyFilter1D.lo \
logLikelihoodPeak.lo PolyFilterCollection1D.lo \
SbMomentsBigGamma.lo SbMomentsCalculator.lo ScalableGaussND.lo \
SequentialCopulaSmoother.lo SequentialPolyFilterND.lo \
StatAccumulator.lo UnitMapInterpolationND.lo \
WeightedStatAccumulator.lo AbsNtuple.lo \
QuadraticOrthoPolyND.lo NMCombinationSequencer.lo \
Filter1DBuilders.lo StatAccumulatorPair.lo GridRandomizer.lo \
ConstantBandwidthSmoother1D.lo GaussianMixture1D.lo \
HistoNDCdf.lo scanDensityAsWeight.lo NUHistoAxis.lo \
distributionReadError.lo WeightedStatAccumulatorPair.lo \
convertAxis.lo ProductSymmetricBetaNDCdf.lo DualHistoAxis.lo \
BinSummary.lo StorableMultivariateFunctor.lo \
StorableMultivariateFunctorReader.lo \
TruncatedDistribution1D.lo neymanPearsonWindow1D.lo \
QuantileTable1D.lo LOrPEMarginalSmoother.lo \
LeftCensoredDistribution.lo RightCensoredDistribution.lo \
AbsDiscreteDistribution1D.lo DiscreteDistribution1DReader.lo \
DiscreteDistributions1D.lo BernsteinFilter1DBuilder.lo \
BetaFilter1DBuilder.lo
libstat_la_OBJECTS = $(am_libstat_la_OBJECTS)
DEFAULT_INCLUDES = -I.
depcomp = $(SHELL) $(top_srcdir)/depcomp
am__depfiles_maybe = depfiles
am__mv = mv -f
CXXCOMPILE = $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) \
$(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS)
LTCXXCOMPILE = $(LIBTOOL) --tag=CXX $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) \
--mode=compile $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) \
$(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS)
CXXLD = $(CXX)
CXXLINK = $(LIBTOOL) --tag=CXX $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) \
--mode=link $(CXXLD) $(AM_CXXFLAGS) $(CXXFLAGS) $(AM_LDFLAGS) \
$(LDFLAGS) -o $@
SOURCES = $(libstat_la_SOURCES)
DIST_SOURCES = $(libstat_la_SOURCES)
am__can_run_installinfo = \
case $$AM_UPDATE_INFO_DIR in \
n|no|NO) false;; \
*) (install-info --version) >/dev/null 2>&1;; \
esac
am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`;
am__vpath_adj = case $$p in \
$(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \
*) f=$$p;; \
esac;
am__strip_dir = f=`echo $$p | sed -e 's|^.*/||'`;
am__install_max = 40
am__nobase_strip_setup = \
srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*|]/\\\\&/g'`
am__nobase_strip = \
for p in $$list; do echo "$$p"; done | sed -e "s|$$srcdirstrip/||"
am__nobase_list = $(am__nobase_strip_setup); \
for p in $$list; do echo "$$p $$p"; done | \
sed "s| $$srcdirstrip/| |;"' / .*\//!s/ .*/ ./; s,\( .*\)/[^/]*$$,\1,' | \
$(AWK) 'BEGIN { files["."] = "" } { files[$$2] = files[$$2] " " $$1; \
if (++n[$$2] == $(am__install_max)) \
{ print $$2, files[$$2]; n[$$2] = 0; files[$$2] = "" } } \
END { for (dir in files) print dir, files[dir] }'
am__base_list = \
sed '$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;s/\n/ /g' | \
sed '$$!N;$$!N;$$!N;$$!N;s/\n/ /g'
am__uninstall_files_from_dir = { \
test -z "$$files" \
|| { test ! -d "$$dir" && test ! -f "$$dir" && test ! -r "$$dir"; } \
|| { echo " ( cd '$$dir' && rm -f" $$files ")"; \
$(am__cd) "$$dir" && rm -f $$files; }; \
}
am__installdirs = "$(DESTDIR)$(includedir)"
HEADERS = $(include_HEADERS)
ETAGS = etags
CTAGS = ctags
DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST)
ACLOCAL = ${SHELL} /home/igv/Hepforge/npstat/trunk/missing --run aclocal-1.12
AMTAR = $${TAR-tar}
AR = ar
AUTOCONF = ${SHELL} /home/igv/Hepforge/npstat/trunk/missing --run autoconf
AUTOHEADER = ${SHELL} /home/igv/Hepforge/npstat/trunk/missing --run autoheader
AUTOMAKE = ${SHELL} /home/igv/Hepforge/npstat/trunk/missing --run automake-1.12
AWK = gawk
CC = gcc
CCDEPMODE = depmode=gcc3
CFLAGS = -g -O2
CPP = gcc -E
CPPFLAGS =
CXX = g++
CXXCPP = g++ -E
CXXDEPMODE = depmode=gcc3
CXXFLAGS = -std=c++0x -g -O0 -Wall -W -Werror
CYGPATH_W = echo
DEFS = -DPACKAGE_NAME=\"npstat\" -DPACKAGE_TARNAME=\"npstat\" -DPACKAGE_VERSION=\"2.2.0\" -DPACKAGE_STRING=\"npstat\ 2.2.0\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE_URL=\"\" -DPACKAGE=\"npstat\" -DVERSION=\"2.2.0\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -DHAVE_DLFCN_H=1 -DLT_OBJDIR=\".libs/\"
DEPDIR = .deps
DEPS_CFLAGS = -I/usr/local/include
DEPS_LIBS = -L/usr/local/lib -lfftw3 -lm -lgeners
DLLTOOL = false
DSYMUTIL =
DUMPBIN =
ECHO_C =
ECHO_N = -n
ECHO_T =
EGREP = /bin/grep -E
EXEEXT =
F77 = g77
FFLAGS = -g -O2
FGREP = /bin/grep -F
FLIBS = -L/usr/lib/gcc/x86_64-redhat-linux/4.7.2 -L/usr/lib/gcc/x86_64-redhat-linux/4.7.2/../../../../lib64 -L/lib/../lib64 -L/usr/lib/../lib64 -L/usr/lib/gcc/x86_64-redhat-linux/4.7.2/../../.. -lgfortran -lm -lquadmath
GREP = /bin/grep
INSTALL = /bin/install -c
INSTALL_DATA = ${INSTALL} -m 644
INSTALL_PROGRAM = ${INSTALL}
INSTALL_SCRIPT = ${INSTALL}
INSTALL_STRIP_PROGRAM = $(install_sh) -c -s
LD = /bin/ld -m elf_x86_64
LDFLAGS =
LIBOBJS =
LIBS =
LIBTOOL = $(SHELL) $(top_builddir)/libtool
LIPO =
LN_S = ln -s
LTLIBOBJS =
MAKEINFO = ${SHELL} /home/igv/Hepforge/npstat/trunk/missing --run makeinfo
MANIFEST_TOOL = :
MKDIR_P = /bin/mkdir -p
NM = /bin/nm -B
NMEDIT =
OBJDUMP = objdump
OBJEXT = o
OTOOL =
OTOOL64 =
PACKAGE = npstat
PACKAGE_BUGREPORT =
PACKAGE_NAME = npstat
PACKAGE_STRING = npstat 2.2.0
PACKAGE_TARNAME = npstat
PACKAGE_URL =
PACKAGE_VERSION = 2.2.0
PATH_SEPARATOR = :
PKG_CONFIG = /bin/pkg-config
PKG_CONFIG_LIBDIR =
PKG_CONFIG_PATH = /usr/local/lib/pkgconfig
RANLIB = ranlib
SED = /bin/sed
SET_MAKE =
SHELL = /bin/sh
STRIP = strip
VERSION = 2.2.0
abs_builddir = /home/igv/Hepforge/npstat/trunk/npstat/stat
abs_srcdir = /home/igv/Hepforge/npstat/trunk/npstat/stat
abs_top_builddir = /home/igv/Hepforge/npstat/trunk
abs_top_srcdir = /home/igv/Hepforge/npstat/trunk
ac_ct_AR = ar
ac_ct_CC = gcc
ac_ct_CXX = g++
ac_ct_DUMPBIN =
ac_ct_F77 = g77
am__include = include
am__leading_dot = .
am__quote =
am__tar = $${TAR-tar} chof - "$$tardir"
am__untar = $${TAR-tar} xf -
bindir = ${exec_prefix}/bin
build = x86_64-unknown-linux-gnu
build_alias =
build_cpu = x86_64
build_os = linux-gnu
build_vendor = unknown
builddir = .
datadir = ${datarootdir}
datarootdir = ${prefix}/share
docdir = ${datarootdir}/doc/${PACKAGE_TARNAME}
dvidir = ${docdir}
exec_prefix = ${prefix}
host = x86_64-unknown-linux-gnu
host_alias =
host_cpu = x86_64
host_os = linux-gnu
host_vendor = unknown
htmldir = ${docdir}
includedir = ${prefix}/include/npstat/stat
infodir = ${datarootdir}/info
install_sh = ${SHELL} /home/igv/Hepforge/npstat/trunk/install-sh
libdir = ${exec_prefix}/lib
libexecdir = ${exec_prefix}/libexec
localedir = ${datarootdir}/locale
localstatedir = ${prefix}/var
mandir = ${datarootdir}/man
mkdir_p = $(MKDIR_P)
oldincludedir = /usr/include
pdfdir = ${docdir}
prefix = /usr/local
program_transform_name = s,x,x,
psdir = ${docdir}
sbindir = ${exec_prefix}/sbin
sharedstatedir = ${prefix}/com
srcdir = .
sysconfdir = ${prefix}/etc
target_alias =
top_build_prefix = ../../
top_builddir = ../..
top_srcdir = ../..
AM_CPPFLAGS = $(DEPS_CFLAGS)
noinst_LTLIBRARIES = libstat.la
libstat_la_SOURCES = AbsCopulaSmoother.cc AbsDistribution1D.cc \
AbsDistributionND.cc amiseOptimalBandwidth.cc CompositeDistribution1D.cc \
CompositeDistributionND.cc CopulaInterpolationND.cc Copulas.cc \
Distribution1DFactory.cc Distribution1DReader.cc DistributionNDReader.cc \
Distributions1D.cc DistributionsND.cc CrossCovarianceAccumulator.cc \
fitSbParameters.cc StatAccumulatorArr.cc HistoAxis.cc \
InterpolatedDistribution1D.cc JohnsonCurves.cc JohnsonKDESmoother.cc \
LocalPolyFilter1D.cc logLikelihoodPeak.cc PolyFilterCollection1D.cc \
SbMomentsBigGamma.cc SbMomentsCalculator.cc ScalableGaussND.cc \
SequentialCopulaSmoother.cc SequentialPolyFilterND.cc StatAccumulator.cc \
UnitMapInterpolationND.cc WeightedStatAccumulator.cc AbsNtuple.cc \
QuadraticOrthoPolyND.cc NMCombinationSequencer.cc Filter1DBuilders.cc \
StatAccumulatorPair.cc GridRandomizer.cc ConstantBandwidthSmoother1D.cc \
GaussianMixture1D.cc HistoNDCdf.cc scanDensityAsWeight.cc NUHistoAxis.cc \
distributionReadError.cc WeightedStatAccumulatorPair.cc convertAxis.cc \
ProductSymmetricBetaNDCdf.cc DualHistoAxis.cc BinSummary.cc \
StorableMultivariateFunctor.cc StorableMultivariateFunctorReader.cc \
TruncatedDistribution1D.cc neymanPearsonWindow1D.cc QuantileTable1D.cc \
LOrPEMarginalSmoother.cc LeftCensoredDistribution.cc \
RightCensoredDistribution.cc AbsDiscreteDistribution1D.cc \
DiscreteDistribution1DReader.cc DiscreteDistributions1D.cc \
BernsteinFilter1DBuilder.cc BetaFilter1DBuilder.cc
INCLUDES = -I../../
include_HEADERS = AbsBandwidthCV.hh \
AbsCompositeDistroBuilder.hh \
AbsCompositeDistroBuilder.icc \
AbsCopulaSmoother.hh \
AbsCopulaSmoother.icc \
AbsDiscreteDistribution1D.hh \
AbsDistribution1D.hh \
AbsDistributionND.hh \
AbsDistributionND.icc \
AbsFilter1DBuilder.hh \
AbsGridInterpolatedDistribution.hh \
AbsLossCalculator.hh \
AbsMarginalSmoother.hh \
AbsMarginalSmoother.icc \
AbsNtuple.hh \
AbsNtuple.icc \
AbsPolyFilter1D.hh \
AbsPolyFilterND.hh \
amiseOptimalBandwidth.hh \
amiseOptimalBandwidth.icc \
ArchivedNtuple.hh \
ArchivedNtuple.icc \
ArrayProjectors.hh \
ArrayProjectors.icc \
arrayStats.hh \
arrayStats.icc \
BandwidthCVLeastSquares1D.hh \
BandwidthCVLeastSquares1D.icc \
BandwidthCVLeastSquaresND.hh \
BandwidthCVLeastSquaresND.icc \
BandwidthCVPseudoLogli1D.hh \
BandwidthCVPseudoLogli1D.icc \
BandwidthCVPseudoLogliND.hh \
BandwidthCVPseudoLogliND.icc \
BernsteinFilter1DBuilder.hh \
- bernsteinOptimalBandwidth.hh \
- bernsteinOptimalBandwidth.icc \
+ betaKernelsBandwidth.hh \
+ betaKernelsBandwidth.icc \
BetaFilter1DBuilder.hh \
BinSummary.hh \
BinSummary.icc \
CensoredQuantileRegression.hh \
CensoredQuantileRegression.icc \
Column.hh \
Column.icc \
CompositeDistribution1D.hh \
CompositeDistributionND.hh \
CompositeDistributionND.icc \
CompositeDistros1D.hh \
ConstantBandwidthSmoother1D.hh \
ConstantBandwidthSmootherND.hh \
ConstantBandwidthSmootherND.icc \
convertAxis.hh \
CopulaInterpolationND.hh \
Copulas.hh \
CrossCovarianceAccumulator.hh \
CrossCovarianceAccumulator.icc \
CVCopulaSmoother.hh \
CVCopulaSmoother.icc \
DensityScanND.hh \
DiscreteDistribution1DReader.hh \
DiscreteDistributions1D.hh \
Distribution1DFactory.hh \
Distribution1DReader.hh \
DistributionNDReader.hh \
Distributions1D.hh \
Distributions1D.icc \
DistributionsND.hh \
DistributionsND.icc \
distributionReadError.hh \
DualHistoAxis.hh \
empiricalCopula.hh \
empiricalCopulaHisto.hh \
empiricalCopulaHisto.icc \
empiricalCopula.icc \
Filter1DBuilders.hh \
FitUtils.hh \
FitUtils.icc \
GaussianMixture1D.hh \
griddedRobustRegression.hh \
griddedRobustRegression.icc \
GriddedRobustRegressionStop.hh \
GridInterpolatedDistribution.hh \
GridInterpolatedDistribution.icc \
GridRandomizer.hh \
HistoAxis.hh \
HistoND.hh \
HistoND.icc \
HistoNDCdf.hh \
HistoNDFunctorInstances.hh \
histoStats.hh \
histoStats.icc \
InMemoryNtuple.hh \
InMemoryNtuple.icc \
InterpolatedDistribution1D.hh \
interpolateHistoND.hh \
interpolateHistoND.icc \
InterpolationFunctorInstances.hh \
JohnsonCurves.hh \
JohnsonKDESmoother.hh \
KDECopulaSmoother.hh \
KDECopulaSmoother.icc \
KDEFilterND.hh \
KDEFilterND.icc \
kendallsTau.hh \
kendallsTau.icc \
LeftCensoredDistribution.hh \
LocalLogisticRegression.hh \
LocalLogisticRegression.icc \
LocalPolyFilter1D.hh \
LocalPolyFilter1D.icc \
LocalPolyFilterND.hh \
LocalPolyFilterND.icc \
LocalQuadraticLeastSquaresND.hh \
LocalQuadraticLeastSquaresND.icc \
LocalQuantileRegression.hh \
LocalQuantileRegression.icc \
logLikelihoodPeak.hh \
LOrPECopulaSmoother.hh \
LOrPECopulaSmoother.icc \
LOrPEMarginalSmoother.hh \
lorpeSmooth1D.hh \
lorpeSmooth1D.icc \
mergeTwoHistos.hh \
mergeTwoHistos.icc \
mirrorWeight.hh \
MultivariateSumAccumulator.hh \
MultivariateSumsqAccumulator.hh \
MultivariateSumsqAccumulator.icc \
MultivariateWeightedSumAccumulator.hh \
MultivariateWeightedSumsqAccumulator.hh \
MultivariateWeightedSumsqAccumulator.icc \
neymanPearsonWindow1D.hh \
NMCombinationSequencer.hh \
NonparametricCompositeBuilder.hh \
NonparametricCompositeBuilder.icc \
NtHistoFill.hh \
NtNtupleFill.hh \
NtRectangularCut.hh \
NtRectangularCut.icc \
NtupleBuffer.hh \
NtupleBuffer.icc \
NtupleRecordTypes.hh \
NtupleRecordTypesFwd.hh \
NtupleReference.hh \
NUHistoAxis.hh \
OrderedPointND.hh \
OrderedPointND.icc \
PolyFilterCollection1D.hh \
PolyFilterCollection1D.icc \
ProductSymmetricBetaNDCdf.hh \
QuadraticOrthoPolyND.hh \
QuadraticOrthoPolyND.icc \
QuantileRegression1D.hh \
QuantileRegression1D.icc \
QuantileTable1D.hh \
RightCensoredDistribution.hh \
SampleAccumulator.hh \
SampleAccumulator.icc \
SbMomentsCalculator.hh \
ScalableGaussND.hh \
scanDensityAsWeight.hh \
SequentialCopulaSmoother.hh \
SequentialPolyFilterND.hh \
SequentialPolyFilterND.icc \
spearmansRho.hh \
spearmansRho.icc \
StatAccumulator.hh \
StatAccumulatorArr.hh \
StatAccumulatorPair.hh \
StatUtils.hh \
StatUtils.icc \
StorableHistoNDFunctor.hh \
StorableHistoNDFunctor.icc \
StorableInterpolationFunctor.hh \
StorableInterpolationFunctor.icc \
StorableMultivariateFunctor.hh \
StorableMultivariateFunctorReader.hh \
TruncatedDistribution1D.hh \
TwoPointsLTSLoss.hh \
TwoPointsLTSLoss.icc \
UnitMapInterpolationND.hh \
variableBandwidthSmooth1D.hh \
variableBandwidthSmooth1D.icc \
WeightedLTSLoss.hh \
WeightedLTSLoss.icc \
WeightedSampleAccumulator.hh \
WeightedSampleAccumulator.icc \
WeightedStatAccumulator.hh \
WeightedStatAccumulatorPair.hh
EXTRA_DIST = 00README.txt npstat_doxy.hh
all: all-am
.SUFFIXES:
.SUFFIXES: .cc .lo .o .obj
$(srcdir)/Makefile.in: $(srcdir)/Makefile.am $(am__configure_deps)
@for dep in $?; do \
case '$(am__configure_deps)' in \
*$$dep*) \
( cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh ) \
&& { if test -f $@; then exit 0; else break; fi; }; \
exit 1;; \
esac; \
done; \
echo ' cd $(top_srcdir) && $(AUTOMAKE) --foreign npstat/stat/Makefile'; \
$(am__cd) $(top_srcdir) && \
$(AUTOMAKE) --foreign npstat/stat/Makefile
.PRECIOUS: Makefile
Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status
@case '$?' in \
*config.status*) \
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \
*) \
echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \
cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \
esac;
$(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
$(top_srcdir)/configure: $(am__configure_deps)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
$(ACLOCAL_M4): $(am__aclocal_m4_deps)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
$(am__aclocal_m4_deps):
clean-noinstLTLIBRARIES:
-test -z "$(noinst_LTLIBRARIES)" || rm -f $(noinst_LTLIBRARIES)
@list='$(noinst_LTLIBRARIES)'; \
locs=`for p in $$list; do echo $$p; done | \
sed 's|^[^/]*$$|.|; s|/[^/]*$$||; s|$$|/so_locations|' | \
sort -u`; \
test -z "$$locs" || { \
echo rm -f $${locs}; \
rm -f $${locs}; \
}
libstat.la: $(libstat_la_OBJECTS) $(libstat_la_DEPENDENCIES) $(EXTRA_libstat_la_DEPENDENCIES)
$(CXXLINK) $(libstat_la_OBJECTS) $(libstat_la_LIBADD) $(LIBS)
mostlyclean-compile:
-rm -f *.$(OBJEXT)
distclean-compile:
-rm -f *.tab.c
include ./$(DEPDIR)/AbsCopulaSmoother.Plo
include ./$(DEPDIR)/AbsDiscreteDistribution1D.Plo
include ./$(DEPDIR)/AbsDistribution1D.Plo
include ./$(DEPDIR)/AbsDistributionND.Plo
include ./$(DEPDIR)/AbsNtuple.Plo
include ./$(DEPDIR)/BernsteinFilter1DBuilder.Plo
include ./$(DEPDIR)/BetaFilter1DBuilder.Plo
include ./$(DEPDIR)/BinSummary.Plo
include ./$(DEPDIR)/CompositeDistribution1D.Plo
include ./$(DEPDIR)/CompositeDistributionND.Plo
include ./$(DEPDIR)/ConstantBandwidthSmoother1D.Plo
include ./$(DEPDIR)/CopulaInterpolationND.Plo
include ./$(DEPDIR)/Copulas.Plo
include ./$(DEPDIR)/CrossCovarianceAccumulator.Plo
include ./$(DEPDIR)/DiscreteDistribution1DReader.Plo
include ./$(DEPDIR)/DiscreteDistributions1D.Plo
include ./$(DEPDIR)/Distribution1DFactory.Plo
include ./$(DEPDIR)/Distribution1DReader.Plo
include ./$(DEPDIR)/DistributionNDReader.Plo
include ./$(DEPDIR)/Distributions1D.Plo
include ./$(DEPDIR)/DistributionsND.Plo
include ./$(DEPDIR)/DualHistoAxis.Plo
include ./$(DEPDIR)/Filter1DBuilders.Plo
include ./$(DEPDIR)/GaussianMixture1D.Plo
include ./$(DEPDIR)/GridRandomizer.Plo
include ./$(DEPDIR)/HistoAxis.Plo
include ./$(DEPDIR)/HistoNDCdf.Plo
include ./$(DEPDIR)/InterpolatedDistribution1D.Plo
include ./$(DEPDIR)/JohnsonCurves.Plo
include ./$(DEPDIR)/JohnsonKDESmoother.Plo
include ./$(DEPDIR)/LOrPEMarginalSmoother.Plo
include ./$(DEPDIR)/LeftCensoredDistribution.Plo
include ./$(DEPDIR)/LocalPolyFilter1D.Plo
include ./$(DEPDIR)/NMCombinationSequencer.Plo
include ./$(DEPDIR)/NUHistoAxis.Plo
include ./$(DEPDIR)/PolyFilterCollection1D.Plo
include ./$(DEPDIR)/ProductSymmetricBetaNDCdf.Plo
include ./$(DEPDIR)/QuadraticOrthoPolyND.Plo
include ./$(DEPDIR)/QuantileTable1D.Plo
include ./$(DEPDIR)/RightCensoredDistribution.Plo
include ./$(DEPDIR)/SbMomentsBigGamma.Plo
include ./$(DEPDIR)/SbMomentsCalculator.Plo
include ./$(DEPDIR)/ScalableGaussND.Plo
include ./$(DEPDIR)/SequentialCopulaSmoother.Plo
include ./$(DEPDIR)/SequentialPolyFilterND.Plo
include ./$(DEPDIR)/StatAccumulator.Plo
include ./$(DEPDIR)/StatAccumulatorArr.Plo
include ./$(DEPDIR)/StatAccumulatorPair.Plo
include ./$(DEPDIR)/StorableMultivariateFunctor.Plo
include ./$(DEPDIR)/StorableMultivariateFunctorReader.Plo
include ./$(DEPDIR)/TruncatedDistribution1D.Plo
include ./$(DEPDIR)/UnitMapInterpolationND.Plo
include ./$(DEPDIR)/WeightedStatAccumulator.Plo
include ./$(DEPDIR)/WeightedStatAccumulatorPair.Plo
include ./$(DEPDIR)/amiseOptimalBandwidth.Plo
include ./$(DEPDIR)/convertAxis.Plo
include ./$(DEPDIR)/distributionReadError.Plo
include ./$(DEPDIR)/fitSbParameters.Plo
include ./$(DEPDIR)/logLikelihoodPeak.Plo
include ./$(DEPDIR)/neymanPearsonWindow1D.Plo
include ./$(DEPDIR)/scanDensityAsWeight.Plo
.cc.o:
$(CXXCOMPILE) -MT $@ -MD -MP -MF $(DEPDIR)/$*.Tpo -c -o $@ $<
$(am__mv) $(DEPDIR)/$*.Tpo $(DEPDIR)/$*.Po
# source='$<' object='$@' libtool=no \
# DEPDIR=$(DEPDIR) $(CXXDEPMODE) $(depcomp) \
# $(CXXCOMPILE) -c -o $@ $<
.cc.obj:
$(CXXCOMPILE) -MT $@ -MD -MP -MF $(DEPDIR)/$*.Tpo -c -o $@ `$(CYGPATH_W) '$<'`
$(am__mv) $(DEPDIR)/$*.Tpo $(DEPDIR)/$*.Po
# source='$<' object='$@' libtool=no \
# DEPDIR=$(DEPDIR) $(CXXDEPMODE) $(depcomp) \
# $(CXXCOMPILE) -c -o $@ `$(CYGPATH_W) '$<'`
.cc.lo:
$(LTCXXCOMPILE) -MT $@ -MD -MP -MF $(DEPDIR)/$*.Tpo -c -o $@ $<
$(am__mv) $(DEPDIR)/$*.Tpo $(DEPDIR)/$*.Plo
# source='$<' object='$@' libtool=yes \
# DEPDIR=$(DEPDIR) $(CXXDEPMODE) $(depcomp) \
# $(LTCXXCOMPILE) -c -o $@ $<
mostlyclean-libtool:
-rm -f *.lo
clean-libtool:
-rm -rf .libs _libs
install-includeHEADERS: $(include_HEADERS)
@$(NORMAL_INSTALL)
@list='$(include_HEADERS)'; test -n "$(includedir)" || list=; \
if test -n "$$list"; then \
echo " $(MKDIR_P) '$(DESTDIR)$(includedir)'"; \
$(MKDIR_P) "$(DESTDIR)$(includedir)" || exit 1; \
fi; \
for p in $$list; do \
if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \
echo "$$d$$p"; \
done | $(am__base_list) | \
while read files; do \
echo " $(INSTALL_HEADER) $$files '$(DESTDIR)$(includedir)'"; \
$(INSTALL_HEADER) $$files "$(DESTDIR)$(includedir)" || exit $$?; \
done
uninstall-includeHEADERS:
@$(NORMAL_UNINSTALL)
@list='$(include_HEADERS)'; test -n "$(includedir)" || list=; \
files=`for p in $$list; do echo $$p; done | sed -e 's|^.*/||'`; \
dir='$(DESTDIR)$(includedir)'; $(am__uninstall_files_from_dir)
ID: $(HEADERS) $(SOURCES) $(LISP) $(TAGS_FILES)
list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \
unique=`for i in $$list; do \
if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \
done | \
$(AWK) '{ files[$$0] = 1; nonempty = 1; } \
END { if (nonempty) { for (i in files) print i; }; }'`; \
mkid -fID $$unique
tags: TAGS
TAGS: $(HEADERS) $(SOURCES) $(TAGS_DEPENDENCIES) \
$(TAGS_FILES) $(LISP)
set x; \
here=`pwd`; \
list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \
unique=`for i in $$list; do \
if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \
done | \
$(AWK) '{ files[$$0] = 1; nonempty = 1; } \
END { if (nonempty) { for (i in files) print i; }; }'`; \
shift; \
if test -z "$(ETAGS_ARGS)$$*$$unique"; then :; else \
test -n "$$unique" || unique=$$empty_fix; \
if test $$# -gt 0; then \
$(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \
"$$@" $$unique; \
else \
$(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \
$$unique; \
fi; \
fi
ctags: CTAGS
CTAGS: $(HEADERS) $(SOURCES) $(TAGS_DEPENDENCIES) \
$(TAGS_FILES) $(LISP)
list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \
unique=`for i in $$list; do \
if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \
done | \
$(AWK) '{ files[$$0] = 1; nonempty = 1; } \
END { if (nonempty) { for (i in files) print i; }; }'`; \
test -z "$(CTAGS_ARGS)$$unique" \
|| $(CTAGS) $(CTAGSFLAGS) $(AM_CTAGSFLAGS) $(CTAGS_ARGS) \
$$unique
GTAGS:
here=`$(am__cd) $(top_builddir) && pwd` \
&& $(am__cd) $(top_srcdir) \
&& gtags -i $(GTAGS_ARGS) "$$here"
cscopelist: $(HEADERS) $(SOURCES) $(LISP)
list='$(SOURCES) $(HEADERS) $(LISP)'; \
case "$(srcdir)" in \
[\\/]* | ?:[\\/]*) sdir="$(srcdir)" ;; \
*) sdir=$(subdir)/$(srcdir) ;; \
esac; \
for i in $$list; do \
if test -f "$$i"; then \
echo "$(subdir)/$$i"; \
else \
echo "$$sdir/$$i"; \
fi; \
done >> $(top_builddir)/cscope.files
distclean-tags:
-rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags
distdir: $(DISTFILES)
@srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
list='$(DISTFILES)'; \
dist_files=`for file in $$list; do echo $$file; done | \
sed -e "s|^$$srcdirstrip/||;t" \
-e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \
case $$dist_files in \
*/*) $(MKDIR_P) `echo "$$dist_files" | \
sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \
sort -u` ;; \
esac; \
for file in $$dist_files; do \
if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \
if test -d $$d/$$file; then \
dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \
if test -d "$(distdir)/$$file"; then \
find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \
fi; \
if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \
cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \
find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \
fi; \
cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \
else \
test -f "$(distdir)/$$file" \
|| cp -p $$d/$$file "$(distdir)/$$file" \
|| exit 1; \
fi; \
done
check-am: all-am
check: check-am
all-am: Makefile $(LTLIBRARIES) $(HEADERS)
installdirs:
for dir in "$(DESTDIR)$(includedir)"; do \
test -z "$$dir" || $(MKDIR_P) "$$dir"; \
done
install: install-am
install-exec: install-exec-am
install-data: install-data-am
uninstall: uninstall-am
install-am: all-am
@$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am
installcheck: installcheck-am
install-strip:
if test -z '$(STRIP)'; then \
$(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
install; \
else \
$(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
"INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \
fi
mostlyclean-generic:
clean-generic:
distclean-generic:
-test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES)
-test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES)
maintainer-clean-generic:
@echo "This command is intended for maintainers to use"
@echo "it deletes files that may require special tools to rebuild."
clean: clean-am
clean-am: clean-generic clean-libtool clean-noinstLTLIBRARIES \
mostlyclean-am
distclean: distclean-am
-rm -rf ./$(DEPDIR)
-rm -f Makefile
distclean-am: clean-am distclean-compile distclean-generic \
distclean-tags
dvi: dvi-am
dvi-am:
html: html-am
html-am:
info: info-am
info-am:
install-data-am: install-includeHEADERS
install-dvi: install-dvi-am
install-dvi-am:
install-exec-am:
install-html: install-html-am
install-html-am:
install-info: install-info-am
install-info-am:
install-man:
install-pdf: install-pdf-am
install-pdf-am:
install-ps: install-ps-am
install-ps-am:
installcheck-am:
maintainer-clean: maintainer-clean-am
-rm -rf ./$(DEPDIR)
-rm -f Makefile
maintainer-clean-am: distclean-am maintainer-clean-generic
mostlyclean: mostlyclean-am
mostlyclean-am: mostlyclean-compile mostlyclean-generic \
mostlyclean-libtool
pdf: pdf-am
pdf-am:
ps: ps-am
ps-am:
uninstall-am: uninstall-includeHEADERS
.MAKE: install-am install-strip
.PHONY: CTAGS GTAGS all all-am check check-am clean clean-generic \
clean-libtool clean-noinstLTLIBRARIES cscopelist ctags \
distclean distclean-compile distclean-generic \
distclean-libtool distclean-tags distdir dvi dvi-am html \
html-am info info-am install install-am install-data \
install-data-am install-dvi install-dvi-am install-exec \
install-exec-am install-html install-html-am \
install-includeHEADERS install-info install-info-am \
install-man install-pdf install-pdf-am install-ps \
install-ps-am install-strip installcheck installcheck-am \
installdirs maintainer-clean maintainer-clean-generic \
mostlyclean mostlyclean-compile mostlyclean-generic \
mostlyclean-libtool pdf pdf-am ps ps-am tags uninstall \
uninstall-am uninstall-includeHEADERS
# Tell versions [3.59,3.63) of GNU make to not export all variables.
# Otherwise a system limit (for SysV at least) may be exceeded.
.NOEXPORT:
Index: trunk/npstat/stat/betaKernelsBandwidth.hh
===================================================================
--- trunk/npstat/stat/betaKernelsBandwidth.hh (revision 0)
+++ trunk/npstat/stat/betaKernelsBandwidth.hh (revision 102)
@@ -0,0 +1,62 @@
+#ifndef NPSTAT_BETAKERNELSBANDWIDTH_HH_
+#define NPSTAT_BETAKERNELSBANDWIDTH_HH_
+
+/*!
+// \file betaKernelsBandwidth.hh
+//
+// \brief Optimal bandwidth for density estimation with beta kernels
+//
+// The formulae implemented in this code come from the paper by
+// S.X. Chen, "Beta kernel estimators for density functions",
+// Computational Statistics & Data Analysis 31, pp. 131-145 (1999).
+// Note that printed versions of formulae for AMISE contain algebraic
+// mistakes. These formulae have instead been rederived starting from
+// equations 4.2 and 4.3 in the paper.
+//
+// Bandwidth values returned by this function may not be optimal at all
+// for finite samples as the precision of curve estimation by sums of
+// beta functions drops very sharply as a function of derivative number
+// (only the terms proportional to the first and the second derivatives
+// are considered in the paper which is sufficient for asymptotic reasoning).
+// Thus the returned bandwidth values should be used as an approximate guide
+// only, perhaps as starting points for a cross validation bandwidth scan.
+// Both b1* and b2* should be calculated -- the difference between them
+// gives an idea about potential spread of the optimal bandwidth.
+//
+// Author: I. Volobouev
+//
+// June 2013
+*/
+
+namespace npstat {
+ /**
+ // AMISE optimal bandwidth for density estimation by beta kernels.
+ // The arguments are as follows:
+ //
+ // npoints -- Number of points in the data sample.
+ //
+ // fvalues -- Array of scanned values of the reference density.
+ // It is assumed that the density is scanned at the
+ // bin centers on the [0, 1] interval.
+ //
+ // nValues -- Number of elements in the array "fvalues".
+ //
+ // returnB2Star -- If "true", the function will return b_2* from Chen's
+ // paper (and corresponding AMISE), otherwise it will
+ // return b_1* (using corrected algebra).
+ //
+ // expectedAmise -- If this argument is provided, it will be filled
+ // with the expected AMISE value.
+ //
+ // The generalized Bernstein polynomial degree is simply the inverse
+ // of the bandwidth.
+ */
+ template<typename Real>
+ double betaKernelsBandwidth(double npoints, const Real* fvalues,
+ unsigned long nValues, bool returnB2Star,
+ double* expectedAmise = 0);
+}
+
+#include "npstat/stat/betaKernelsBandwidth.icc"
+
+#endif // NPSTAT_BETAKERNELSBANDWIDTH_HH_
Index: trunk/npstat/stat/Makefile.am
===================================================================
--- trunk/npstat/stat/Makefile.am (revision 101)
+++ trunk/npstat/stat/Makefile.am (revision 102)
@@ -1,222 +1,222 @@
AM_CPPFLAGS = $(DEPS_CFLAGS)
noinst_LTLIBRARIES = libstat.la
libstat_la_SOURCES = AbsCopulaSmoother.cc AbsDistribution1D.cc \
AbsDistributionND.cc amiseOptimalBandwidth.cc CompositeDistribution1D.cc \
CompositeDistributionND.cc CopulaInterpolationND.cc Copulas.cc \
Distribution1DFactory.cc Distribution1DReader.cc DistributionNDReader.cc \
Distributions1D.cc DistributionsND.cc CrossCovarianceAccumulator.cc \
fitSbParameters.cc StatAccumulatorArr.cc HistoAxis.cc \
InterpolatedDistribution1D.cc JohnsonCurves.cc JohnsonKDESmoother.cc \
LocalPolyFilter1D.cc logLikelihoodPeak.cc PolyFilterCollection1D.cc \
SbMomentsBigGamma.cc SbMomentsCalculator.cc ScalableGaussND.cc \
SequentialCopulaSmoother.cc SequentialPolyFilterND.cc StatAccumulator.cc \
UnitMapInterpolationND.cc WeightedStatAccumulator.cc AbsNtuple.cc \
QuadraticOrthoPolyND.cc NMCombinationSequencer.cc Filter1DBuilders.cc \
StatAccumulatorPair.cc GridRandomizer.cc ConstantBandwidthSmoother1D.cc \
GaussianMixture1D.cc HistoNDCdf.cc scanDensityAsWeight.cc NUHistoAxis.cc \
distributionReadError.cc WeightedStatAccumulatorPair.cc convertAxis.cc \
ProductSymmetricBetaNDCdf.cc DualHistoAxis.cc BinSummary.cc \
StorableMultivariateFunctor.cc StorableMultivariateFunctorReader.cc \
TruncatedDistribution1D.cc neymanPearsonWindow1D.cc QuantileTable1D.cc \
LOrPEMarginalSmoother.cc LeftCensoredDistribution.cc \
RightCensoredDistribution.cc AbsDiscreteDistribution1D.cc \
DiscreteDistribution1DReader.cc DiscreteDistributions1D.cc \
BernsteinFilter1DBuilder.cc BetaFilter1DBuilder.cc
INCLUDES = -I@top_srcdir@/
includedir = ${prefix}/include/npstat/stat
include_HEADERS = AbsBandwidthCV.hh \
AbsCompositeDistroBuilder.hh \
AbsCompositeDistroBuilder.icc \
AbsCopulaSmoother.hh \
AbsCopulaSmoother.icc \
AbsDiscreteDistribution1D.hh \
AbsDistribution1D.hh \
AbsDistributionND.hh \
AbsDistributionND.icc \
AbsFilter1DBuilder.hh \
AbsGridInterpolatedDistribution.hh \
AbsLossCalculator.hh \
AbsMarginalSmoother.hh \
AbsMarginalSmoother.icc \
AbsNtuple.hh \
AbsNtuple.icc \
AbsPolyFilter1D.hh \
AbsPolyFilterND.hh \
amiseOptimalBandwidth.hh \
amiseOptimalBandwidth.icc \
ArchivedNtuple.hh \
ArchivedNtuple.icc \
ArrayProjectors.hh \
ArrayProjectors.icc \
arrayStats.hh \
arrayStats.icc \
BandwidthCVLeastSquares1D.hh \
BandwidthCVLeastSquares1D.icc \
BandwidthCVLeastSquaresND.hh \
BandwidthCVLeastSquaresND.icc \
BandwidthCVPseudoLogli1D.hh \
BandwidthCVPseudoLogli1D.icc \
BandwidthCVPseudoLogliND.hh \
BandwidthCVPseudoLogliND.icc \
BernsteinFilter1DBuilder.hh \
- bernsteinOptimalBandwidth.hh \
- bernsteinOptimalBandwidth.icc \
+ betaKernelsBandwidth.hh \
+ betaKernelsBandwidth.icc \
BetaFilter1DBuilder.hh \
BinSummary.hh \
BinSummary.icc \
CensoredQuantileRegression.hh \
CensoredQuantileRegression.icc \
Column.hh \
Column.icc \
CompositeDistribution1D.hh \
CompositeDistributionND.hh \
CompositeDistributionND.icc \
CompositeDistros1D.hh \
ConstantBandwidthSmoother1D.hh \
ConstantBandwidthSmootherND.hh \
ConstantBandwidthSmootherND.icc \
convertAxis.hh \
CopulaInterpolationND.hh \
Copulas.hh \
CrossCovarianceAccumulator.hh \
CrossCovarianceAccumulator.icc \
CVCopulaSmoother.hh \
CVCopulaSmoother.icc \
DensityScanND.hh \
DiscreteDistribution1DReader.hh \
DiscreteDistributions1D.hh \
Distribution1DFactory.hh \
Distribution1DReader.hh \
DistributionNDReader.hh \
Distributions1D.hh \
Distributions1D.icc \
DistributionsND.hh \
DistributionsND.icc \
distributionReadError.hh \
DualHistoAxis.hh \
empiricalCopula.hh \
empiricalCopulaHisto.hh \
empiricalCopulaHisto.icc \
empiricalCopula.icc \
Filter1DBuilders.hh \
FitUtils.hh \
FitUtils.icc \
GaussianMixture1D.hh \
griddedRobustRegression.hh \
griddedRobustRegression.icc \
GriddedRobustRegressionStop.hh \
GridInterpolatedDistribution.hh \
GridInterpolatedDistribution.icc \
GridRandomizer.hh \
HistoAxis.hh \
HistoND.hh \
HistoND.icc \
HistoNDCdf.hh \
HistoNDFunctorInstances.hh \
histoStats.hh \
histoStats.icc \
InMemoryNtuple.hh \
InMemoryNtuple.icc \
InterpolatedDistribution1D.hh \
interpolateHistoND.hh \
interpolateHistoND.icc \
InterpolationFunctorInstances.hh \
JohnsonCurves.hh \
JohnsonKDESmoother.hh \
KDECopulaSmoother.hh \
KDECopulaSmoother.icc \
KDEFilterND.hh \
KDEFilterND.icc \
kendallsTau.hh \
kendallsTau.icc \
LeftCensoredDistribution.hh \
LocalLogisticRegression.hh \
LocalLogisticRegression.icc \
LocalPolyFilter1D.hh \
LocalPolyFilter1D.icc \
LocalPolyFilterND.hh \
LocalPolyFilterND.icc \
LocalQuadraticLeastSquaresND.hh \
LocalQuadraticLeastSquaresND.icc \
LocalQuantileRegression.hh \
LocalQuantileRegression.icc \
logLikelihoodPeak.hh \
LOrPECopulaSmoother.hh \
LOrPECopulaSmoother.icc \
LOrPEMarginalSmoother.hh \
lorpeSmooth1D.hh \
lorpeSmooth1D.icc \
mergeTwoHistos.hh \
mergeTwoHistos.icc \
mirrorWeight.hh \
MultivariateSumAccumulator.hh \
MultivariateSumsqAccumulator.hh \
MultivariateSumsqAccumulator.icc \
MultivariateWeightedSumAccumulator.hh \
MultivariateWeightedSumsqAccumulator.hh \
MultivariateWeightedSumsqAccumulator.icc \
neymanPearsonWindow1D.hh \
NMCombinationSequencer.hh \
NonparametricCompositeBuilder.hh \
NonparametricCompositeBuilder.icc \
NtHistoFill.hh \
NtNtupleFill.hh \
NtRectangularCut.hh \
NtRectangularCut.icc \
NtupleBuffer.hh \
NtupleBuffer.icc \
NtupleRecordTypes.hh \
NtupleRecordTypesFwd.hh \
NtupleReference.hh \
NUHistoAxis.hh \
OrderedPointND.hh \
OrderedPointND.icc \
PolyFilterCollection1D.hh \
PolyFilterCollection1D.icc \
ProductSymmetricBetaNDCdf.hh \
QuadraticOrthoPolyND.hh \
QuadraticOrthoPolyND.icc \
QuantileRegression1D.hh \
QuantileRegression1D.icc \
QuantileTable1D.hh \
RightCensoredDistribution.hh \
SampleAccumulator.hh \
SampleAccumulator.icc \
SbMomentsCalculator.hh \
ScalableGaussND.hh \
scanDensityAsWeight.hh \
SequentialCopulaSmoother.hh \
SequentialPolyFilterND.hh \
SequentialPolyFilterND.icc \
spearmansRho.hh \
spearmansRho.icc \
StatAccumulator.hh \
StatAccumulatorArr.hh \
StatAccumulatorPair.hh \
StatUtils.hh \
StatUtils.icc \
StorableHistoNDFunctor.hh \
StorableHistoNDFunctor.icc \
StorableInterpolationFunctor.hh \
StorableInterpolationFunctor.icc \
StorableMultivariateFunctor.hh \
StorableMultivariateFunctorReader.hh \
TruncatedDistribution1D.hh \
TwoPointsLTSLoss.hh \
TwoPointsLTSLoss.icc \
UnitMapInterpolationND.hh \
variableBandwidthSmooth1D.hh \
variableBandwidthSmooth1D.icc \
WeightedLTSLoss.hh \
WeightedLTSLoss.icc \
WeightedSampleAccumulator.hh \
WeightedSampleAccumulator.icc \
WeightedStatAccumulator.hh \
WeightedStatAccumulatorPair.hh
EXTRA_DIST = 00README.txt npstat_doxy.hh
Index: trunk/npstat/stat/betaKernelsBandwidth.icc
===================================================================
--- trunk/npstat/stat/betaKernelsBandwidth.icc (revision 0)
+++ trunk/npstat/stat/betaKernelsBandwidth.icc (revision 102)
@@ -0,0 +1,90 @@
+#include <cmath>
+#include <cassert>
+#include <stdexcept>
+#include <cfloat>
+
+#include "npstat/nm/LinearMapper1d.hh"
+#include "npstat/nm/definiteIntegrals.hh"
+
+namespace npstat {
+ template<typename Real>
+ double betaKernelsBandwidth(
+ const double npoints, const Real* fvalues, const unsigned long nValues,
+ const bool returnB2Star, double* expectedAmise)
+ {
+ assert(fvalues);
+ if (npoints <= 0.0) throw std::invalid_argument(
+ "In npstat::betaKernelsBandwidth: "
+ "number of data points must be positive");
+ if (nValues < 3UL) throw std::invalid_argument(
+ "In npstat::betaKernelsBandwidth: "
+ "insufficient number of function scan points");
+
+ const double binwidth = 1.0/nValues;
+ const double bwsquared = binwidth*binwidth;
+ const unsigned long nVMinus1 = nValues - 1UL;
+
+ long double firstterm = 0.0L;
+ {
+ const LinearMapper1d m(0.5*binwidth, fvalues[0],
+ 1.5*binwidth, fvalues[1]);
+ firstterm += definiteIntegral_1(m.a(), m.b(), 0.0, 0.5*binwidth);
+ }
+ for (unsigned long i=0; i<nVMinus1; ++i)
+ {
+ const double xmin = (i + 0.5)*binwidth;
+ const double xmax = xmin + binwidth;
+ const LinearMapper1d m(xmin, fvalues[i], xmax, fvalues[i+1UL]);
+ firstterm += definiteIntegral_1(m.a(), m.b(), xmin, xmax);
+ }
+ {
+ const LinearMapper1d m(1.0 - 1.5*binwidth, fvalues[nValues-2UL],
+ 1.0 - 0.5*binwidth, fvalues[nValues-1UL]);
+ firstterm += definiteIntegral_1(m.a(), m.b(), 1.0-0.5*binwidth, 1.0);
+ }
+ firstterm /= (2.0*sqrt(M_PI));
+
+ long double secondterm = 0.0L;
+ for (unsigned long i=0; i<nValues; ++i)
+ {
+ const double x = (i + 0.5)*binwidth;
+
+ double deri1 = 0.0;
+ if (i == 0UL)
+ deri1 = (fvalues[1] - fvalues[0])/binwidth;
+ else if (i == nVMinus1)
+ deri1 = (fvalues[i] - fvalues[i - 1UL])/binwidth;
+ else
+ deri1 = (fvalues[i + 1UL] - fvalues[i - 1UL])/2.0/binwidth;
+
+ unsigned long ic = i;
+ if (ic == 0UL)
+ ++ic;
+ else if (ic == nVMinus1)
+ --ic;
+ const double deri2 = ((fvalues[ic+1]-fvalues[ic]) +
+ (fvalues[ic-1]-fvalues[ic]))/bwsquared;
+
+ double tmp = 0.5*x*(1.0 - x)*deri2;
+ if (!returnB2Star)
+ tmp += (1.0 - 2.0*x)*deri1;
+ secondterm += tmp*tmp;
+ }
+ secondterm *= binwidth;
+
+ double bstar = DBL_MAX;
+ if (secondterm)
+ bstar = pow(firstterm/secondterm/4.0/npoints, 0.4);
+
+ if (expectedAmise)
+ {
+ if (secondterm)
+ *expectedAmise = bstar*bstar*secondterm +
+ firstterm/npoints/sqrt(bstar);
+ else
+ *expectedAmise = 0.0;
+ }
+
+ return bstar;
+ }
+}
Index: trunk/NEWS
===================================================================
--- trunk/NEWS (revision 101)
+++ trunk/NEWS (revision 102)
@@ -1,202 +1,202 @@
Version 2.2.0 - development
* Added classes DiscreteBernsteinPoly1D and BernsteinFilter1DBuilder.
* Added classes DiscreteBeta1D and BetaFilter1DBuilder.
* Added BifurcatedGauss1D class to model Gaussian-like distributions with
different sigmas on the left and right sides.
* Added virtual destructors to the classes declared in the Filter1DBuilders.hh
header.
* Added a method to the Matrix template to calculate Frobenius norm.
* Added methods to the Matrix template to calculate row and column sums.
* Added "directSum" method to the Matrix template.
* Added constructor from a subrange of another matrix to the Matrix template.
* Added code to the LocalPolyFilter1D class that generates a doubly
stochastic filter out of an arbitrary filter.
* Added "npstat/nm/definiteIntegrals.hh" header and corresponding .cc file
for various infrequently used integrals.
-* Added "bernsteinOptimalBandwidth" function.
+* Added "betaKernelsBandwidth" function.
Version 2.1.0 - June 20 2013, by I. Volobouev
* Fixed a couple of problems that showed up in the robust regression code
due to a compiler update.
* Fixed CensoredQuantileRegressionOnKDTree::process method (needed this->
dereference for some member).
Version 2.0.0 - June 15 2013, by I. Volobouev
* Updated to use "Geners" version 1.3.0. A few interfaces were changed
(API for the string archives was removed because Geners own string archive
facilities are now adequate) so the major version number was bumped up.
Version 1.6.0 - June 12 2013, by I. Volobouev
* Updated some documentation.
* Updated fitCompositeJohnson.icc to use simplified histogram constructors.
* Bug fix in the "minuitLocalQuantileRegression1D" function.
* Changed the "quantileBinFromCdf" function to use unsigned long type for
array indices.
* Added "weightedLocalQuantileRegression1D" function (namespace npsi) for
local regression with single predictor on weighted points.
Version 1.5.0 - May 23 2013, by I. Volobouev
* Added interfaces to LAPACK routines DSYEVD, DSYEVR, and corresponding
single precision versions.
* Added the "symPSDefEffectiveRank" method to the Matrix class for
calculating effective ranks of symmetric positive semidefinite matrices.
* Added converting constructor and assignment operator to the Matrix class.
* Run the Gram-Schmidt procedure twice when orthogonal polynomials are
derived in order to improve orthogonality.
Version 1.4.0 - May 20 2013, by I. Volobouev
* Added the "append" method to the AbsNtuple class.
Version 1.3.0 - May 10 2013, by I. Volobouev
* Added the code for Hermite polynomial series.
* Improved random number generator for the 1-d Gaussian distribution.
* Added a framework for discrete 1-d distributions as well as two
concrete distribution classes (Poisson1D, DiscreteTabulated1D).
* Added functions "readCompressedStringArchiveExt" and
"writeCompressedStringArchiveExt" which can read/write either compressed
or uncompressed string archives, distinguished by file extension.
Version 1.2.1 - March 22 2013, by I. Volobouev
* Improved CmdLine.hh in the "examples/C++" directory.
* Added class QuantileTable1D.
* Added classes LeftCensoredDistribution and RightCensoredDistribution.
Version 1.2.0 - March 13 2013, by I. Volobouev
* Added convenience "fill" methods to work with ntuples that have a
  small number of columns (up to 10).
* Fixed a bug in AbsRandomGenerator for univariate generators making
multivariate points.
* Added LOrPEMarginalSmoother class.
Version 1.1.1 - March 11 2013, by I. Volobouev
* Added utility function "symbetaLOrPEFilter1D" which creates 1-d LOrPE
filters using kernels from the symmetric beta family (and the Gaussian).
* Added high level driver function "lorpeSmooth1D".
* Allowed variables with zero variances for calculation of correlation
coefficients in "MultivariateSumsqAccumulator". Such variables will
have zero correlation coefficients with all other variables.
* Added rebinning constructor to the HistoND class.
Version 1.1.0 - March 8 2013, by I. Volobouev
* Changed NUHistoAxis::fltBinNumber method to produce correct results
with interpolation degree 0. It is not yet obvious which method would
work best for higher interpolation degrees.
* Added functions for converting between StringArchive and python bytearray.
They have been placed in a new header: wrap/stringArchiveToBinary.hh.
* Added methods "exportMemSlice" and "importMemSlice" to ArrayND. These
methods allow for filling array slices from unstructured memory buffers
and for exporting array slices to such memory buffers.
* Added "simpleColumnNames" function (header file AbsNtuple.hh) to generate
trivial column names when ntuple column names are not important.
* Added functions "neymanPearsonWindow1D" and "signalToBgMaximum1D".
They are declared in a new header npstat/neymanPearsonWindow1D.hh.
Version 1.0.5 - December 17 2012, by I. Volobouev
* Flush string archives before writing them out in stringArchiveIO.cc.
* Added class TruncatedDistribution1D.
Version 1.0.4 - November 14 2012, by I. Volobouev
* Added utilities for reading/writing Geners string archives to files.
* Added BinSummary class.
* Doxygen documentation improved. Every header file in stat, nm, rng,
and interfaces now has a brief description.
Version 1.0.3 - September 27 2012, by I. Volobouev
* Fixed some bugs related to moving StorableMultivariateFunctor code
from "nm" to "stat".
Version 1.0.2 - August 6 2012, by I. Volobouev
* Added converting copy constructor to the "LinInterpolatedTableND" class.
* Added StorableMultivariateFunctor class (together with the corresponding
reader class).
* Added StorableInterpolationFunctor class which inherits from the above
and can be used with interpolation tables.
* Added StorableHistoNDFunctor class which inherits from
StorableMultivariateFunctor and can be used to interpolate histogram bins.
* Added "transpose" method to HistoND class.
* Created DualAxis class.
* Created DualHistoAxis class.
* Added conversion functions between histogram and grid axes.
* Added "mergeTwoHistos" function for smooth merging of two histograms.
* Added "ProductSymmetricBetaNDCdf" functor to be used as weight in
merging histograms.
* Added CoordinateSelector class.
Version 1.0.1 - June 29 2012, by I. Volobouev
* Implemented class LinInterpolatedTableND with related supporting code.
