# CORSIKA 8 Framework for Particle Cascades in Astroparticle Physics
The purpose of CORSIKA 8 is to simulate any particle cascade in an
astroparticle physics or astrophysical context. A lot of emphasis has been
put on modularity, flexibility, completeness, validation and
correctness. To boost computational efficiency, different techniques
are provided, like thinning or cascade equations. The aim is that
CORSIKA 8 remains the most comprehensive framework for simulating
particle cascades with stochastic and continuous processes.
The software makes extensive use of static design patterns and
compiler optimization. Thus, the user must make the most fundamental
configuration decisions at compile time; model parameters can still be
changed at run time.
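To illustrate the idea, here is a minimal, hypothetical sketch (it does not
use the actual CORSIKA 8 API): the set of processes is fixed as template
parameters at compile time, while their numerical parameters remain
adjustable at run time.
``` c++
#include <iostream>
#include <tuple>

// Hypothetical processes, for illustration only; the real CORSIKA 8
// process interfaces are different.
struct Ionization {
  double energy_cut = 1.0; // run-time adjustable model parameter
};
struct PairProduction {};

// The list of processes is a template parameter pack, i.e. fixed at
// compile time, so the compiler can inline and optimize the whole chain.
template <typename... TProcesses>
struct ProcessSequence {
  std::tuple<TProcesses...> processes;
};

int main() {
  // the composition is decided at compile time ...
  ProcessSequence<Ionization, PairProduction> sequence{};
  // ... but model parameters can still be changed at run time
  std::get<Ionization>(sequence.processes).energy_cut = 5.0;
  std::cout << std::get<Ionization>(sequence.processes).energy_cut << "\n";
  return 0;
}
```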
CORSIKA 8 is by default released under the BSD 3-Clause License. See the [license
file](https://gitlab.iap.kit.edu/AirShowerPhysics/corsika/blob/master/LICENSE),
which is included in every release and in the source code.
If you use, or want to refer to, CORSIKA 8 please cite ["Towards a Next
Generation of CORSIKA: A Framework for the Simulation of Particle
Cascades in Astroparticle Physics", Comput. Softw. Big Sci. 3 (2019)
2](https://doi.org/10.1007/s41781-018-0013-0) as well as
["Simulating radio emission from particle cascades with CORSIKA 8", Astropart. Phys. 166 (2025)
103072](https://doi.org/10.1016/j.astropartphys.2024.103072).
We kindly ask (and require) any relevant improvement or addition to be offered or
contributed to the main CORSIKA 8 repository for the benefit of the
whole community.
CORSIKA 8 makes use of various third-party code, in particular interaction
models. Please check the [using and collaborating
agreement](https://gitlab.iap.kit.edu/AirShowerPhysics/corsika/blob/master/USING_COLLABORATING.md)
for further information on this topic.
If you plan to contribute to CORSIKA 8, please check the guidelines outlined here:
[coding
guidelines](https://gitlab.iap.kit.edu/AirShowerPhysics/corsika/blob/master/CONTRIBUTING.md). Code
that fails the review by the CORSIKA 8 author group must be improved
before it can be merged into the official code base. After your code has
been accepted and merged, you become a contributor of the CORSIKA 8
project (code author).
IMPORTANT: Before you contribute, you need to read and agree to the conditions set out in the
[using and collaborating
agreement](https://gitlab.iap.kit.edu/AirShowerPhysics/corsika/blob/master/USING_COLLABORATING.md).
The agreement can be discussed and, if necessary, improved.
## Get in contact
* Join our chat threads using Mattermost via this [invite link](https://mattermost.hzdr.de/signup_user_complete/?id=xtdd8jyt6trbiezt71gaz3z4ge&md=link&sbr=su). Click the `GitLab` button, then `Sign in with Helmholtz ID`. You will be able to make an account by either finding your institution, or using your e.g. ORCID, GitHub, or Google account.
* Connect to https://gitlab.iap.kit.edu, register yourself and join the "Air Shower Physics" group. Write to us on Mattermost (in the User Questions channel), or directly contact one of the [steering committee members](https://gitlab.iap.kit.edu/AirShowerPhysics/corsika/-/wikis/Steering-Committee) in case there are problems with that.
* Subscribe to corsika-devel@lists.kit.edu (self-register at
https://www.lists.kit.edu/sympa/subscribe/corsika-devel) to get in
touch with the project.
## Installation
CORSIKA 8 is tested regularly at least on `gcc11.0.0` and `clang-14.0.0`.
### Prerequisites
You will also need:
- Python 3 (supported versions are Python >= 3.6), with pip
- cmake > 3.4
- git
- g++, gfortran, binutils, make
- optional: FLUKA (see below)
On a bare Ubuntu machine, just add:
``` shell
sudo apt-get install python3 python3-pip cmake g++ gfortran git doxygen graphviz
```
### Creating a virtual environment and Conan
It is recommended that you install CORSIKA 8 and its dependencies within a python3 virtual environment.
To do so, you can run the following.
``` shell
# Create the environment using your native python3 binary
python3 -m venv /path/to/new/virtual/environment/corsika-8
# Load the environment (should be run each time you open a new terminal)
source /path/to/new/virtual/environment/corsika-8/bin/activate
```
You will need to load the environment each time that you open a new terminal.
CORSIKA 8 uses the [conan](https://conan.io/) package manager to
manage its dependencies. Currently, version 2.50.0 or higher is required.
**Note**: if you are NOT using a virtual environment, you may want to use the `pip install --user` flag.
``` shell
pip install conan particle==0.25.1 numpy
```
### Enabling FLUKA support
For legal reasons we do not distribute/bundle FLUKA together with CORSIKA 8.
As FLUKA is the standard low-energy hadronic interaction model for CORSIKA 8, you have to download
it separately from http://www.fluka.org/, which requires registering there as a FLUKA user.
The following should be done *before* compiling CORSIKA 8:
1. Note your system's version of gfortran (`gfortran --version`) and glibc (`ldd --version`)
2. Download the FLUKA __binary__, ensuring that it matches the versions you found above
3. Download the FLUKA __data file__ (will be named something similar to __fluka20xy.z-data.tar.gz__).
4. Un-tar the files that you downloaded using `tar -xf <filename>`
5. Set environment variables `export FLUFOR=gfortran` and `export FLUPRO=<path to where you unzipped the files>`. Note that the `FLUPRO` directory should contain __libflukahp.a__. Both of these variables will have to be set every time you open a new terminal.
6. Go to the `FLUPRO` directory and run `make`. This will compile an executable, __flukahp__, in your current directory.
7. Follow the normal steps to compile CORSIKA 8 (see below).
When you later install CORSIKA 8, you should see a message during the __cmake__ step indicating that FLUKA was correctly found.
``` shell
libflukahp.a found in directory <some location here> via FLUPRO environment variable
FLUKA support is enabled.
```
### Compiling CORSIKA 8
Once Conan is installed and FLUKA provided, follow these steps to download and install CORSIKA 8:
``` shell
cd ./top/directory/for/corsika/installation
git clone --recursive git@gitlab.iap.kit.edu:AirShowerPhysics/corsika.git
# Or for https: git clone --recursive https://gitlab.iap.kit.edu/AirShowerPhysics/corsika.git
mkdir corsika-build
cd corsika-build
../corsika/conan-install.sh --source-directory ../corsika --release-with-debug
# conan-install.sh takes required options from command line to install dependencies for 'Debug', 'Release' and 'RelWithDebInfo' builds.
../corsika/corsika-cmake.sh -c "-DCMAKE_BUILD_TYPE="RelWithDebInfo" -DWITH_FLUKA=ON -DCMAKE_INSTALL_PREFIX=../corsika-install"
make -j4 #The number should match the number of available cores on your machine
make install
```
## Alternate installation using docker containers
Prepared docker containers are available that provide the complete environment and all packages needed to run CORSIKA. See [docker hub](https://hub.docker.com/repository/docker/corsika/devel) for a complete overview.
### Prerequisites
You only need docker (e.g. on Ubuntu: `sudo apt-get install docker`) and, of course, root access.
### Compiling
Follow these steps to download and install the CORSIKA 8 master development version:
```shell
cd ./top/directory/for/corsika/installation
git clone --recursive git@gitlab.iap.kit.edu:AirShowerPhysics/corsika.git
sudo docker run -v $PWD:/corsika -it corsika/devel:clang-8 /bin/bash
mkdir corsika-build
cd corsika-build
../corsika/conan-install.sh --source-directory ../corsika --release-with-debug
# conan-install.sh takes required options from command line to install dependencies for 'Debug', 'Release' and 'RelWithDebInfo' builds.
../corsika/corsika-cmake.sh -c "-DCMAKE_BUILD_TYPE="RelWithDebInfo" -DWITH_FLUKA=ON -DCMAKE_INSTALL_PREFIX=../corsika-install"
make -j4 #The number should match the number of available cores on your machine
make install
```
## Running Unit Tests
To run the unit tests, do the following.
```shell
cd ./corsika-build
ctest -j4 #The number should match the number of available cores on your machine
```
## Running applications and examples
### Standard applications
Applications for standard use-cases are located in the `applications` directory.
These are example scripts that can be used directly or slightly modified for your use case.
See [applications/README.md](applications/README.md) for more.
The applications are compiled automatically after running `make` and will appear in your `corsika-build/bin` directory.
After running `make install`, the binaries are also copied into your `corsika-install/bin` directory.
For example, from inside your `corsika-install/bin` directory, run
```shell
c8_air_shower --pdg 2212 -E 1e5 -f my_shower
```
This will run a vertical 100 TeV proton shower and write the output into `./my_shower`.
### Building the examples
Unlike the applications, the examples must be compiled as a second step.
From your top corsika directory (the one that contains `corsika-build` and `corsika-install`), run
```shell
export CONAN_DEPENDENCIES=$PWD/corsika-install/lib/cmake/dependencies
cmake -DCMAKE_TOOLCHAIN_FILE=${CONAN_DEPENDENCIES}/conan_toolchain.cmake -DCMAKE_PREFIX_PATH=${CONAN_DEPENDENCIES} -DCMAKE_POLICY_DEFAULT_CMP0091=NEW -DCMAKE_BUILD_TYPE=RelWithDebInfo -Dcorsika_DIR=$PWD/corsika-build -DWITH_FLUKA=ON -S $PWD/corsika/examples -B $PWD/corsika-build-examples
cd corsika-build-examples
make -j4 #The number should match the number of available cores on your machine
```
You can run the examples by using the binaries in `corsika-build-examples/bin/`.
For example:
```shell
corsika-build-examples/bin/known_particles
```
This will print out all of the particles that are known by CORSIKA.
### Generating doxygen documentation
To generate the documentation, you need doxygen and graphviz. If you work with
the docker corsika/devel containers, these are already included.
Otherwise, e.g. on Ubuntu machines, do:
```shell
sudo apt-get install doxygen graphviz
```
Switch to the `corsika-build` directory and do
```shell
make docs
make install
```
Open the result with, e.g., firefox:
```shell
firefox ../corsika-install/share/corsika/doc/html/index.html
```
add_library (CORSIKAthirdparty INTERFACE)
target_include_directories (CORSIKAthirdparty SYSTEM
INTERFACE
$<BUILD_INTERFACE:${PROJECT_SOURCE_DIR}/ThirdParty>
$<INSTALL_INTERFACE:include/ThirdParty>
)
install (DIRECTORY phys DESTINATION include/ThirdParty/)
install (DIRECTORY catch2 DESTINATION include/ThirdParty/)
/**
@page ThirdParty
@tableofcontents
In the ThirdParty directory we provide a few small dependencies. This
minimizes the additional software the user has to install. Note
the individual copyrights and licences here!
@section PhysUnits
The PhysUnits library is an external dependency included here just for
convenience:
Original source code from: https://github.com/martinmoene/PhysUnits-CT-Cpp11#references
Licence: BSL-1.0 (https://github.com/martinmoene/PhysUnits-CT-Cpp11/blob/master/LICENSE_1_0.txt)
References: https://github.com/martinmoene/PhysUnits-CT-Cpp11#references
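A minimal usage sketch (illustrative only; the include path is assumed from
the layout of this ThirdParty directory):
@code
#include "phys/units/quantity.hpp"
#include <iostream>
int main() {
  using namespace phys::units;
  using namespace phys::units::literals;
  auto const distance = 3.5_km;            // quantity<length_d>
  auto const duration = 2.0_s;             // quantity<time_interval_d>
  auto const speed = distance / duration;  // dimensions combine automatically
  // convert to m/s; .to() returns the plain numerical magnitude
  std::cout << speed.to(meter / second) << " m/s\n";
  // dimensionally inconsistent expressions do not compile:
  // auto nonsense = distance + duration;
}
@endcode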
@section catch2
The catch2 unit testing library is from: https://github.com/catchorg/Catch2
Licence: BSL-1.0 (see the LICENSE file in the Catch2 repository)
References: https://github.com/catchorg/Catch2
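A minimal test sketch (illustrative only; the single-header include path and
the use of CATCH_CONFIG_MAIN are assumptions based on the bundled layout):
@code
// in exactly one translation unit, let Catch2 generate main():
#define CATCH_CONFIG_MAIN
#include "catch2/catch.hpp"
TEST_CASE("basic sanity check") {
  REQUIRE(1 + 1 == 2);
}
@endcode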
*/
/**
* \file quantity.hpp
*
* \brief Zero-overhead dimensional analysis and unit/quantity manipulation and conversion.
* \author Michael S. Kenniston, Martin Moene
* \date 7 September 2013
* \since 0.4
*
* Copyright 2013 Universiteit Leiden. All rights reserved.
*
* Copyright (c) 2001 by Michael S. Kenniston. For the most
* recent version check www.xnet.com/~msk/quantity. Permission is granted
* to use this code without restriction so long as this copyright
* notice appears in all source files.
*
* This code is provided as-is, with no warrantee of correctness.
*
* Distributed under the Boost Software License, Version 1.0. (See accompanying
* file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
*/
/*
* Unless otherwise specified, the definitions of all units in this
* file are from NIST Special Publication 811, found online at
* http://physics.nist.gov/Document/sp811.pdf
* Other sources: OED = Oxford English Dictionary
*/
#ifndef PHYS_UNITS_QUANTITY_HPP_INCLUDED
#define PHYS_UNITS_QUANTITY_HPP_INCLUDED
#include <cmath>
#include <cstdlib>
#include <utility> // std::declval
/// namespace phys.
namespace phys {
/// namespace units.
namespace units {
#ifdef PHYS_UNITS_REP_TYPE
using Rep = PHYS_UNITS_REP_TYPE;
#else
using Rep = double;
#endif
/*
* declare now, define later.
*/
template< typename Dims, typename T = Rep >
class quantity;
/**
* We could drag dimensions around individually, but it's much more convenient to package them.
*/
template< int D1, int D2, int D3, int D4 = 0, int D5 = 0, int D6 = 0, int D7 = 0 >
struct dimensions
{
enum
{
dim1 = D1,
dim2 = D2,
dim3 = D3,
dim4 = D4,
dim5 = D5,
dim6 = D6,
dim7 = D7,
is_all_zero =
D1 == 0 && D2 == 0 && D3 == 0 && D4 == 0 && D5 == 0 && D6 == 0 && D7 == 0,
is_base =
1 == (D1 != 0) + (D2 != 0) + (D3 != 0) + (D4 != 0) + (D5 != 0) + (D6 != 0) + (D7 != 0) &&
1 == D1 + D2 + D3 + D4 + D5 + D6 + D7,
};
template< int R1, int R2, int R3, int R4, int R5, int R6, int R7 >
constexpr bool operator==( dimensions<R1, R2, R3, R4, R5, R6, R7> const & ) const
{
return D1==R1 && D2==R2 && D3==R3 && D4==R4 && D5==R5 && D6==R6 && D7==R7;
}
template< int R1, int R2, int R3, int R4, int R5, int R6, int R7 >
constexpr bool operator!=( dimensions<R1, R2, R3, R4, R5, R6, R7> const & rhs ) const
{
return !( *this == rhs );
}
};
/// dimensionless 'dimension'.
typedef dimensions< 0, 0, 0 > dimensionless_d;
/// namespace detail.
namespace detail {
/**
* \brief The "collapse" template is used to avoid quantity< dimensions< 0, 0, 0 > >,
* i.e. to make dimensionless results come out as type "Rep".
*/
template< typename D, typename T >
struct collapse
{
typedef quantity< D, T > type;
};
template< typename T >
struct collapse< dimensionless_d, T >
{
typedef T type;
};
template< typename D, typename T >
using Collapse = typename collapse<D,T>::type;
// promote types of expression to result type.
template < typename X, typename Y >
using PromoteAdd = decltype( std::declval<X>() + std::declval<Y>() );
template < typename X, typename Y >
using PromoteMul = decltype( std::declval<X>() * std::declval<Y>() );
/*
* The following batch of structs are type generators to calculate
* the correct type of the result of various operations.
*/
/**
* product type generator.
*/
template< typename DX, typename DY, typename T >
struct product
{
enum
{
d1 = DX::dim1 + DY::dim1,
d2 = DX::dim2 + DY::dim2,
d3 = DX::dim3 + DY::dim3,
d4 = DX::dim4 + DY::dim4,
d5 = DX::dim5 + DY::dim5,
d6 = DX::dim6 + DY::dim6,
d7 = DX::dim7 + DY::dim7,
};
typedef Collapse< dimensions< d1, d2, d3, d4, d5, d6, d7 >, T > type;
};
template< typename DX, typename DY, typename X, typename Y>
using Product = typename product<DX, DY, PromoteMul<X,Y>>::type;
/**
* quotient type generator.
*/
template< typename DX, typename DY, typename T >
struct quotient
{
enum
{
d1 = DX::dim1 - DY::dim1,
d2 = DX::dim2 - DY::dim2,
d3 = DX::dim3 - DY::dim3,
d4 = DX::dim4 - DY::dim4,
d5 = DX::dim5 - DY::dim5,
d6 = DX::dim6 - DY::dim6,
d7 = DX::dim7 - DY::dim7,
};
typedef Collapse< dimensions< d1, d2, d3, d4, d5, d6, d7 >, T > type;
};
template< typename DX, typename DY, typename X, typename Y>
using Quotient = typename quotient<DX, DY, PromoteMul<X,Y>>::type;
/**
* reciprocal type generator.
*/
template< typename D, typename T >
struct reciprocal
{
enum
{
d1 = - D::dim1,
d2 = - D::dim2,
d3 = - D::dim3,
d4 = - D::dim4,
d5 = - D::dim5,
d6 = - D::dim6,
d7 = - D::dim7,
};
typedef Collapse< dimensions< d1, d2, d3, d4, d5, d6, d7 >, T > type;
};
template< typename D, typename X, typename Y>
using Reciprocal = typename reciprocal<D, PromoteMul<X,Y>>::type;
/**
* power type generator.
*/
template< typename D, int N, typename T >
struct power
{
enum
{
d1 = N * D::dim1,
d2 = N * D::dim2,
d3 = N * D::dim3,
d4 = N * D::dim4,
d5 = N * D::dim5,
d6 = N * D::dim6,
d7 = N * D::dim7,
};
typedef Collapse< dimensions< d1, d2, d3, d4, d5, d6, d7 >, T > type;
};
template< typename D, int N, typename T >
using Power = typename detail::power< D, N, T >::type;
/**
* root type generator.
*/
template< typename D, int N, typename T >
struct root
{
enum
{
all_even_multiples =
D::dim1 % N == 0 &&
D::dim2 % N == 0 &&
D::dim3 % N == 0 &&
D::dim4 % N == 0 &&
D::dim5 % N == 0 &&
D::dim6 % N == 0 &&
D::dim7 % N == 0
};
enum
{
d1 = D::dim1 / N,
d2 = D::dim2 / N,
d3 = D::dim3 / N,
d4 = D::dim4 / N,
d5 = D::dim5 / N,
d6 = D::dim6 / N,
d7 = D::dim7 / N
};
typedef Collapse< dimensions< d1, d2, d3, d4, d5, d6, d7 >, T > type;
};
template< typename D, int N, typename T >
using Root = typename detail::root< D, N, T >::type;
/**
* tag to construct a quantity from a magnitude.
*/
constexpr struct magnitude_tag_t{} magnitude_tag{};
} // namespace detail
/**
* \brief class "quantity" is the heart of the library. It associates
* dimensions with a single "Rep" data member and protects it from
* dimensionally inconsistent use.
*/
template< typename Dims, typename T /*= Rep */ >
class quantity
{
public:
typedef Dims dimension_type;
typedef T value_type;
typedef quantity<Dims, T> this_type;
constexpr quantity() : m_value{} { }
/**
* public converting initializing constructor;
* requires magnitude_tag to prevent constructing a quantity from a raw magnitude.
*/
template <typename X>
constexpr explicit quantity( detail::magnitude_tag_t, X x )
: m_value( x ) { }
/**
* converting copy-assignment constructor.
*/
template <typename X >
constexpr quantity( quantity<Dims, X> const & x )
: m_value( x.magnitude() ) { }
// /**
// * convert to compatible unit, for example: (3._dm).to(meter) gives 0.3;
// */
// constexpr value_type to( quantity const & x ) const { return *this / x; }
/**
* convert to given unit, for example: (3._dm).to(meter) gives 0.3;
*/
template <typename DX, typename X>
constexpr auto to( quantity<DX,X> const & x ) const -> detail::Quotient<Dims,DX,T,X>
{
return *this / x;
}
/**
* the quantity's magnitude.
*/
constexpr value_type magnitude() const { return m_value; }
/**
* the quantity's dimensions.
*/
constexpr dimension_type dimension() const { return dimension_type{}; }
/**
* We need a "zero" of each type -- for comparisons, to initialize running
* totals, etc. Note: 0 m != 0 kg, since they are of different dimensionality.
* zero is really just defined for convenience, since
* quantity< length_d >::zero == 0 * meter, etc.
*/
static constexpr quantity zero() { return quantity{ value_type( 0.0 ) }; }
// static constexpr quantity zero = quantity{ value_type( 0.0 ) };
private:
/**
* private initializing constructor.
*/
constexpr explicit quantity( value_type x ) : m_value{ x } { }
private:
value_type m_value;
enum { has_dimension = ! Dims::is_all_zero };
// static_assert( has_dimension, "quantity dimensions must not all be zero" ); // MR: removed
private:
// friends:
// arithmetic
template <typename D, typename X, typename Y>
friend constexpr quantity<D, X> &
operator+=( quantity<D, X> & x, quantity<D, Y> const & y );
template <typename D, typename X>
friend constexpr quantity<D, X>
operator+( quantity<D, X> const & x );
template< typename D, typename X, typename Y >
friend constexpr quantity <D, detail::PromoteAdd<X,Y>>
operator+( quantity<D, X> const & x, quantity<D, Y> const & y );
template <typename D, typename X, typename Y>
friend constexpr quantity<D, X> &
operator-=( quantity<D, X> & x, quantity<D, Y> const & y );
template <typename D, typename X>
friend constexpr quantity<D, X>
operator-( quantity<D, X> const & x );
template< typename D, typename X, typename Y >
friend constexpr quantity <D, detail::PromoteAdd<X,Y>>
operator-( quantity<D, X> const & x, quantity<D, Y> const & y );
template< typename D, typename X, typename Y>
friend constexpr quantity<D, X> &
operator*=( quantity<D, X> & x, const Y & y );
template <typename D, typename X, typename Y>
friend constexpr quantity<D, detail::PromoteMul<X,Y>>
operator*( quantity<D, X> const & x, const Y & y );
template <typename D, typename X, typename Y>
friend constexpr quantity< D, detail::PromoteMul<X,Y> >
operator*( const X & x, quantity<D, Y> const & y );
template <typename DX, typename DY, typename X, typename Y>
friend constexpr detail::Product<DX, DY, X, Y>
operator*( quantity<DX, X> const & lhs, quantity< DY, Y > const & rhs );
template< typename D, typename X, typename Y>
friend constexpr quantity<D, X> &
operator/=( quantity<D, X> & x, const Y & y );
template <typename D, typename X, typename Y>
friend constexpr quantity<D, detail::PromoteMul<X,Y>>
operator/( quantity<D, X> const & x, const Y & y );
template <typename D, typename X, typename Y>
friend constexpr detail::Reciprocal<D, X, Y>
operator/( const X & x, quantity<D, Y> const & y );
template <typename DX, typename DY, typename X, typename Y>
friend constexpr detail::Quotient<DX, DY, X, Y>
operator/( quantity<DX, X> const & x, quantity< DY, Y > const & y );
// absolute value.
template <typename D, typename X>
friend constexpr quantity<D,X>
abs( quantity<D,X> const & x );
// powers and roots
template <int N, typename D, typename X>
friend detail::Power<D, N, X>
nth_power( quantity<D, X> const & x );
template <typename D, typename X>
friend constexpr detail::Power<D, 2, X>
square( quantity<D, X> const & x );
template <typename D, typename X>
friend constexpr detail::Power<D, 3, X>
cube( quantity<D, X> const & x );
template <int N, typename D, typename X>
friend detail::Root<D, N, X>
nth_root( quantity<D, X> const & x );
template <typename D, typename X>
friend detail::Root< D, 2, X >
sqrt( quantity<D, X> const & x );
// comparison
template <typename D, typename X, typename Y>
friend constexpr bool operator==( quantity<D, X> const & x, quantity<D, Y> const & y );
template <typename D, typename X, typename Y>
friend constexpr bool operator!=( quantity<D, X> const & x, quantity<D, Y> const & y );
template <typename D, typename X, typename Y>
friend constexpr bool operator<( quantity<D, X> const & x, quantity<D, Y> const & y );
template <typename D, typename X, typename Y>
friend constexpr bool operator<=( quantity<D, X> const & x, quantity<D, Y> const & y );
template <typename D, typename X, typename Y>
friend constexpr bool operator>( quantity<D, X> const & x, quantity<D, Y> const & y );
template <typename D, typename X, typename Y>
friend constexpr bool operator>=( quantity<D, X> const & x, quantity<D, Y> const & y );
};
// Give names to the seven fundamental dimensions of physical reality.
typedef dimensions< 1, 0, 0, 0, 0, 0, 0 > length_d;
typedef dimensions< 0, 1, 0, 0, 0, 0, 0 > mass_d;
typedef dimensions< 0, 0, 1, 0, 0, 0, 0 > time_interval_d;
typedef dimensions< 0, 0, 0, 1, 0, 0, 0 > electric_current_d;
typedef dimensions< 0, 0, 0, 0, 1, 0, 0 > thermodynamic_temperature_d;
typedef dimensions< 0, 0, 0, 0, 0, 1, 0 > amount_of_substance_d;
typedef dimensions< 0, 0, 0, 0, 0, 0, 1 > luminous_intensity_d;
// Addition operators
/// quan += quan
template <typename D, typename X, typename Y>
constexpr quantity<D, X> &
operator+=( quantity<D, X> & x, quantity<D, Y> const & y )
{
return x.m_value += y.m_value, x;
}
/// + quan
template <typename D, typename X>
constexpr quantity<D, X>
operator+( quantity<D, X> const & x )
{
return quantity<D, X >( +x.m_value );
}
/// quan + quan
template< typename D, typename X, typename Y >
constexpr quantity <D, detail::PromoteAdd<X,Y>>
operator+( quantity<D, X> const & x, quantity<D, Y> const & y )
{
return quantity<D, detail::PromoteAdd<X,Y>>( x.m_value + y.m_value );
}
// Subtraction operators
/// quan -= quan
template <typename D, typename X, typename Y>
constexpr quantity<D, X> &
operator-=( quantity<D, X> & x, quantity<D, Y> const & y )
{
return x.m_value -= y.m_value, x;
}
/// - quan
template <typename D, typename X>
constexpr quantity<D, X>
operator-( quantity<D, X> const & x )
{
return quantity<D, X >( -x.m_value );
}
/// quan - quan
template< typename D, typename X, typename Y >
constexpr quantity <D, detail::PromoteAdd<X,Y>>
operator-( quantity<D, X> const & x, quantity<D, Y> const & y )
{
return quantity<D, detail::PromoteAdd<X,Y>>( x.m_value - y.m_value );
}
// Multiplication operators
/// quan *= num
template< typename D, typename X, typename Y>
constexpr quantity<D, X> &
operator*=( quantity<D, X> & x, const Y & y )
{
return x.m_value *= y, x;
}
/// quan * num
template <typename D, typename X, typename Y>
constexpr quantity<D, detail::PromoteMul<X,Y>>
operator*( quantity<D, X> const & x, const Y & y )
{
return quantity<D, detail::PromoteMul<X,Y>>( x.m_value * y );
}
/// num * quan
template <typename D, typename X, typename Y>
constexpr quantity< D, detail::PromoteMul<X,Y> >
operator*( const X & x, quantity<D, Y> const & y )
{
return quantity<D, detail::PromoteMul<X,Y>>( x * y.m_value );
}
/// quan * quan:
template <typename DX, typename DY, typename X, typename Y>
constexpr detail::Product<DX, DY, X, Y>
operator*( quantity<DX, X> const & lhs, quantity< DY, Y > const & rhs )
{
return detail::Product<DX, DY, X, Y>( lhs.m_value * rhs.m_value );
}
// Division operators
/// quan /= num
template< typename D, typename X, typename Y>
constexpr quantity<D, X> &
operator/=( quantity<D, X> & x, const Y & y )
{
return x.m_value /= y, x;
}
/// quan / num
template <typename D, typename X, typename Y>
constexpr quantity<D, detail::PromoteMul<X,Y>>
operator/( quantity<D, X> const & x, const Y & y )
{
return quantity<D, detail::PromoteMul<X,Y>>( x.m_value / y );
}
/// num / quan
template <typename D, typename X, typename Y>
constexpr detail::Reciprocal<D, X, Y>
operator/( const X & x, quantity<D, Y> const & y )
{
return detail::Reciprocal<D, X, Y>( x / y.m_value );
}
/// quan / quan:
template <typename DX, typename DY, typename X, typename Y>
constexpr detail::Quotient<DX, DY, X, Y>
operator/( quantity<DX, X> const & x, quantity< DY, Y > const & y )
{
return detail::Quotient<DX, DY, X, Y>( x.m_value / y.m_value );
}
/// absolute value.
template <typename D, typename X>
constexpr quantity<D,X> abs( quantity<D,X> const & x )
{
return quantity<D,X>( std::abs( x.m_value ) );
}
// General powers
/// N-th power.
template <int N, typename D, typename X>
detail::Power<D, N, X>
nth_power( quantity<D, X> const & x )
{
return detail::Power<D, N, X>( std::pow( x.m_value, X( N ) ) );
}
// Low powers defined separately for efficiency.
/// square.
template <typename D, typename X>
constexpr detail::Power<D, 2, X>
square( quantity<D, X> const & x )
{
return x * x;
}
/// cube.
template <typename D, typename X>
constexpr detail::Power<D, 3, X>
cube( quantity<D, X> const & x )
{
return x * x * x;
}
// General root
/// n-th root.
template <int N, typename D, typename X>
detail::Root<D, N, X>
nth_root( quantity<D, X> const & x )
{
static_assert( detail::root<D, N, X>::all_even_multiples, "root result dimensions must be integral" );
return detail::Root<D, N, X>( std::pow( x.m_value, X( 1.0 ) / N ) );
}
// Low roots defined separately for convenience.
/// square root.
template <typename D, typename X>
detail::Root< D, 2, X >
sqrt( quantity<D, X> const & x )
{
static_assert(
detail::root<D, 2, X >::all_even_multiples, "root result dimensions must be integral" );
return detail::Root<D, 2, X>( std::pow( x.m_value, X( 1.0 ) / 2 ) );
}
// Comparison operators
/// equality.
template <typename D, typename X, typename Y>
constexpr bool
operator==( quantity<D, X> const & x, quantity<D, Y> const & y )
{
return x.m_value == y.m_value;
}
/// inequality.
template <typename D, typename X, typename Y>
constexpr bool
operator!=( quantity<D, X> const & x, quantity<D, Y> const & y )
{
return x.m_value != y.m_value;
}
/// less-than.
template <typename D, typename X, typename Y>
constexpr bool
operator<( quantity<D, X> const & x, quantity<D, Y> const & y )
{
return x.m_value < y.m_value;
}
/// less-equal.
template <typename D, typename X, typename Y>
constexpr bool
operator<=( quantity<D, X> const & x, quantity<D, Y> const & y )
{
return x.m_value <= y.m_value;
}
/// greater-than.
template <typename D, typename X, typename Y>
constexpr bool
operator>( quantity<D, X> const & x, quantity<D, Y> const & y )
{
return x.m_value > y.m_value;
}
/// greater-equal.
template <typename D, typename X, typename Y>
constexpr bool
operator>=( quantity<D, X> const & x, quantity<D, Y> const & y )
{
return x.m_value >= y.m_value;
}
/// quantity's dimension.
template <typename DX, typename X>
inline constexpr DX dimension( quantity<DX,X> const & q ) { return q.dimension(); }
/// quantity's magnitude.
template <typename DX, typename X>
inline constexpr X magnitude( quantity<DX,X> const & q ) { return q.magnitude(); }
// The seven SI base units. These tie our numbers to the real world.
constexpr quantity<length_d > meter { detail::magnitude_tag, 1.0 };
constexpr quantity<mass_d > kilogram{ detail::magnitude_tag, 1.0 };
constexpr quantity<time_interval_d > second { detail::magnitude_tag, 1.0 };
constexpr quantity<electric_current_d > ampere { detail::magnitude_tag, 1.0 };
constexpr quantity<thermodynamic_temperature_d> kelvin { detail::magnitude_tag, 1.0 };
constexpr quantity<amount_of_substance_d > mole { detail::magnitude_tag, 1.0 };
constexpr quantity<luminous_intensity_d > candela { detail::magnitude_tag, 1.0 };
// The standard SI prefixes.
constexpr long double yotta = 1e+24L;
constexpr long double zetta = 1e+21L;
constexpr long double exa = 1e+18L;
constexpr long double peta = 1e+15L;
constexpr long double tera = 1e+12L;
constexpr long double giga = 1e+9L;
constexpr long double mega = 1e+6L;
constexpr long double kilo = 1e+3L;
constexpr long double hecto = 1e+2L;
constexpr long double deka = 1e+1L;
constexpr long double deci = 1e-1L;
constexpr long double centi = 1e-2L;
constexpr long double milli = 1e-3L;
constexpr long double micro = 1e-6L;
constexpr long double nano = 1e-9L;
constexpr long double pico = 1e-12L;
constexpr long double femto = 1e-15L;
constexpr long double atto = 1e-18L;
constexpr long double zepto = 1e-21L;
constexpr long double yocto = 1e-24L;
// Binary prefixes, pending adoption.
constexpr long double kibi = 1024;
constexpr long double mebi = 1024 * kibi;
constexpr long double gibi = 1024 * mebi;
constexpr long double tebi = 1024 * gibi;
constexpr long double pebi = 1024 * tebi;
constexpr long double exbi = 1024 * pebi;
constexpr long double zebi = 1024 * exbi;
constexpr long double yobi = 1024 * zebi;
// The rest of the standard dimensional types, as specified in SP811.
using absorbed_dose_d = dimensions< 2, 0, -2 >;
using absorbed_dose_rate_d = dimensions< 2, 0, -3 >;
using acceleration_d = dimensions< 1, 0, -2 >;
using activity_of_a_nuclide_d = dimensions< 0, 0, -1 >;
using angular_velocity_d = dimensions< 0, 0, -1 >;
using angular_acceleration_d = dimensions< 0, 0, -2 >;
using area_d = dimensions< 2, 0, 0 >;
using capacitance_d = dimensions< -2, -1, 4, 2 >;
using concentration_d = dimensions< -3, 0, 0, 0, 0, 1 >;
using current_density_d = dimensions< -2, 0, 0, 1 >;
using dose_equivalent_d = dimensions< 2, 0, -2 >;
using dynamic_viscosity_d = dimensions< -1, 1, -1 >;
using electric_charge_d = dimensions< 0, 0, 1, 1 >;
using electric_charge_density_d = dimensions< -3, 0, 1, 1 >;
using electric_conductance_d = dimensions< -2, -1, 3, 2 >;
using electric_field_strenth_d = dimensions< 1, 1, -3, -1 >;
using electric_flux_density_d = dimensions< -2, 0, 1, 1 >;
using electric_potential_d = dimensions< 2, 1, -3, -1 >;
using electric_resistance_d = dimensions< 2, 1, -3, -2 >;
using energy_d = dimensions< 2, 1, -2 >;
using energy_density_d = dimensions< -1, 1, -2 >;
using exposure_d = dimensions< 0, -1, 1, 1 >;
using force_d = dimensions< 1, 1, -2 >;
using frequency_d = dimensions< 0, 0, -1 >;
using heat_capacity_d = dimensions< 2, 1, -2, 0, -1 >;
using heat_density_d = dimensions< 0, 1, -2 >;
using heat_density_flow_rate_d = dimensions< 0, 1, -3 >;
using heat_flow_rate_d = dimensions< 2, 1, -3 >;
using heat_flux_density_d = dimensions< 0, 1, -3 >;
using heat_transfer_coefficient_d = dimensions< 0, 1, -3, 0, -1 >;
using illuminance_d = dimensions< -2, 0, 0, 0, 0, 0, 1 >;
using inductance_d = dimensions< 2, 1, -2, -2 >;
using irradiance_d = dimensions< 0, 1, -3 >;
using kinematic_viscosity_d = dimensions< 2, 0, -1 >;
using luminance_d = dimensions< -2, 0, 0, 0, 0, 0, 1 >;
using luminous_flux_d = dimensions< 0, 0, 0, 0, 0, 0, 1 >;
using magnetic_field_strength_d = dimensions< -1, 0, 0, 1 >;
using magnetic_flux_d = dimensions< 2, 1, -2, -1 >;
using magnetic_flux_density_d = dimensions< 0, 1, -2, -1 >;
using magnetic_permeability_d = dimensions< 1, 1, -2, -2 >;
using mass_density_d = dimensions< -3, 1, 0 >;
using mass_flow_rate_d = dimensions< 0, 1, -1 >;
using molar_energy_d = dimensions< 2, 1, -2, 0, 0, -1 >;
using molar_entropy_d = dimensions< 2, 1, -2, -1, 0, -1 >;
using moment_of_force_d = dimensions< 2, 1, -2 >;
using permittivity_d = dimensions< -3, -1, 4, 2 >;
using power_d = dimensions< 2, 1, -3 >;
using pressure_d = dimensions< -1, 1, -2 >;
using radiance_d = dimensions< 0, 1, -3 >;
using radiant_intensity_d = dimensions< 2, 1, -3 >;
using speed_d = dimensions< 1, 0, -1 >;
using specific_energy_d = dimensions< 2, 0, -2 >;
using specific_heat_capacity_d = dimensions< 2, 0, -2, 0, -1 >;
using specific_volume_d = dimensions< 3, -1, 0 >;
using substance_permeability_d = dimensions< -1, 0, 1 >;
using surface_tension_d = dimensions< 0, 1, -2 >;
using thermal_conductivity_d = dimensions< 1, 1, -3, 0, -1 >;
using thermal_diffusivity_d = dimensions< 2, 0, -1 >;
using thermal_insulance_d = dimensions< 0, -1, 3, 0, 1 >;
using thermal_resistance_d = dimensions< -2, -1, 3, 0, 1 >;
using thermal_resistivity_d = dimensions< -1, -1, 3, 0, 1 >;
using torque_d = dimensions< 2, 1, -2 >;
using volume_d = dimensions< 3, 0, 0 >;
using volume_flow_rate_d = dimensions< 3, 0, -1 >;
using wave_number_d = dimensions< -1, 0, 0 >;
// Handy values.
constexpr Rep pi { Rep( 3.141592653589793238462L ) };
constexpr Rep percent { Rep( 1 ) / 100 };
//// Not approved for use alone, but needed for use with prefixes.
constexpr quantity< mass_d > gram { kilogram / 1000 };
// The derived SI units, as specified in SP811.
constexpr Rep radian { Rep( 1 ) };
constexpr Rep steradian { Rep( 1 ) };
constexpr quantity< force_d > newton { meter * kilogram / square( second ) };
constexpr quantity< pressure_d > pascal { newton / square( meter ) };
constexpr quantity< energy_d > joule { newton * meter };
constexpr quantity< power_d > watt { joule / second };
constexpr quantity< electric_charge_d > coulomb { second * ampere };
constexpr quantity< electric_potential_d > volt { watt / ampere };
constexpr quantity< capacitance_d > farad { coulomb / volt };
constexpr quantity< electric_resistance_d > ohm { volt / ampere };
constexpr quantity< electric_conductance_d > siemens { ampere / volt };
constexpr quantity< magnetic_flux_d > weber { volt * second };
constexpr quantity< magnetic_flux_density_d > tesla { weber / square( meter ) };
constexpr quantity< inductance_d > henry { weber / ampere };
constexpr quantity< thermodynamic_temperature_d > degree_celsius { kelvin };
constexpr quantity< luminous_flux_d > lumen { candela * steradian };
constexpr quantity< illuminance_d > lux { lumen / meter / meter };
constexpr quantity< activity_of_a_nuclide_d > becquerel { 1 / second };
constexpr quantity< absorbed_dose_d > gray { joule / kilogram };
constexpr quantity< dose_equivalent_d > sievert { joule / kilogram };
constexpr quantity< frequency_d > hertz { 1 / second };
// The rest of the units approved for use with SI, as specified in SP811.
// (However, use of these units is generally discouraged.)
constexpr quantity< length_d > angstrom { Rep( 1e-10L ) * meter };
constexpr quantity< area_d > are { Rep( 1e+2L ) * square( meter ) };
constexpr quantity< pressure_d > bar { Rep( 1e+5L ) * pascal };
constexpr quantity< area_d > barn { Rep( 1e-28L ) * square( meter ) };
constexpr quantity< activity_of_a_nuclide_d > curie { Rep( 3.7e+10L ) * becquerel };
constexpr quantity< time_interval_d > day { Rep( 86400L ) * second };
constexpr Rep degree_angle { pi / 180 };
constexpr quantity< acceleration_d > gal { Rep( 1e-2L ) * meter / square( second ) };
constexpr quantity< area_d > hectare { Rep( 1e+4L ) * square( meter ) };
constexpr quantity< time_interval_d > hour { Rep( 3600 ) * second };
constexpr quantity< speed_d > knot { Rep( 1852 ) / 3600 * meter / second };
constexpr quantity< volume_d > liter { Rep( 1e-3L ) * cube( meter ) };
constexpr quantity< time_interval_d > minute { Rep( 60 ) * second };
constexpr Rep minute_angle { pi / 10800 };
constexpr quantity< length_d > mile_nautical{ Rep( 1852 ) * meter };
constexpr quantity< absorbed_dose_d > rad { Rep( 1e-2L ) * gray };
constexpr quantity< dose_equivalent_d > rem { Rep( 1e-2L ) * sievert };
constexpr quantity< exposure_d > roentgen { Rep( 2.58e-4L ) * coulomb / kilogram };
constexpr Rep second_angle { pi / 648000L };
constexpr quantity< mass_d > ton_metric { Rep( 1e+3L ) * kilogram };
// Alternate (non-US) spellings:
constexpr quantity< length_d > metre { meter };
constexpr quantity< volume_d > litre { liter };
constexpr Rep deca { deka };
constexpr quantity< mass_d > tonne { ton_metric };
// cooked literals for base units;
// these could also have been created with a script.
#define QUANTITY_DEFINE_SCALING_LITERAL( sfx, dim, factor ) \
constexpr quantity<dim, double> operator "" _ ## sfx(unsigned long long x) \
{ \
return quantity<dim, double>( detail::magnitude_tag, factor * x ); \
} \
constexpr quantity<dim, double> operator "" _ ## sfx(long double x) \
{ \
return quantity<dim, double>( detail::magnitude_tag, factor * x ); \
}
#define QUANTITY_DEFINE_SCALING_LITERALS( pfx, dim, fact ) \
QUANTITY_DEFINE_SCALING_LITERAL( Y ## pfx, dim, fact * yotta ) \
QUANTITY_DEFINE_SCALING_LITERAL( Z ## pfx, dim, fact * zetta ) \
QUANTITY_DEFINE_SCALING_LITERAL( E ## pfx, dim, fact * exa ) \
QUANTITY_DEFINE_SCALING_LITERAL( P ## pfx, dim, fact * peta ) \
QUANTITY_DEFINE_SCALING_LITERAL( T ## pfx, dim, fact * tera ) \
QUANTITY_DEFINE_SCALING_LITERAL( G ## pfx, dim, fact * giga ) \
QUANTITY_DEFINE_SCALING_LITERAL( M ## pfx, dim, fact * mega ) \
QUANTITY_DEFINE_SCALING_LITERAL( k ## pfx, dim, fact * kilo ) \
QUANTITY_DEFINE_SCALING_LITERAL( h ## pfx, dim, fact * hecto ) \
QUANTITY_DEFINE_SCALING_LITERAL( da## pfx, dim, fact * deka ) \
QUANTITY_DEFINE_SCALING_LITERAL( pfx, dim, fact * 1 ) \
QUANTITY_DEFINE_SCALING_LITERAL( d ## pfx, dim, fact * deci ) \
QUANTITY_DEFINE_SCALING_LITERAL( c ## pfx, dim, fact * centi ) \
QUANTITY_DEFINE_SCALING_LITERAL( m ## pfx, dim, fact * milli ) \
QUANTITY_DEFINE_SCALING_LITERAL( u ## pfx, dim, fact * micro ) \
QUANTITY_DEFINE_SCALING_LITERAL( n ## pfx, dim, fact * nano ) \
QUANTITY_DEFINE_SCALING_LITERAL( p ## pfx, dim, fact * pico ) \
QUANTITY_DEFINE_SCALING_LITERAL( f ## pfx, dim, fact * femto ) \
QUANTITY_DEFINE_SCALING_LITERAL( a ## pfx, dim, fact * atto ) \
QUANTITY_DEFINE_SCALING_LITERAL( z ## pfx, dim, fact * zepto ) \
QUANTITY_DEFINE_SCALING_LITERAL( y ## pfx, dim, fact * yocto )
#define QUANTITY_DEFINE_LITERALS( pfx, dim ) \
QUANTITY_DEFINE_SCALING_LITERALS( pfx, dim, 1 )
/// literals
namespace literals {
QUANTITY_DEFINE_SCALING_LITERALS( g, mass_d, 1e-3 )
QUANTITY_DEFINE_LITERALS( m , length_d )
QUANTITY_DEFINE_LITERALS( s , time_interval_d )
QUANTITY_DEFINE_LITERALS( A , electric_current_d )
QUANTITY_DEFINE_LITERALS( K , thermodynamic_temperature_d )
QUANTITY_DEFINE_LITERALS( mol, amount_of_substance_d )
QUANTITY_DEFINE_LITERALS( cd , luminous_intensity_d )
} // namespace literals
}} // namespace phys::units
#endif // PHYS_UNITS_QUANTITY_HPP_INCLUDED
/*
* end of file
*/
/**
* \file quantity_io_symbols.hpp
*
* \brief load all available unit names and symbols.
* \author Martin Moene
* \date 7 September 2013
* \since 1.0
*
* Copyright 2013 Universiteit Leiden. All rights reserved.
* This code is provided as-is, with no warrantee of correctness.
*
* Distributed under the Boost Software License, Version 1.0. (See accompanying
* file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
*/
#ifndef PHYS_UNITS_QUANTITY_IO_SYMBOLS_HPP_INCLUDED
#define PHYS_UNITS_QUANTITY_IO_SYMBOLS_HPP_INCLUDED
#include "phys/units/quantity_io_ampere.hpp"
//prefer Hertz
//#include "phys/units/quantity_io_becquerel.hpp"
#include "phys/units/quantity_io_candela.hpp"
//prefer kelvin
//#include "phys/units/quantity_io_celsius.hpp"
#include "phys/units/quantity_io_coulomb.hpp"
#include "phys/units/quantity_io_dimensionless.hpp"
#include "phys/units/quantity_io_farad.hpp"
//prefer sievert
//#include "phys/units/quantity_io_gray.hpp"
#include "phys/units/quantity_io_joule.hpp"
#include "phys/units/quantity_io_henry.hpp"
#include "phys/units/quantity_io_hertz.hpp"
#include "phys/units/quantity_io_kelvin.hpp"
#include "phys/units/quantity_io_kilogram.hpp"
//prefer Cd base unit
//#include "phys/units/quantity_io_lumen.hpp"
#include "phys/units/quantity_io_lux.hpp"
#include "phys/units/quantity_io_meter.hpp"
#include "phys/units/quantity_io_newton.hpp"
#include "phys/units/quantity_io_ohm.hpp"
#include "phys/units/quantity_io_pascal.hpp"
#include "phys/units/quantity_io_radian.hpp"
#include "phys/units/quantity_io_second.hpp"
#include "phys/units/quantity_io_siemens.hpp"
#include "phys/units/quantity_io_sievert.hpp"
#include "phys/units/quantity_io_speed.hpp"
#include "phys/units/quantity_io_steradian.hpp"
#include "phys/units/quantity_io_tesla.hpp"
#include "phys/units/quantity_io_volt.hpp"
#include "phys/units/quantity_io_watt.hpp"
#include "phys/units/quantity_io_weber.hpp"
#endif // PHYS_UNITS_QUANTITY_IO_SYMBOLS_HPP_INCLUDED
/*
* end of file
*/
#!/usr/bin/env python3
import sys, math, itertools, re, csv, pprint
import xml.etree.ElementTree as ET
from collections import OrderedDict
# for testing
def sib_to_pdg(sib_id): # adapted from sibyll2.3c.f
idmap = [22,-11,11,-13,13,111,211,-211,321,-321,
130,310,2212,2112,12,-12,14,-14,-2212,-2112,
311,-311,221,331,213,-213,113,323,-323,313,
-313,223,333,3222,3212,3112,3322,3312,3122,2224,
2214,2114,1114,3224,3214,3114,3324,3314,3334,0,
202212,202112,212212,212112,0,0,0,0,411,-411,
900111,900211,-900211,0,0,0,0,0,0,0,
421,-421,441,431,-431,433,-433,413,-413,423,
-423,0,443,4222,4212,4112,4232,4132,4122,-15,
15,-16,16,4224,4214,4114,4324,4314,4332]
pida = abs(sib_id)
if pida != 0:
ISIB_PID2PDG = idmap[pida-1]
else:
return 0
isign = lambda a, b: abs(a) if b > 0 else -abs(a)
if sib_id < 0:
ISIB_PID2PDG = isign(ISIB_PID2PDG,sib_id)
return ISIB_PID2PDG
def parse(filename):
tree = ET.parse(filename)
root = tree.getroot()
for particle in root.iter("particle"):
name = particle.attrib["name"]
pdg_id = int(particle.attrib["id"])
mass = float(particle.attrib["m0"]) # GeV
electric_charge = int(particle.attrib["chargeType"]) # in units of e/3
decay_width = float(particle.attrib.get("mWidth", 0)) # GeV
lifetime = float(particle.attrib.get("tau0", math.inf)) # mm / c
yield (pdg_id, name, mass, electric_charge)
# TODO: read decay channels from child elements
if "antiName" in particle.attrib:
name = particle.attrib['antiName']
yield (-pdg_id, name, mass, -electric_charge)
def c_identifier(name):
orig = name
name = name.upper()
for c in "() ":
name = name.replace(c, "_")
name = name.replace("BAR", "_BAR")
name = name.replace("0", "_0")
name = name.replace("/", "_")
name = name.replace("*", "_STAR")
name = name.replace("'", "_PRIME")
name = name.replace("+", "_PLUS")
name = name.replace("-", "_MINUS")
while True:
tmp = name.replace("__", "_")
if tmp == name:
break
else:
name = tmp
pattern = re.compile(r'^[A-Z_][A-Z_0-9]*$')
if pattern.match(name):
return name.strip("_")
else:
raise Exception("could not generate C identifier for '{:s}'".format(orig))
def build_pythia_db(filename):
particle_db = OrderedDict()
for (pdg, name, mass, electric_charge) in parse(filename):
c_id = c_identifier(name)
#~ print(name, c_id, sep='\t', file=sys.stderr)
#~ enums += "{:s} = {:d}, ".format(c_id, corsika_id)
particle_db[c_id] = {
"name" : name,
"pdg" : pdg,
"mass" : mass, # in GeV
"electric_charge" : electric_charge # in e/3
}
return particle_db
def read_sibyll(filename):
with open(filename, "rt", newline='') as file:
reader = csv.reader(file, delimiter=' ')
for c_id, sib_code in reader:
yield (c_id, {"sibyll" : int(sib_code)})
def gen_convert_sib_int(pythia_db):
min_sib = min((pythia_db[p]['sibyll'] for p in pythia_db if "sibyll" in pythia_db[p]))
max_sib = max((pythia_db[p]['sibyll'] for p in pythia_db if "sibyll" in pythia_db[p]))
table_size = max_sib - min_sib + 1
map_sib_int = [None] * table_size
for p in pythia_db.values():
map_sib_int[p['sibyll'] - min_sib] = (p['ngc_code'], p["name"])
string = ("constexpr int8_t min_sib = {min_sib:d};\n"
"\n"
"constexpr std::array<int16_t, {size:d}> map_sibyll_internal = {{{{\n").format(size = table_size, min_sib = min_sib)
for val in map_sib_int:
internal, name = (*val,) if val else (0, "UNUSED")
string += " {code:d}, // {name:s}\n".format(code = internal, name = name)
string += "}};\n"
return string
def gen_sibyll_enum(pythia_db):
string = "enum class SibyllParticleCode : int8_t {\n"
for k in filter(lambda k: "sibyll" in pythia_db[k], pythia_db):
string += " {key:s} = {sib:d},\n".format(key = k, sib = pythia_db[k]['sibyll'])
string += "};"
return string
def gen_convert_int_sib(pythia_db):
map_int_sib_size = len(pythia_db)
map_int_sib = [None] * map_int_sib_size
for p in pythia_db:
map_int_sib[pythia_db[p]['ngc_code']] = pythia_db[p]['sibyll'] if "sibyll" in pythia_db[p] else 0
map_int_sib_table = "constexpr std::array<int8_t, {size:d}> map_internal_sibyll = {{{{".format(size = len(map_int_sib))
for k, p in zip(map_int_sib, pythia_db.values()):
map_int_sib_table += " {:d}, // {:s}\n".format(k, p['name'])
map_int_sib_table += "}};"
return map_int_sib_table
def gen_internal_enum(pythia_db):
string = "enum class InternalParticleCode : uint8_t {\n"
for k in filter(lambda k: "ngc_code" in pythia_db[k], pythia_db):
string += " {key:s} = {sib:d},\n".format(key = k, sib = pythia_db[k]['ngc_code'])
string += "};"
return string
def gen_properties(pythia_db):
# masses
string = "static constexpr std::size_t size = {size:d};\n".format(size = len(pythia_db))
string += "static constexpr std::array<double const, size> masses{{\n"
for p in pythia_db.values():
string += " {mass:f}, // {name:s}\n".format(mass = p['mass'], name = p['name'])
string += ("}};\n"
# PDG codes
"static constexpr std::array<PDGCode const, size> pdg_codes{{\n")
for p in pythia_db.values():
string += " {pdg:d}, // {name:s}\n".format(pdg = p['pdg'], name = p['name'])
string += ("}};\n"
# name strings
"static const std::array<std::string const, size> names{{\n")
for p in pythia_db.values():
string += " \"{name:s}\",\n".format(name = p['name'])
string += ("}};\n"
# electric charges
"static constexpr std::array<int16_t, size> electric_charges{{\n")
for p in pythia_db.values():
string += " \"{charge:d}\",\n".format(charge = p['electric_charge'])
return string
if __name__ == "__main__":
pythia_db = build_pythia_db("Tools/ParticleData.xml")
for c_id, sib_info in read_sibyll("sibyll_codes.dat"):
#~ print(c_id, sib_info)
pythia_db[c_id] = {**pythia_db[c_id], **sib_info}
counter = itertools.count(0)
not_modeled = []
for p in pythia_db:
if 'sibyll' not in pythia_db[p]:
not_modeled += [p]
else:
pythia_db[p]['ngc_code'] = next(counter)
#~ print(not_modeled)
for p in not_modeled:
pythia_db.pop(p, None)
#~ # cross check hand-written tables vs sibyll's conversion
#~ for p in pythia_db:
#~ sib_db = pythia_db[p]['sibyll']
#~ pdg = pythia_db[p]['pdg']
#~ table = sib_to_pdg(sib_db)
#~ if table != pdg:
#~ raise Exception(p, sib_db, pdg, table)
with open("Framework/ParticleProperties/generated_particle_properties.inc", "w") as f:
print(gen_internal_enum(pythia_db), file=f)
print(gen_properties(pythia_db), file=f)
with open("generated_sibyll.inc", "w") as f:
print(gen_sibyll_enum(pythia_db), file=f)
print(gen_convert_sib_int(pythia_db), file=f)
print(gen_convert_int_sib(pythia_db), file=f)
#~ print(pdg_id_table, mass_table, name_table, enums, sep='\n\n')
E_MINUS 3
E_PLUS 2
NU_E 15
NU_E_BAR 16
MU_MINUS 5
MU_PLUS 4
NU_MU 17
NU_MU_BAR 18
TAU_MINUS 91
TAU_PLUS 90
NU_TAU 92
NU_TAU_BAR 93
GAMMA 1
PI_0 6
RHO_0 27
K_L_0 11
PI_PLUS 7
PI_MINUS 8
RHO_PLUS 25
RHO_MINUS 26
ETA 23
OMEGA 32
K_S_0 12
K_STAR_0 30
K_STAR_BAR_0 31
K_PLUS 9
K_MINUS 10
K_STAR_PLUS 28
K_STAR_MINUS 29
D_PLUS 59
D_MINUS 60
D_STAR_PLUS 78
D_STAR_MINUS 79
D_0 71
D_BAR_0 72
D_STAR_0 80
D_STAR_BAR_0 81
D_S_PLUS 74
D_S_MINUS 75
D_STAR_S_PLUS 76
D_STAR_S_MINUS 77
ETA_C 73
N_0 14
N_BAR_0 -14
DELTA_0 42
DELTA_BAR_0 -42
P_PLUS 13
P_BAR_MINUS -13
DELTA_PLUS 41
DELTA_BAR_MINUS -41
DELTA_PLUS_PLUS 40
DELTA_BAR_MINUS_MINUS -40
SIGMA_MINUS 36
SIGMA_BAR_PLUS -36
LAMBDA_0 39
LAMBDA_BAR_0 -39
SIGMA_0 35
SIGMA_BAR_0 -35
SIGMA_PLUS 34
SIGMA_BAR_MINUS -34
XI_MINUS 38
XI_BAR_PLUS -38
XI_0 37
XI_BAR_0 -37
OMEGA_MINUS 49
OMEGA_BAR_PLUS -49
SIGMA_C_0 86
SIGMA_C_BAR_0 -86
SIGMA_STAR_C_0 96
SIGMA_STAR_C_BAR_0 -96
LAMBDA_C_PLUS 89
LAMBDA_C_BAR_MINUS -89
XI_C_0 88
XI_C_BAR_0 -88
SIGMA_C_PLUS 85
SIGMA_C_BAR_MINUS -85
SIGMA_STAR_C_PLUS 95
SIGMA_STAR_C_BAR_MINUS -95
SIGMA_C_PLUS_PLUS 84
SIGMA_C_BAR_MINUS_MINUS -84
SIGMA_STAR_C_PLUS_PLUS 94
SIGMA_STAR_C_BAR_MINUS_MINUS -94
XI_C_PLUS 87
XI_C_BAR_MINUS -87
OMEGA_C_0 99
OMEGA_C_BAR_0 -99
J_PSI 83
VOID 0
# Usage and collaboration agreement
The CORSIKA 8 project very much welcomes all collaboration and
contributions. The aim of the project is to create a
scientific software framework as a fundamental tool for research.
The project consists of the contributions from the scientific
community and individuals in a best effort to deliver the best
possible performance and physics output.
## The software license of the CORSIKA project
CORSIKA 8 is by default released under the BSD 3-Clause License, as copied in full in the file
[LICENSE](LICENSE). Each source file of the CORSIKA project contains a
short statement of the copyright and this license. Each binary or
source code release of CORSIKA contains the file LICENSE.
The code, documentation and content in the folder [./externals](./externals)
is not an integral part of the CORSIKA project and can be based on, or
include, other licenses, which must be compatible with the CORSIKA 8 license.
The folder [./modules](./modules) contains the code of several
external physics models for your convenience. They each come with
their own license which we ask you to honor. Please also make sure to cite the
adequate reference papers when using their models in scientific work
and publications.
Of course, we have the authors' consent to
distribute their code together with CORSIKA 8.
Check the content of these folders carefully for details and additional
license information. To what extent this code is used to build
CORSIKA depends on the configuration of the build system.
## Contributing
If you want to contribute, you need to read
[the contributing GUIDELINES](CONTRIBUTING.md) and comply with these rules, or help to
improve them.
## General guidelines
We reproduce below some guidelines copied from http://www.montecarlonet.org/ that we also ask you to follow in the spirit of academic collaboration.
1) The integrity of the program should be respected.
-------------------------------------------------
1.1) Suspected bugs and proposed fixes should be reported back to the
original authors to be considered for inclusion in the standard
distribution. No independently developed and maintained forks
should be created as long as the original authors actively work on
the program.
1.2) The program should normally be redistributed in its entirety.
When there are special reasons, an agreement should be sought with
the original authors to redistribute only specific parts. This
should be arranged such that the redistributed parts remain
updated in step with the standard distribution.
1.3) Any changes in the code must be clearly marked in the source
(reason, author, date) and documented. If any modified version is
redistributed it should be stated at the point of distribution
(download link) that it has been modified and why.
1.4) If a significant part of the code is used by another program,
this should be clearly specified in that program's documentation and
stated at its point of distribution.
1.5) Copyright information and references may not be removed.
Copyright-related program messages may not be altered and must be
printed even if only a part of the program is used. Adding further
messages specifying any modifications is encouraged.
2) The program and its physics should be properly cited when used for
academic publications
------------------------------------------------------------------
2.1) The main software reference as designated by the program authors
should always be cited.
2.2) In addition, the original literature on which the program is based
should be cited to the extent that it is of relevance for a study,
applying the same threshold criteria as for other literature.
2.3) When several programs are combined, they should all be mentioned,
commensurate with their importance for the physics study at hand.
2.4) To make published results reproducible, the exact versions of the
codes that were used and any relevant program and parameter
modifications should be spelled out.
(These guidelines were originally edited by Nils Lavesson and David Grellscheid
for the MCnet collaboration, which has approved and agreed to respect
them. MCnet is a Marie Curie Research Training Network funded under
Framework Programme 6 contract MRTN-CT-2006-035606.)
SET(CMAKE_INSTALL_RPATH "${CMAKE_INSTALL_PREFIX}/lib")
add_executable (c8_air_shower c8_air_shower.cpp)
target_link_libraries (c8_air_shower CORSIKA8)
if(WITH_FLUKA)
message("compiling c8_air_shower.cpp with FLUKA")
target_compile_definitions(c8_air_shower PRIVATE WITH_FLUKA)
else()
message("compiling c8_air_shower.cpp with UrQMD")
endif()
install (
TARGETS c8_air_shower DESTINATION bin
)
# CORSIKA 8 Applications
This directory contains standard applications for typical astroparticle physics use cases.
They are "physics-complete" and suitable for generating simulations that can be used in publications.
For example, `c8_air_shower.cpp` builds a binary comparable to what CORSIKA 7 provides and simulates
air showers in a curved atmosphere.
/*
* (c) Copyright 2018 CORSIKA Project, corsika-project@lists.kit.edu
*
* This software is distributed under the terms of the 3-clause BSD license.
* See file LICENSE for a full version of the license.
*/
/* clang-format off */
// InteractionCounter uses boost/histogram, which
// fails if boost/type_traits have been included before. Thus, we have
// to include it first...
#include <corsika/framework/process/InteractionCounter.hpp>
/* clang-format on */
#include <corsika/framework/core/Cascade.hpp>
#include <corsika/framework/core/EnergyMomentumOperations.hpp>
#include <corsika/framework/core/Logging.hpp>
#include <corsika/framework/core/PhysicalUnits.hpp>
#include <corsika/framework/geometry/PhysicalGeometry.hpp>
#include <corsika/framework/geometry/Plane.hpp>
#include <corsika/framework/geometry/Sphere.hpp>
#include <corsika/framework/process/DynamicInteractionProcess.hpp>
#include <corsika/framework/process/ProcessSequence.hpp>
#include <corsika/framework/process/SwitchProcessSequence.hpp>
#include <corsika/framework/random/RNGManager.hpp>
#include <corsika/framework/random/PowerLawDistribution.hpp>
#include <corsika/framework/utility/CorsikaFenv.hpp>
#include <corsika/framework/utility/SaveBoostHistogram.hpp>
#include <corsika/modules/writers/EnergyLossWriter.hpp>
#include <corsika/modules/writers/InteractionWriter.hpp>
#include <corsika/modules/writers/LongitudinalWriter.hpp>
#include <corsika/modules/writers/PrimaryWriter.hpp>
#include <corsika/modules/writers/SubWriter.hpp>
#include <corsika/output/OutputManager.hpp>
#include <corsika/media/CORSIKA7Atmospheres.hpp>
#include <corsika/media/Environment.hpp>
#include <corsika/media/GeomagneticModel.hpp>
#include <corsika/media/GladstoneDaleRefractiveIndex.hpp>
#include <corsika/media/HomogeneousMedium.hpp>
#include <corsika/media/IMagneticFieldModel.hpp>
#include <corsika/media/LayeredSphericalAtmosphereBuilder.hpp>
#include <corsika/media/MediumPropertyModel.hpp>
#include <corsika/media/NuclearComposition.hpp>
#include <corsika/media/ShowerAxis.hpp>
#include <corsika/media/UniformMagneticField.hpp>
#include <corsika/modules/BetheBlochPDG.hpp>
#include <corsika/modules/Epos.hpp>
#include <corsika/modules/LongitudinalProfile.hpp>
#include <corsika/modules/ObservationPlane.hpp>
#include <corsika/modules/PROPOSAL.hpp>
#include <corsika/modules/ParticleCut.hpp>
#include <corsika/modules/Pythia8.hpp>
#include <corsika/modules/QGSJetII.hpp>
#include <corsika/modules/Sibyll.hpp>
#include <corsika/modules/Sophia.hpp>
#include <corsika/modules/StackInspector.hpp>
#include <corsika/modules/thinning/EMThinning.hpp>
// for ICRC2023
#ifdef WITH_FLUKA
#include <corsika/modules/FLUKA.hpp>
#else
#include <corsika/modules/UrQMD.hpp>
#endif
#include <corsika/modules/TAUOLA.hpp>
#include <corsika/modules/radio/CoREAS.hpp>
#include <corsika/modules/radio/RadioProcess.hpp>
#include <corsika/modules/radio/ZHS.hpp>
#include <corsika/modules/radio/observers/Observer.hpp>
#include <corsika/modules/radio/observers/TimeDomainObserver.hpp>
#include <corsika/modules/radio/detectors/ObserverCollection.hpp>
#include <corsika/modules/radio/propagators/TabulatedFlatAtmospherePropagator.hpp>
#include <corsika/setup/SetupStack.hpp>
#include <corsika/setup/SetupTrajectory.hpp>
#include <corsika/setup/SetupC7trackedParticles.hpp>
#include <boost/filesystem.hpp>
#include <CLI/App.hpp>
#include <CLI/Config.hpp>
#include <CLI/Formatter.hpp>
#include <cstdlib>
#include <iomanip>
#include <limits>
#include <string>
using namespace corsika;
using namespace std;
using EnvironmentInterface =
IRefractiveIndexModel<IMediumPropertyModel<IMagneticFieldModel<IMediumModel>>>;
using EnvType = Environment<EnvironmentInterface>;
using StackType = setup::Stack<EnvType>;
using TrackingType = setup::Tracking;
using Particle = StackType::particle_type;
//
// This is the main example script which runs EAS with fairly standard settings
// w.r.t. what was implemented in CORSIKA 7. Users may want to change some of the
// specifics (observation altitude, magnetic field, energy cuts, etc.), but this
// example is the most physics-complete one and should be used for full simulations
// of particle cascades in air
//
long registerRandomStreams(long seed) {
RNGManager<>::getInstance().registerRandomStream("cascade");
RNGManager<>::getInstance().registerRandomStream("qgsjet");
RNGManager<>::getInstance().registerRandomStream("sibyll");
RNGManager<>::getInstance().registerRandomStream("sophia");
RNGManager<>::getInstance().registerRandomStream("epos");
RNGManager<>::getInstance().registerRandomStream("pythia");
RNGManager<>::getInstance().registerRandomStream("urqmd");
RNGManager<>::getInstance().registerRandomStream("fluka");
RNGManager<>::getInstance().registerRandomStream("proposal");
RNGManager<>::getInstance().registerRandomStream("thinning");
RNGManager<>::getInstance().registerRandomStream("primary_particle");
if (seed == 0) {
std::random_device rd;
seed = rd();
CORSIKA_LOG_INFO("random seed (auto) {}", seed);
} else {
CORSIKA_LOG_INFO("random seed {}", seed);
}
RNGManager<>::getInstance().setSeed(seed);
return seed;
}
template <typename T>
using MyExtraEnv =
GladstoneDaleRefractiveIndex<MediumPropertyModel<UniformMagneticField<T>>>;
int main(int argc, char** argv) {
// the main command line description
CLI::App app{"Simulate standard (downgoing) showers with CORSIKA 8."};
CORSIKA_LOG_INFO(
"Please cite the following papers when using CORSIKA 8:\n"
" - \"Towards a Next Generation of CORSIKA: A Framework for the Simulation of "
"Particle Cascades in Astroparticle Physics\", Comput. Softw. Big Sci. 3 (2019) "
"2, https://doi.org/10.1007/s41781-018-0013-0\n"
" - \"Simulating radio emission from particle cascades with CORSIKA 8\", "
"Astropart. Phys. 166 (2025) 103072, "
"https://doi.org/10.1016/j.astropartphys.2024.103072");
//////// Primary options ////////
// some options that we want to fill in
int A, Z, nevent = 0;
std::vector<double> cli_energy_range;
// the following section adds the options to the parser
// we start by defining a sub-group for the primary ID
auto opt_Z = app.add_option("-Z", Z, "Atomic number for primary")
->check(CLI::Range(0, 26))
->group("Primary");
auto opt_A = app.add_option("-A", A, "Atomic mass number for primary")
->needs(opt_Z)
->check(CLI::Range(1, 58))
->group("Primary");
app.add_option("-p,--pdg",
"PDG code for primary (p=2212, gamma=22, e-=11, nu_e=12, mu-=13, "
"nu_mu=14, tau=15, nu_tau=16).")
->excludes(opt_A)
->excludes(opt_Z)
->group("Primary");
app.add_option("-E,--energy", "Primary energy in GeV")->default_val(0);
app.add_option("--energy_range", cli_energy_range,
"Low and high values that define the range of the primary energy in GeV")
->expected(2)
->check(CLI::PositiveNumber)
->group("Primary");
app.add_option("--eslope", "Spectral index for sampling energies, dN/dE = E^eSlope")
->default_val(-1.0)
->group("Primary");
app.add_option("-z,--zenith", "Primary zenith angle (deg)")
->default_val(0.)
->check(CLI::Range(0., 90.))
->group("Primary");
app.add_option("-a,--azimuth", "Primary azimuth angle (deg)")
->default_val(0.)
->check(CLI::Range(0., 360.))
->group("Primary");
//////// Config options ////////
app.add_option("--emcut",
"Min. kin. energy of photons, electrons and "
"positrons in tracking (GeV)")
->default_val(0.5e-3)
->check(CLI::Range(0.000001, 1.e13))
->group("Config");
app.add_option("--hadcut", "Min. kin. energy of hadrons in tracking (GeV)")
->default_val(0.3)
->check(CLI::Range(0.02, 1.e13))
->group("Config");
app.add_option("--mucut", "Min. kin. energy of muons in tracking (GeV)")
->default_val(0.3)
->check(CLI::Range(0.000001, 1.e13))
->group("Config");
app.add_option("--taucut", "Min. kin. energy of tau leptons in tracking (GeV)")
->default_val(0.3)
->check(CLI::Range(0.000001, 1.e13))
->group("Config");
app.add_option("--max-deflection-angle",
"maximal deflection angle in tracking in radians")
->default_val(0.2)
->check(CLI::Range(1.e-8, 1.))
->group("Config");
bool track_neutrinos = false;
app.add_flag("--track-neutrinos", track_neutrinos, "switch on tracking of neutrinos")
->group("Config");
//////// Misc options ////////
app.add_option("--neutrino-interaction-type",
"charged (CC) or neutral current (NC) or both")
->default_val("both")
->check(CLI::IsMember({"neutral", "NC", "charged", "CC", "both"}))
->group("Misc.");
app.add_option("--observation-level",
"Height above earth radius of the observation level (in m)")
->default_val(0.)
->check(CLI::Range(-1.e3, 1.e5))
->group("Config");
app.add_option("--injection-height",
"Height above earth radius of the injection point (in m)")
->default_val(112.75e3)
->check(CLI::Range(-1.e3, 1.e6))
->group("Config");
app.add_option("-N,--nevent", nevent, "The number of events/showers to run.")
->default_val(1)
->check(CLI::PositiveNumber)
->group("Library/Output");
app.add_option("-f,--filename", "Filename for output library.")
->required()
->default_val("corsika_library")
->check(CLI::NonexistentPath)
->group("Library/Output");
bool compressOutput = false;
app.add_flag("--compress", compressOutput, "Compress the output directory to a tarball")
->group("Library/Output");
app.add_option("-s,--seed", "The random number seed.")
->default_val(0)
->check(CLI::NonNegativeNumber)
->group("Misc.");
bool force_interaction = false;
app.add_flag("--force-interaction", force_interaction,
"Force the location of the first interaction.")
->group("Misc.");
bool force_decay = false;
app.add_flag("--force-decay", force_decay, "Force the primary to immediately decay")
->group("Misc.");
bool disable_interaction_hists = false;
app.add_flag("--disable-interaction-histograms", disable_interaction_hists,
"Store interaction histograms")
->group("Misc.");
app.add_option("-v,--verbosity", "Verbosity level: warn, info, debug, trace.")
->default_val("info")
->check(CLI::IsMember({"warn", "info", "debug", "trace"}))
->group("Misc.");
app.add_option("-M,--hadronModel", "High-energy hadronic interaction model")
->default_val("SIBYLL-2.3d")
->check(CLI::IsMember({"SIBYLL-2.3d", "QGSJet-II.04", "EPOS-LHC", "Pythia8"}))
->group("Misc.");
app.add_option("-T,--hadronModelTransitionEnergy",
"Transition between high-/low-energy hadronic interaction "
"model in GeV")
->default_val(std::pow(10, 1.9)) // 79.4 GeV
->check(CLI::NonNegativeNumber)
->group("Misc.");
//////// Thinning options ////////
app.add_option("--emthin",
"fraction of primary energy at which thinning of EM particles starts")
->default_val(1.e-6)
->check(CLI::Range(0., 1.))
->group("Thinning");
app.add_option("--max-weight",
"maximum weight for thinning of EM particles (0 to select Kobal's "
"optimum times 0.5)")
->default_val(0)
->check(CLI::NonNegativeNumber)
->group("Thinning");
bool multithin = false;
app.add_flag("--multithin", multithin, "keep thinned particles (with weight=0)")
->group("Thinning");
app.add_option("--ring", "concentric ring of star shape pattern of observers")
->default_val(0)
->check(CLI::Range(0, 20))
->group("Radio");
// parse the command line options into the variables
CLI11_PARSE(app, argc, argv);
if (app.count("--verbosity")) {
auto const loglevel = app["--verbosity"]->as<std::string>();
if (loglevel == "warn") {
logging::set_level(logging::level::warn);
} else if (loglevel == "info") {
logging::set_level(logging::level::info);
} else if (loglevel == "debug") {
logging::set_level(logging::level::debug);
} else if (loglevel == "trace") {
#ifndef _C8_DEBUG_
CORSIKA_LOG_ERROR("trace log level requires a Debug build.");
return 1;
#endif
logging::set_level(logging::level::trace);
}
}
// check that we got either PDG or A/Z
// this can be done with option_groups but the ordering
// gets all messed up
if (app.count("--pdg") == 0) {
if ((app.count("-A") == 0) || (app.count("-Z") == 0)) {
CORSIKA_LOG_ERROR("If --pdg is not provided, then both -A and -Z are required.");
return 1;
}
}
// initialize random number sequence(s)
auto seed = registerRandomStreams(app["--seed"]->as<long>());
/* === START: SETUP ENVIRONMENT AND ROOT COORDINATE SYSTEM === */
EnvType env;
CoordinateSystemPtr const& rootCS = env.getCoordinateSystem();
Point const center{rootCS, 0_m, 0_m, 0_m};
Point const surface_{rootCS, 0_m, 0_m, constants::EarthRadius::Mean};
GeomagneticModel wmm(center, corsika_data("GeoMag/WMM.COF"));
// build an atmosphere with Keilhauer's parametrization of the
// US standard atmosphere into `env`
create_5layer_atmosphere<EnvironmentInterface, MyExtraEnv>(
env, AtmosphereId::USStdBK, center, 1.000327, surface_, Medium::AirDry1Atm,
MagneticFieldVector{rootCS, 50_uT, 0_T, 0_T});
/* === END: SETUP ENVIRONMENT AND ROOT COORDINATE SYSTEM === */
/* === START: CONSTRUCT PRIMARY PARTICLE === */
// parse the primary ID as a PDG or A/Z code
Code beamCode;
// check if we want to use a PDG code instead
if (app.count("--pdg") > 0) {
beamCode = convert_from_PDG(PDGCode(app["--pdg"]->as<int>()));
} else {
// check manually for proton and neutrons
if ((A == 1) && (Z == 1))
beamCode = Code::Proton;
else if ((A == 1) && (Z == 0))
beamCode = Code::Neutron;
else
beamCode = get_nucleus_code(A, Z);
}
HEPEnergyType eMin = 0_GeV;
HEPEnergyType eMax = 0_GeV;
// check the particle energy parameters
if (app["--energy"]->as<double>() > 0.0) {
eMin = app["--energy"]->as<double>() * 1_GeV;
eMax = app["--energy"]->as<double>() * 1_GeV;
} else if (cli_energy_range.size()) {
if (cli_energy_range[0] > cli_energy_range[1]) {
CORSIKA_LOG_WARN(
"Energy range lower bound is greater than upper bound. swapping...");
eMin = cli_energy_range[1] * 1_GeV;
eMax = cli_energy_range[0] * 1_GeV;
} else {
eMin = cli_energy_range[0] * 1_GeV;
eMax = cli_energy_range[1] * 1_GeV;
}
} else {
CORSIKA_LOG_CRITICAL(
"Must set either the (--energy) flag or the (--energy_range) flag to "
"positive value(s)");
return EXIT_FAILURE;
}
// direction of the shower in (theta, phi) space
auto const thetaRad = app["--zenith"]->as<double>() / 180. * M_PI;
auto const phiRad = app["--azimuth"]->as<double>() / 180. * M_PI;
auto const [nx, ny, nz] = std::make_tuple(sin(thetaRad) * cos(phiRad),
sin(thetaRad) * sin(phiRad), -cos(thetaRad));
auto propDir = DirectionVector(rootCS, {nx, ny, nz});
/* === END: CONSTRUCT PRIMARY PARTICLE === */
/* === START: CONSTRUCT GEOMETRY === */
auto const observationHeight =
app["--observation-level"]->as<double>() * 1_m + constants::EarthRadius::Mean;
auto const injectionHeight =
app["--injection-height"]->as<double>() * 1_m + constants::EarthRadius::Mean;
auto const t = -observationHeight * cos(thetaRad) +
sqrt(-static_pow<2>(sin(thetaRad) * observationHeight) +
static_pow<2>(injectionHeight));
Point const showerCore{rootCS, 0_m, 0_m, observationHeight};
Point const injectionPos =
showerCore + DirectionVector{rootCS,
{-sin(thetaRad) * cos(phiRad),
-sin(thetaRad) * sin(phiRad), cos(thetaRad)}} *
t;
// we make the axis much longer than the inj-core distance since the
// profile will go beyond the core, depending on zenith angle
ShowerAxis const showerAxis{injectionPos, (showerCore - injectionPos) * 1.2, env};
auto const dX = 10_g / square(1_cm); // Binning of the writers along the shower axis
/* === END: CONSTRUCT GEOMETRY === */
std::stringstream args;
for (int i = 0; i < argc; ++i) { args << argv[i] << " "; }
// create the output manager that we then register outputs with
OutputManager output(app["--filename"]->as<std::string>(), seed, args.str(),
compressOutput);
// register energy losses as output
EnergyLossWriter dEdX{showerAxis, dX};
output.add("energyloss", dEdX);
DynamicInteractionProcess<StackType> heModel;
auto const all_elements = corsika::get_all_elements_in_universe(env);
// SIBYLL is always instantiated since PROPOSAL needs it for photo-hadronic interactions
auto sibyll = std::make_shared<corsika::sibyll::Interaction>(
all_elements, corsika::setup::C7trackedParticles);
if (auto const modelStr = app["--hadronModel"]->as<std::string>();
modelStr == "SIBYLL-2.3d") {
heModel = DynamicInteractionProcess<StackType>{sibyll};
} else if (modelStr == "QGSJet-II.04") {
heModel = DynamicInteractionProcess<StackType>{
std::make_shared<corsika::qgsjetII::Interaction>()};
} else if (modelStr == "EPOS-LHC") {
heModel = DynamicInteractionProcess<StackType>{
std::make_shared<corsika::epos::Interaction>(corsika::setup::C7trackedParticles)};
} else if (modelStr == "Pythia8") {
heModel = DynamicInteractionProcess<StackType>{
std::make_shared<corsika::pythia8::Interaction>(
corsika::setup::C7trackedParticles)};
} else {
CORSIKA_LOG_CRITICAL("invalid choice \"{}\"; also check argument parser", modelStr);
return EXIT_FAILURE;
}
InteractionCounter heCounted{heModel};
corsika::pythia8::Decay decayPythia;
// tau decay via TAUOLA (hard coded to left handed)
corsika::tauola::Decay decayTauola(corsika::tauola::Helicity::LeftHanded);
struct IsTauSwitch {
bool operator()(const Particle& p) const {
return (p.getPID() == Code::TauMinus || p.getPID() == Code::TauPlus);
}
};
auto decaySequence = make_select(IsTauSwitch(), decayTauola, decayPythia);
// neutrino interactions with pythia (options are: NC, CC)
bool NC = false;
bool CC = false;
if (auto const nuIntStr = app["--neutrino-interaction-type"]->as<std::string>();
nuIntStr == "neutral" || nuIntStr == "NC") {
NC = true;
CC = false;
} else if (nuIntStr == "charged" || nuIntStr == "CC") {
NC = false;
CC = true;
} else if (nuIntStr == "both") {
NC = true;
CC = true;
}
corsika::pythia8::NeutrinoInteraction neutrinoPrimaryPythia(
corsika::setup::C7trackedParticles, NC, CC);
// hadronic photon interactions in resonance region
corsika::sophia::InteractionModel sophia;
HEPEnergyType const emcut = 1_GeV * app["--emcut"]->as<double>();
HEPEnergyType const hadcut = 1_GeV * app["--hadcut"]->as<double>();
HEPEnergyType const mucut = 1_GeV * app["--mucut"]->as<double>();
HEPEnergyType const taucut = 1_GeV * app["--taucut"]->as<double>();
ParticleCut<SubWriter<decltype(dEdX)>> cut(emcut, emcut, hadcut, mucut, taucut,
!track_neutrinos, dEdX);
// tell proposal that we are interested in all energy losses above the particle cut
auto const prod_threshold = std::min({emcut, hadcut, mucut, taucut});
set_energy_production_threshold(Code::Electron, prod_threshold);
set_energy_production_threshold(Code::Positron, prod_threshold);
set_energy_production_threshold(Code::Photon, prod_threshold);
set_energy_production_threshold(Code::MuMinus, prod_threshold);
set_energy_production_threshold(Code::MuPlus, prod_threshold);
set_energy_production_threshold(Code::TauMinus, prod_threshold);
set_energy_production_threshold(Code::TauPlus, prod_threshold);
// energy threshold for high energy hadronic model. Affects LE/HE switch for
// hadron interactions and the hadronic photon model in proposal
HEPEnergyType const heHadronModelThreshold =
1_GeV * app["--hadronModelTransitionEnergy"]->as<double>();
corsika::proposal::Interaction emCascade(
env, sophia, sibyll->getHadronInteractionModel(), heHadronModelThreshold);
// use BetheBlochPDG for hadronic continuous losses, and proposal otherwise
corsika::proposal::ContinuousProcess<SubWriter<decltype(dEdX)>> emContinuousProposal(
env, dEdX);
BetheBlochPDG<SubWriter<decltype(dEdX)>> emContinuousBethe{dEdX};
struct EMHadronSwitch {
EMHadronSwitch() = default;
bool operator()(const Particle& p) const { return is_hadron(p.getPID()); }
};
auto emContinuous =
make_select(EMHadronSwitch(), emContinuousBethe, emContinuousProposal);
LongitudinalWriter profile{showerAxis, dX};
output.add("profile", profile);
LongitudinalProfile<SubWriter<decltype(profile)>> longprof{profile};
// for ICRC2023
#ifdef WITH_FLUKA
corsika::fluka::Interaction leIntModel{all_elements};
#else
corsika::urqmd::UrQMD leIntModel{};
#endif
InteractionCounter leIntCounted{leIntModel};
// assemble all processes into an ordered process list
struct EnergySwitch {
HEPEnergyType cutE_;
EnergySwitch(HEPEnergyType cutE)
: cutE_(cutE) {}
bool operator()(const Particle& p) const { return (p.getKineticEnergy() < cutE_); }
};
auto hadronSequence =
make_select(EnergySwitch(heHadronModelThreshold), leIntCounted, heCounted);
// observation plane
Plane const obsPlane(showerCore, DirectionVector(rootCS, {0., 0., 1.}));
ObservationPlane<TrackingType, ParticleWriterParquet> observationLevel{
obsPlane, DirectionVector(rootCS, {1., 0., 0.}),
true, // plane should "absorb" particles
false}; // do not print z-coordinate
// register ground particle output
output.add("particles", observationLevel);
PrimaryWriter<TrackingType, ParticleWriterParquet> primaryWriter(observationLevel);
output.add("primary", primaryWriter);
int ring_number{app["--ring"]->as<int>()};
auto const radius_{ring_number * 25_m};
const int rr_ = static_cast<int>(radius_ / 1_m);
// Radio observers and relevant information
// the observer time variables
const TimeType duration_{4e-7_s};
const InverseTimeType sampleRate_{1e+9_Hz};
// the observer collection for CoREAS and ZHS
ObserverCollection<TimeDomainObserver> detectorCoREAS;
ObserverCollection<TimeDomainObserver> detectorZHS;
auto const showerCoreX_{showerCore.getCoordinates().getX()};
auto const showerCoreY_{showerCore.getCoordinates().getY()};
auto const injectionPosX_{injectionPos.getCoordinates().getX()};
auto const injectionPosY_{injectionPos.getCoordinates().getY()};
auto const injectionPosZ_{injectionPos.getCoordinates().getZ()};
auto const triggerpoint_{Point(rootCS, injectionPosX_, injectionPosY_, injectionPosZ_)};
if (ring_number != 0) {
// setup CoREAS observers - use the for loop for star shape pattern
for (auto phi_1 = 0; phi_1 <= 315; phi_1 += 45) {
auto phiRad_1 = phi_1 / 180. * M_PI;
auto const point_1{Point(rootCS, showerCoreX_ + radius_ * cos(phiRad_1),
showerCoreY_ + radius_ * sin(phiRad_1),
constants::EarthRadius::Mean)};
std::cout << "Observer point CoREAS: " << point_1 << std::endl;
auto triggertime_1{(triggerpoint_ - point_1).getNorm() / constants::c};
std::string name_1 = "CoREAS_R=" + std::to_string(rr_) +
"_m--Phi=" + std::to_string(phi_1) + "degrees";
TimeDomainObserver observer_1(name_1, point_1, rootCS, triggertime_1, duration_,
sampleRate_, triggertime_1);
detectorCoREAS.addObserver(observer_1);
}
// setup ZHS observers - use the for loop for star shape pattern
for (auto phi_ = 0; phi_ <= 315; phi_ += 45) {
auto phiRad_ = phi_ / 180. * M_PI;
auto const point_{Point(rootCS, showerCoreX_ + radius_ * cos(phiRad_),
showerCoreY_ + radius_ * sin(phiRad_),
constants::EarthRadius::Mean)};
std::cout << "Observer point ZHS: " << point_ << std::endl;
auto triggertime_{(triggerpoint_ - point_).getNorm() / constants::c};
std::string name_ =
"ZHS_R=" + std::to_string(rr_) + "_m--Phi=" + std::to_string(phi_) + "degrees";
TimeDomainObserver observer_2(name_, point_, rootCS, triggertime_, duration_,
sampleRate_, triggertime_);
detectorZHS.addObserver(observer_2);
}
}
LengthType const step = 1_m;
auto TP =
make_tabulated_flat_atmosphere_radio_propagator(env, injectionPos, surface_, step);
// initiate CoREAS
RadioProcess<decltype(detectorCoREAS), CoREAS<decltype(detectorCoREAS), decltype(TP)>,
decltype(TP)>
coreas(detectorCoREAS, TP);
// register CoREAS with the output manager
output.add("CoREAS", coreas);
// initiate ZHS
RadioProcess<decltype(detectorZHS), ZHS<decltype(detectorZHS), decltype(TP)>,
decltype(TP)>
zhs(detectorZHS, TP);
// register ZHS with the output manager
output.add("ZHS", zhs);
// make and register the first interaction writer
InteractionWriter<setup::Tracking, ParticleWriterParquet> inter_writer(
showerAxis, observationLevel);
output.add("interactions", inter_writer);
/* === END: SETUP PROCESS LIST === */
// trigger the output manager to open the library for writing
output.startOfLibrary();
// loop over each shower
for (int i_shower = 1; i_shower < nevent + 1; i_shower++) {
CORSIKA_LOG_INFO("Shower {} / {} ", i_shower, nevent);
// randomize the primary energy
double const eSlope = app["--eslope"]->as<double>();
PowerLawDistribution<HEPEnergyType> powerLawRng(eSlope, eMin, eMax);
HEPEnergyType const primaryTotalEnergy =
(eMax == eMin) ? eMin
: powerLawRng(RNGManager<>::getInstance().getRandomStream(
"primary_particle"));
auto const eKin = primaryTotalEnergy - get_mass(beamCode);
// set up thinning based on primary parameters
double const emthinfrac = app["--emthin"]->as<double>();
double const maxWeight = std::invoke([&]() {
if (auto const wm = app["--max-weight"]->as<double>(); wm > 0)
return wm;
else
return 0.5 * emthinfrac * primaryTotalEnergy / 1_GeV;
});
EMThinning thinning{emthinfrac * primaryTotalEnergy, maxWeight, !multithin};
// set up the stack inspector
StackInspector<StackType> stackInspect(10000, false, primaryTotalEnergy);
// assemble the final process sequence
auto sequence =
make_sequence(stackInspect, neutrinoPrimaryPythia, hadronSequence, decaySequence,
emCascade, emContinuous, coreas, zhs, longprof, observationLevel,
inter_writer, thinning, cut);
// create the cascade object using the default stack and tracking
// implementation
TrackingType tracking(app["--max-deflection-angle"]->as<double>());
StackType stack;
Cascade EAS(env, tracking, sequence, output, stack);
// setup particle stack, and add primary particle
stack.clear();
// print our primary parameters all in one place
CORSIKA_LOG_INFO("Primary name: {}", beamCode);
if (app["--pdg"]->count() > 0) {
CORSIKA_LOG_INFO("Primary PDG ID: {}", app["--pdg"]->as<int>());
} else {
CORSIKA_LOG_INFO("Primary Z/A: {}/{}", Z, A);
}
CORSIKA_LOG_INFO("Primary Total Energy: {}", primaryTotalEnergy);
CORSIKA_LOG_INFO("Primary Momentum: {}",
calculate_momentum(primaryTotalEnergy, get_mass(beamCode)));
CORSIKA_LOG_INFO("Primary Direction: {}", propDir.getNorm());
CORSIKA_LOG_INFO("Point of Injection: {}", injectionPos.getCoordinates());
CORSIKA_LOG_INFO("Shower Axis Length: {}",
(showerCore - injectionPos).getNorm() * 1.2);
// add the desired particle to the stack
auto const primaryProperties =
std::make_tuple(beamCode, eKin, propDir.normalized(), injectionPos, 0_ns);
stack.addParticle(primaryProperties);
// if we want to fix the first location of the shower
if (force_interaction) {
CORSIKA_LOG_INFO("Fixing first interaction at injection point.");
EAS.forceInteraction();
}
if (force_decay) {
CORSIKA_LOG_INFO("Forcing the primary to decay");
EAS.forceDecay();
}
primaryWriter.recordPrimary(primaryProperties);
// run the shower
EAS.run();
HEPEnergyType const Efinal =
dEdX.getEnergyLost() + observationLevel.getEnergyGround();
CORSIKA_LOG_INFO(
"total energy budget (GeV): {} (dEdX={} ground={}), "
"relative difference (%): {}",
Efinal / 1_GeV, dEdX.getEnergyLost() / 1_GeV,
observationLevel.getEnergyGround() / 1_GeV,
(Efinal / primaryTotalEnergy - 1) * 100);
if (!disable_interaction_hists) {
CORSIKA_LOG_INFO("Saving interaction histograms");
auto const hists = heCounted.getHistogram() + leIntCounted.getHistogram();
// directory for output of interaction histograms
string const outdir(app["--filename"]->as<std::string>() + "/interaction_hist");
boost::filesystem::create_directories(outdir);
string const labHist_file = outdir + "/inthist_lab_" + to_string(i_shower) + ".npz";
string const cMSHist_file = outdir + "/inthist_cms_" + to_string(i_shower) + ".npz";
save_hist(hists.labHist(), labHist_file, true);
save_hist(hists.CMSHist(), cMSHist_file, true);
}
}
// and finalize the output on disk
output.endOfLibrary();
return EXIT_SUCCESS;
}
# Copyright (c) 2012 - 2017, Lars Bilke
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its contributors
# may be used to endorse or promote products derived from this software without
# specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR
# ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
# ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# CHANGES:
#
# 2012-01-31, Lars Bilke
# - Enable Code Coverage
#
# 2013-09-17, Joakim Söderberg
# - Added support for Clang.
# - Some additional usage instructions.
#
# 2016-02-03, Lars Bilke
# - Refactored functions to use named parameters
#
# 2017-06-02, Lars Bilke
# - Merged with modified version from github.com/ufz/ogs
#
#
# USAGE:
#
# 1. Copy this file into your cmake modules path.
#
# 2. Add the following line to your CMakeLists.txt:
# include(CodeCoverage)
#
# 3. Append necessary compiler flags:
# APPEND_COVERAGE_COMPILER_FLAGS()
#
# 4. If you need to exclude additional directories from the report, specify them
# using the COVERAGE_LCOV_EXCLUDES variable before calling SETUP_TARGET_FOR_COVERAGE_LCOV.
# Example:
# set(COVERAGE_LCOV_EXCLUDES 'dir1/*' 'dir2/*')
#
# 5. Use the functions described below to create a custom make target which
# runs your test executable and produces a code coverage report.
#
# 6. Build a Debug build:
# cmake -DCMAKE_BUILD_TYPE=Debug ..
# make
# make my_coverage_target
#
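#
# Example (a minimal sketch; "unit_tests" and the exclude patterns are placeholders):
#
#     include(CodeCoverage)
#     APPEND_COVERAGE_COMPILER_FLAGS()
#     set(COVERAGE_LCOV_EXCLUDES 'externals/*' 'tests/*')
#     SETUP_TARGET_FOR_COVERAGE_LCOV(
#         NAME coverage
#         EXECUTABLE unit_tests
#         DEPENDENCIES unit_tests
#     )
#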
include(CMakeParseArguments)
# Check prereqs
find_program( GCOV_PATH gcov )
find_program( LCOV_PATH NAMES lcov lcov.bat lcov.exe lcov.perl)
find_program( GENHTML_PATH NAMES genhtml genhtml.perl genhtml.bat )
find_program( GCOVR_PATH gcovr PATHS ${CMAKE_SOURCE_DIR}/scripts/test)
find_program( SIMPLE_PYTHON_EXECUTABLE python )
if(NOT GCOV_PATH)
message(FATAL_ERROR "gcov not found! Aborting...")
endif() # NOT GCOV_PATH
if("${CMAKE_CXX_COMPILER_ID}" MATCHES "(Apple)?[Cc]lang")
if("${CMAKE_CXX_COMPILER_VERSION}" VERSION_LESS 3)
message(FATAL_ERROR "Clang version must be 3.0.0 or greater! Aborting...")
endif()
elseif(NOT CMAKE_COMPILER_IS_GNUCXX)
message(FATAL_ERROR "Compiler is not GNU gcc! Aborting...")
endif()
set(COVERAGE_COMPILER_FLAGS "-g -O0 --coverage -fprofile-arcs -ftest-coverage"
CACHE INTERNAL "")
set(CMAKE_CXX_FLAGS_COVERAGE
${COVERAGE_COMPILER_FLAGS}
CACHE STRING "Flags used by the C++ compiler during coverage builds."
FORCE )
set(CMAKE_C_FLAGS_COVERAGE
${COVERAGE_COMPILER_FLAGS}
CACHE STRING "Flags used by the C compiler during coverage builds."
FORCE )
set(CMAKE_EXE_LINKER_FLAGS_COVERAGE
""
CACHE STRING "Flags used for linking binaries during coverage builds."
FORCE )
set(CMAKE_SHARED_LINKER_FLAGS_COVERAGE
""
CACHE STRING "Flags used by the shared libraries linker during coverage builds."
FORCE )
mark_as_advanced(
CMAKE_CXX_FLAGS_COVERAGE
CMAKE_C_FLAGS_COVERAGE
CMAKE_EXE_LINKER_FLAGS_COVERAGE
CMAKE_SHARED_LINKER_FLAGS_COVERAGE )
if(NOT CMAKE_BUILD_TYPE STREQUAL "Debug")
message(WARNING "Code coverage results with an optimised (non-Debug) build may be misleading")
endif() # NOT CMAKE_BUILD_TYPE STREQUAL "Debug"
if(CMAKE_C_COMPILER_ID STREQUAL "GNU")
link_libraries(gcov)
else()
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} --coverage")
endif()
# Defines a target for running and collecting code coverage information
# Builds dependencies, runs the given executable and outputs reports.
# NOTE! The executable should always have a ZERO as exit code otherwise
# the coverage generation will not complete.
#
# SETUP_TARGET_FOR_COVERAGE_LCOV(
# NAME testrunner_coverage # New target name
# EXECUTABLE testrunner -j ${PROCESSOR_COUNT} # Executable in PROJECT_BINARY_DIR
# DEPENDENCIES testrunner # Dependencies to build first
# )
function(SETUP_TARGET_FOR_COVERAGE_LCOV)
set(options NONE)
set(oneValueArgs NAME)
set(multiValueArgs EXECUTABLE EXECUTABLE_ARGS DEPENDENCIES)
cmake_parse_arguments(Coverage "${options}" "${oneValueArgs}" "${multiValueArgs}" ${ARGN})
if(NOT LCOV_PATH)
message(FATAL_ERROR "lcov not found! Aborting...")
endif() # NOT LCOV_PATH
if(NOT GENHTML_PATH)
message(FATAL_ERROR "genhtml not found! Aborting...")
endif() # NOT GENHTML_PATH
# Setup target
add_custom_target(${Coverage_NAME}
# Cleanup lcov
COMMAND ${LCOV_PATH} --gcov-tool ${GCOV_PATH} -directory . --zerocounters
# Create baseline to make sure untouched files show up in the report
COMMAND ${LCOV_PATH} --gcov-tool ${GCOV_PATH} -c -i -d . -o ${Coverage_NAME}.base
# Run tests
COMMAND ${Coverage_EXECUTABLE}
# Capturing lcov counters and generating report
COMMAND ${LCOV_PATH} --gcov-tool ${GCOV_PATH} --directory . --capture --output-file ${Coverage_NAME}.info
# add baseline counters
COMMAND ${LCOV_PATH} --gcov-tool ${GCOV_PATH} -a ${Coverage_NAME}.base -a ${Coverage_NAME}.info --output-file ${Coverage_NAME}.total
COMMAND ${LCOV_PATH} --gcov-tool ${GCOV_PATH} --remove ${Coverage_NAME}.total ${COVERAGE_LCOV_EXCLUDES} --output-file ${PROJECT_BINARY_DIR}/${Coverage_NAME}.info.cleaned
COMMAND ${GENHTML_PATH} -o ${Coverage_NAME} ${PROJECT_BINARY_DIR}/${Coverage_NAME}.info.cleaned
COMMAND ${CMAKE_COMMAND} -E remove ${Coverage_NAME}.base ${Coverage_NAME}.total ${PROJECT_BINARY_DIR}/${Coverage_NAME}.info.cleaned
WORKING_DIRECTORY ${PROJECT_BINARY_DIR}
DEPENDS ${Coverage_DEPENDENCIES}
COMMENT "Resetting code coverage counters to zero.\nProcessing code coverage counters and generating report."
)
# Show where to find the lcov info report
add_custom_command(TARGET ${Coverage_NAME} POST_BUILD
COMMAND ;
COMMENT "Lcov code coverage info report saved in ${Coverage_NAME}.info."
)
# Show info where to find the report
add_custom_command(TARGET ${Coverage_NAME} POST_BUILD
COMMAND ;
COMMENT "Open ./${Coverage_NAME}/index.html in your browser to view the coverage report."
)
endfunction() # SETUP_TARGET_FOR_COVERAGE_LCOV
# Defines a target for running and collecting code coverage information
# Builds dependencies, runs the given executable and outputs reports.
# NOTE! The executable should always have a ZERO as exit code otherwise
# the coverage generation will not complete.
#
# SETUP_TARGET_FOR_COVERAGE_GCOVR_XML(
# NAME ctest_coverage # New target name
# EXECUTABLE ctest -j ${PROCESSOR_COUNT} # Executable in PROJECT_BINARY_DIR
# DEPENDENCIES executable_target # Dependencies to build first
# )
function(SETUP_TARGET_FOR_COVERAGE_GCOVR_XML)
set(options NONE)
set(oneValueArgs NAME)
set(multiValueArgs EXECUTABLE EXECUTABLE_ARGS DEPENDENCIES)
cmake_parse_arguments(Coverage "${options}" "${oneValueArgs}" "${multiValueArgs}" ${ARGN})
if(NOT SIMPLE_PYTHON_EXECUTABLE)
message(FATAL_ERROR "python not found! Aborting...")
endif() # NOT SIMPLE_PYTHON_EXECUTABLE
if(NOT GCOVR_PATH)
message(FATAL_ERROR "gcovr not found! Aborting...")
endif() # NOT GCOVR_PATH
# Combine excludes to several -e arguments
set(GCOVR_EXCLUDES "")
foreach(EXCLUDE ${COVERAGE_GCOVR_EXCLUDES})
list(APPEND GCOVR_EXCLUDES "-e")
list(APPEND GCOVR_EXCLUDES "${EXCLUDE}")
endforeach()
add_custom_target(${Coverage_NAME}
# Run tests
${Coverage_EXECUTABLE}
# Running gcovr
COMMAND ${GCOVR_PATH} --xml
-r ${PROJECT_SOURCE_DIR} ${GCOVR_EXCLUDES}
--object-directory=${PROJECT_BINARY_DIR}
-o ${Coverage_NAME}.xml
WORKING_DIRECTORY ${PROJECT_BINARY_DIR}
DEPENDS ${Coverage_DEPENDENCIES}
COMMENT "Running gcovr to produce Cobertura code coverage report."
)
# Show info where to find the report
add_custom_command(TARGET ${Coverage_NAME} POST_BUILD
COMMAND ;
COMMENT "Cobertura code coverage report saved in ${Coverage_NAME}.xml."
)
endfunction() # SETUP_TARGET_FOR_COVERAGE_GCOVR_XML
# Defines a target for running and collecting code coverage information
# Builds dependencies, runs the given executable and outputs reports.
# NOTE! The executable should always have a ZERO as exit code otherwise
# the coverage generation will not complete.
#
# SETUP_TARGET_FOR_COVERAGE_GCOVR_HTML(
# NAME ctest_coverage # New target name
# EXECUTABLE ctest -j ${PROCESSOR_COUNT} # Executable in PROJECT_BINARY_DIR
# DEPENDENCIES executable_target # Dependencies to build first
# )
function(SETUP_TARGET_FOR_COVERAGE_GCOVR_HTML)
set(options NONE)
set(oneValueArgs NAME)
set(multiValueArgs EXECUTABLE EXECUTABLE_ARGS DEPENDENCIES)
cmake_parse_arguments(Coverage "${options}" "${oneValueArgs}" "${multiValueArgs}" ${ARGN})
if(NOT SIMPLE_PYTHON_EXECUTABLE)
message(FATAL_ERROR "python not found! Aborting...")
endif() # NOT SIMPLE_PYTHON_EXECUTABLE
if(NOT GCOVR_PATH)
message(FATAL_ERROR "gcovr not found! Aborting...")
endif() # NOT GCOVR_PATH
# Combine excludes to several -e arguments
set(GCOVR_EXCLUDES "")
foreach(EXCLUDE ${COVERAGE_GCOVR_EXCLUDES})
list(APPEND GCOVR_EXCLUDES "-e")
list(APPEND GCOVR_EXCLUDES "${EXCLUDE}")
endforeach()
add_custom_target(${Coverage_NAME}
# Run tests
${Coverage_EXECUTABLE}
# Create folder
COMMAND ${CMAKE_COMMAND} -E make_directory ${PROJECT_BINARY_DIR}/${Coverage_NAME}
# Running gcovr
COMMAND ${GCOVR_PATH} --html --html-details
-r ${PROJECT_SOURCE_DIR} ${GCOVR_EXCLUDES}
--object-directory=${PROJECT_BINARY_DIR}
-o ${Coverage_NAME}/index.html
WORKING_DIRECTORY ${PROJECT_BINARY_DIR}
DEPENDS ${Coverage_DEPENDENCIES}
COMMENT "Running gcovr to produce HTML code coverage report."
)
# Show info where to find the report
add_custom_command(TARGET ${Coverage_NAME} POST_BUILD
COMMAND ;
COMMENT "Open ./${Coverage_NAME}/index.html in your browser to view the coverage report."
)
endfunction() # SETUP_TARGET_FOR_COVERAGE_GCOVR_HTML
function(APPEND_COVERAGE_COMPILER_FLAGS)
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${COVERAGE_COMPILER_FLAGS}" PARENT_SCOPE)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${COVERAGE_COMPILER_FLAGS}" PARENT_SCOPE)
message(STATUS "Appending code coverage compiler flags: ${COVERAGE_COMPILER_FLAGS}")
endfunction() # APPEND_COVERAGE_COMPILER_FLAGS
#
# (c) Copyright 2020 CORSIKA Project, corsika-project@lists.kit.edu
#
# See file AUTHORS for a list of contributors.
#
# This software is distributed under the terms of the 3-clause BSD license.
# See file LICENSE for a full version of the license.
#
# - find Conex
#
# This module defines
# CONEX_PREFIX
# CONEX_INCLUDE_DIR
# CONEX_LIBRARY
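#
# Typical use (a minimal sketch; the path and "MyTarget" are placeholders, and this
# module is assumed to be on CMAKE_MODULE_PATH):
#
#   set (WITH_CONEX /path/to/conex)   # or CONEXROOT / CONEX_ROOT, also via the environment
#   find_package (CONEX)
#   target_include_directories (MyTarget PRIVATE ${CONEX_INCLUDE_DIR})
#   target_link_libraries (MyTarget PRIVATE ${CONEX_LIBRARY})
#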
set (SEARCH_conex_
${WITH_CONEX}
${CONEXROOT}
${CONEX_ROOT}
$ENV{CONEXROOT}
$ENV{CONEX_ROOT}
)
find_path (CONEX_PREFIX
NAMES lib/${CMAKE_SYSTEM_NAME}
PATHS ${SEARCH_conex_}
DOC "The CONEX root directory"
NO_DEFAULT_PATH
)
find_path (CONEX_INCLUDE_DIR
NAMES ConexDynamicInterface.h
PATHS ${CONEX_PREFIX}
PATH_SUFFIXES src
DOC "The CONEX include directory"
)
find_library (CONEX_LIBRARY
NAMES libCONEXdynamic.a
PATHS ${CONEX_PREFIX}
PATH_SUFFIXES lib/${CMAKE_SYSTEM_NAME}
DOC "The CONEX library"
)
# standard cmake infrastructure:
include (FindPackageHandleStandardArgs)
find_package_handle_standard_args (CONEX
"Did not find system-level CONEX."
CONEX_INCLUDE_DIR CONEX_LIBRARY CONEX_PREFIX)
mark_as_advanced (CONEX_INCLUDE_DIR CONEX_LIBRARY CONEX_PREFIX)
#
# (c) Copyright 2018 CORSIKA Project, corsika-project@lists.kit.edu
#
# See file AUTHORS for a list of contributors.
#
# This software is distributed under the terms of the 3-clause BSD license.
# See file LICENSE for a full version of the license.
#
add_library (PhysUnits INTERFACE)
target_compile_options (PhysUnits
INTERFACE
-I${CMAKE_SOURCE_DIR}/externals/phys_units
)
set (PhysUnits_FOUND True)
#
# (c) Copyright 2018 CORSIKA Project, corsika-project@lists.kit.edu
#
# See file AUTHORS for a list of contributors.
#
# This software is distributed under the terms of the 3-clause BSD license.
# See file LICENSE for a full version of the license.
#
#################################################
#
# run pythia8-config and interpret result
#
function (_Pythia8_CONFIG_ option variable type doc)
execute_process (COMMAND ${Pythia8_CONFIG} ${option}
OUTPUT_VARIABLE _local_out_
RESULT_VARIABLE _local_res_)
string (REGEX REPLACE "\n$" "" _local_out_ "${_local_out_}")
if (NOT ${_local_res_} EQUAL 0)
message ("Error in running ${Pythia8_CONFIG} ${option}")
else ()
set (${variable} "${_local_out_}" CACHE ${type} ${doc})
endif ()
endfunction (_Pythia8_CONFIG_)
#################################################
#
# take directory and assume standard install layout
#
function (_Pythia8_LAYOUT_ dir variable type doc)
set (${variable} "${dir}" CACHE ${type} ${doc})
endfunction (_Pythia8_LAYOUT_)
#################################################
#
# Searches for Pythia8 on the system. Expects pythia8-config in PATH, or a typical installation location
#
# This module defines
# HAVE_Pythia8
# Pythia8_INCLUDE_DIR where to locate Pythia.h file
# Pythia8_LIBRARY where to find the libpythia8 library
# Pythia8_LIBRARIES (not cached) the libraries to link against to use Pythia8
# Pythia8_VERSION version of Pythia8 if found
#
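#
# Typical use (a minimal sketch; the path is a placeholder, and this module is
# assumed to be on CMAKE_MODULE_PATH):
#
#   set (PYTHIA8_ROOT /path/to/pythia8)   # or any of the location hints listed below
#   find_package (Pythia8)
#   if (HAVE_Pythia8)
#     include_directories (${Pythia8_INCLUDE_DIR})
#     link_directories (${Pythia8_LIBRARY})   # Pythia8_LIBRARY holds the library directory
#   endif ()
#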
set (_SEARCH_Pythia8_
${PROJECT_BINARY_DIR}/ThirdParty/pythia8-install
${PYTHIA8}
$ENV{PYTHIA8}
${PYTHIA8DIR}
$ENV{PYTHIA8DIR}
${PYTHIA8_ROOT}
$ENV{PYTHIA8_ROOT}
${PYTHIA8_DIR}
$ENV{PYTHIA8_DIR}
${Pythia8_DIR}
$ENV{Pythia8_DIR}
/opt/pythia8
)
find_file (Pythia8_Pythia_h_LOC
NAME Pythia.h
PATHS ${_SEARCH_Pythia8_}
PATH_SUFFIXES include/Pythia8
DOC "The location of the Pythia8/Pythia.h script"
REQUIRED)
if ("${Pythia8_Pythia_h_LOC}" STREQUAL "Pythia8_Pythia_h_LOC-NOTFOUND")
message (FATAL_ERROR "Did not find SYSTEM-level Pythia8 in: \"${_SEARCH_Pythia8_}\"")
endif ()
string (REPLACE "/include/Pythia8/Pythia.h" "" Pythia8_DIR ${Pythia8_Pythia_h_LOC})
set (Pythia8_CONFIG ${Pythia8_DIR}/bin/pythia-config)
if (Pythia8_CONFIG)
set (HAVE_Pythia8 1 CACHE BOOL "presence of pythia8, found via pythia8-config")
# pythia-config is not relocatable
#_Pythia8_CONFIG_ ("--prefix" Pythia8_PREFIX PATH "location of pythia8 installation")
#_Pythia8_CONFIG_ ("--includedir" Pythia8_INCLUDE_DIR PATH "pythia8 include directory")
#_Pythia8_CONFIG_ ("--libdir" Pythia8_LIBRARY STRING "the pythia8 libs")
#_Pythia8_CONFIG_ ("--datadir" Pythia8_DATA_DIR PATH "the pythia8 data dir")
_Pythia8_LAYOUT_ ("${Pythia8_DIR}" Pythia8_PREFIX PATH "location of pythia8 installation")
_Pythia8_LAYOUT_ ("${Pythia8_DIR}/include" Pythia8_INCLUDE_DIR PATH "pythia8 include directory")
_Pythia8_LAYOUT_ ("${Pythia8_DIR}/lib" Pythia8_LIBRARY STRING "the pythia8 libs")
_Pythia8_LAYOUT_ ("${Pythia8_DIR}/share/Pythia8/xmldoc" Pythia8_DATA_DIR PATH "the pythia8 data dir")
# read the config string
file (READ "${Pythia8_INCLUDE_DIR}/Pythia8/Pythia.h" Pythia8_TMP_PYTHIA_H)
string (REGEX MATCH "#define PYTHIA_VERSION_INTEGER ([0-9]*)" _ ${Pythia8_TMP_PYTHIA_H})
set (Pythia8_VERSION ${CMAKE_MATCH_1})
message (STATUS "Found Pythia8 version: ${Pythia8_VERSION}")
endif ()
# standard cmake infrastructure:
include (FindPackageHandleStandardArgs)
find_package_handle_standard_args (Pythia8 REQUIRED_VARS Pythia8_PREFIX Pythia8_INCLUDE_DIR Pythia8_LIBRARY VERSION_VAR Pythia8_VERSION)
mark_as_advanced (Pythia8_DATA_DIR Pythia8_INCLUDE_DIR Pythia8_LIBRARY)
#
# (c) Copyright 2018 CORSIKA Project, corsika-project@lists.kit.edu
#
# See file AUTHORS for a list of contributors.
#
# This software is distributed under the terms of the 3-clause BSD license.
# See file LICENSE for a full version of the license.
#
# Look for an executable called sphinx-build
find_program (SPHINX_EXECUTABLE
NAMES sphinx-build
DOC "Path to sphinx-build executable")
include (FindPackageHandleStandardArgs)
# Handle standard arguments to find_package like REQUIRED and QUIET
find_package_handle_standard_args (Sphinx
"Failed to find sphinx-build executable"
SPHINX_EXECUTABLE)
#
# (c) Copyright 2018 CORSIKA Project, corsika-project@lists.kit.edu
#
# See file AUTHORS for a list of contributors.
#
# This software is distributed under the terms of the 3-clause BSD license.
# See file LICENSE for a full version of the license.
#
set (CORSIKA8_VERSION @c8_version@)
@PACKAGE_INIT@
#+++++++++++++++++++++++++++++
# Setup hardware- and infrastructure-dependent defines and other settings
#
include (${CMAKE_CURRENT_LIST_DIR}/corsikaDefines.cmake)
#+++++++++++++++++++++++++++
# Options
#
option (WITH_HISTORY "Flag to switch on/off HISTORY" ON)
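#
# Example (a hedged sketch): a consuming project can switch history support off
# before the package is located, e.g.
#
#   set (WITH_HISTORY OFF CACHE BOOL "Flag to switch on/off HISTORY")
#   find_package (corsika CONFIG REQUIRED)
#
# The package name "corsika" follows this exported configuration.
#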
#++++++++++++++++++++++++++++
# General config and flags
#
set (CMAKE_CXX_STANDARD @CMAKE_CXX_STANDARD@)
set (CMAKE_CXX_EXTENSIONS @CMAKE_CXX_EXTENSIONS@)
set (COMPILE_OPTIONS @COMPILE_OPTIONS@)
set (CMAKE_VERBOSE_MAKEFILE @CMAKE_VERBOSE_MAKEFILE@)
#+++++++++++++++++++++++++++++
# external dependencies
# same list as top-level CML.txt, except for Catch2 (not needed here)
#
find_package(Boost COMPONENTS filesystem REQUIRED)
find_package(CLI11 REQUIRED)
find_package(Eigen3 REQUIRED)
find_package(spdlog REQUIRED)
find_package(yaml-cpp REQUIRED)
find_package(Arrow REQUIRED)
find_package(PROPOSAL REQUIRED)
#+++++++++++++++++++++++++++++
# Import Pythia8
# since we always import pythia (ExternalProject_Add) we have to
# import here, too.
#
add_library (C8::ext::pythia8 STATIC IMPORTED GLOBAL)
set_target_properties (
C8::ext::pythia8 PROPERTIES
IMPORTED_LOCATION @Pythia8_LIBDIR@/libpythia8.a
IMPORTED_LINK_INTERFACE_LIBRARIES dl
INTERFACE_INCLUDE_DIRECTORIES @Pythia8_INCDIR@
)
set (Pythia8_FOUND @Pythia8_FOUND@)
message (STATUS "Pythia8 at: @Pythia8_PREFIX@")
#+++++++++++++++++++++++++++++
# Import TAUOLA
# since we always import TAUOLA (ExternalProject_Add) we have to
# import here, too.
#
add_library (C8::ext::tauola::CxxInterface STATIC IMPORTED GLOBAL)
add_library (C8::ext::tauola::Fortran STATIC IMPORTED GLOBAL)
add_library(C8::ext::tauola INTERFACE IMPORTED)
set_property(TARGET C8::ext::tauola
PROPERTY
INTERFACE_LINK_LIBRARIES
C8::ext::tauola::CxxInterface
C8::ext::tauola::Fortran)
set_target_properties (
C8::ext::tauola::CxxInterface PROPERTIES
IMPORTED_LOCATION @TAUOLA_LIBDIR@/libTauolaCxxInterface.a
IMPORTED_LINK_INTERFACE_LIBRARIES dl
INTERFACE_INCLUDE_DIRECTORIES @TAUOLA_INCDIR@
)
set_target_properties (
C8::ext::tauola::Fortran PROPERTIES
IMPORTED_LOCATION @TAUOLA_LIBDIR@/libTauolaFortran.a
IMPORTED_LINK_INTERFACE_LIBRARIES dl
INTERFACE_INCLUDE_DIRECTORIES @TAUOLA_INCDIR@
)
set (TAUOLA_FOUND @TAUOLA_FOUND@)
message (STATUS "TAUOLA at: @TAUOLA_PREFIX@")
#++++++++++++++++++++++++++++++
# import CORSIKA8
#
include ("${CMAKE_CURRENT_LIST_DIR}/corsikaTargets.cmake")
check_required_components (corsika)
#+++++++++++++++++++++++++++++++
# add further definitions / options
#
if (WITH_HISTORY)
set_property (
TARGET CORSIKA8::CORSIKA8
APPEND PROPERTY
INTERFACE_COMPILE_DEFINITIONS "WITH_HISTORY"
)
endif (WITH_HISTORY)
#+++++++++++++++++++++++++++++++
#
# final summary output
#
include (FeatureSummary)
add_feature_info (HISTORY WITH_HISTORY "Full information on cascade history for particles.")
feature_summary (WHAT ALL)
#
# (c) Copyright 2018 CORSIKA Project, corsika-project@lists.kit.edu
#
# See file AUTHORS for a list of contributors.
#
# This software is distributed under the terms of the 3-clause BSD license.
# See file LICENSE for a full version of the license.
#
#+++++++++++++++++++++++++++++
# as long as there are still modules using it:
#
enable_language (Fortran)
set (CMAKE_Fortran_FLAGS "-std=legacy -Wfunction-elimination")
#+++++++++++++++++++++++++++++
# Build types settings
#
# setup coverage build type
set (CMAKE_CXX_FLAGS_COVERAGE "-g --coverage")
set (CMAKE_EXE_LINKER_FLAGS_COVERAGE "--coverage")
set (CMAKE_SHARED_LINKER_FLAGS_COVERAGE "--coverage")
# set a flag to inform code that we are in debug mode
set (CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} -D_C8_DEBUG_")
#+++++++++++++++++++++++++++++
# Build type selection
#
# Set the possible values of build type for cmake-gui and command line check
set (ALLOWED_BUILD_TYPES Debug Release MinSizeRel RelWithDebInfo Coverage)
set_property (CACHE CMAKE_BUILD_TYPE PROPERTY STRINGS ${ALLOWED_BUILD_TYPES})
set (DEFAULT_BUILD_TYPE "Release")
if (EXISTS "${CMAKE_SOURCE_DIR}/.git")
set (DEFAULT_BUILD_TYPE "Debug")
endif ()
if (NOT CMAKE_BUILD_TYPE AND NOT CMAKE_CONFIGURATION_TYPES)
message (STATUS "Setting build type to '${DEFAULT_BUILD_TYPE}' as no other was specified.")
set (CMAKE_BUILD_TYPE "${DEFAULT_BUILD_TYPE}" CACHE
STRING "Choose the type of build." FORCE)
else (NOT CMAKE_BUILD_TYPE AND NOT CMAKE_CONFIGURATION_TYPES)
# Ignore capitalization when build type is selected manually and check for valid setting
string (TOLOWER ${CMAKE_BUILD_TYPE} SELECTED_LOWER)
string (TOLOWER "${ALLOWED_BUILD_TYPES}" BUILD_TYPES_LOWER)
if (NOT SELECTED_LOWER IN_LIST BUILD_TYPES_LOWER)
message (FATAL_ERROR "Unknown build type: ${CMAKE_BUILD_TYPE} [allowed: ${ALLOWED_BUILD_TYPES}]")
endif ()
message (STATUS "Build type is: ${CMAKE_BUILD_TYPE}")
endif (NOT CMAKE_BUILD_TYPE AND NOT CMAKE_CONFIGURATION_TYPES)
#
# Floating point exception support - select implementation to use
#
include (CheckIncludeFileCXX)
CHECK_INCLUDE_FILE_CXX ("fenv.h" HAS_FEENABLEEXCEPT)
if (HAS_FEENABLEEXCEPT) # FLOATING_POINT_ENVIRONMENT
set (CORSIKA_HAS_FEENABLEEXCEPT 1)
set_property (DIRECTORY ${CMAKE_HOME_DIRECTORY} APPEND PROPERTY COMPILE_DEFINITIONS "CORSIKA_HAS_FEENABLEEXCEPT")
endif ()
#
# General OS Detection
#
if (${CMAKE_SYSTEM_NAME} MATCHES "Windows")
set (CORSIKA_OS_WINDOWS TRUE)
set (CORSIKA_OS "Windows")
elseif (${CMAKE_SYSTEM_NAME} MATCHES "Linux")
set (CORSIKA_OS_LINUX TRUE)
set (CORSIKA_OS "Linux")
# check for RedHat/CentOS compiler from the software collections (scl)
string (FIND ${CMAKE_CXX_COMPILER} "/opt/rh/devtoolset-" index)
if (${index} EQUAL 0)
set (CORSIKA_SCL_CXX TRUE)
endif ()
elseif (${CMAKE_SYSTEM_NAME} MATCHES "Darwin")
set (CORSIKA_OS_MAC TRUE)
set (CORSIKA_OS "Mac")
endif ()