Running the asap3 Python module
Hello,
I installed the ase and asap3 modules via pip, but running a simple simulation gives the following error:
milias@hydra.jinr.ru:~/work/projects/open-collection/theoretical_chemistry/software/asap3/servers/hydra_jinr_ru/simpleMD/. module list
Currently Loaded Modulefiles:
1) GVR/v1.0-1 2) openmpi/v1.8.8-1 3) Python/v3.6.5
milias@hydra.jinr.ru:~/work/projects/open-collection/theoretical_chemistry/software/asap3/servers/hydra_jinr_ru/simpleMD/. python SimpleMD.py
[space04.hydra.local:25878] mca: base: component_find: unable to open /cvmfs/hybrilit.jinr.ru/sw/slc7_x86-64/openmpi/v1.8.8-1/lib/openmpi/mca_shmem_mmap: /cvmfs/hybrilit.jinr.ru/sw/slc7_x86-64/openmpi/v1.8.8-1/lib/openmpi/mca_shmem_mmap.so: undefined symbol: opal_shmem_base_framework (ignored)
[space04.hydra.local:25878] mca: base: component_find: unable to open /cvmfs/hybrilit.jinr.ru/sw/slc7_x86-64/openmpi/v1.8.8-1/lib/openmpi/mca_shmem_posix: /cvmfs/hybrilit.jinr.ru/sw/slc7_x86-64/openmpi/v1.8.8-1/lib/openmpi/mca_shmem_posix.so: undefined symbol: opal_shmem_base_framework (ignored)
[space04.hydra.local:25878] mca: base: component_find: unable to open /cvmfs/hybrilit.jinr.ru/sw/slc7_x86-64/openmpi/v1.8.8-1/lib/openmpi/mca_shmem_sysv: /cvmfs/hybrilit.jinr.ru/sw/slc7_x86-64/openmpi/v1.8.8-1/lib/openmpi/mca_shmem_sysv.so: undefined symbol: opal_shmem_base_framework (ignored)
--------------------------------------------------------------------------
It looks like opal_init failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during opal_init; some of which are due to configuration or
environment problems. This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):
opal_shmem_base_select failed
--> Returned value -1 instead of OPAL_SUCCESS
--------------------------------------------------------------------------
--------------------------------------------------------------------------
It looks like orte_init failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during orte_init; some of which are due to configuration or
environment problems. This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):
opal_init failed
--> Returned value Error (-1) instead of ORTE_SUCCESS
--------------------------------------------------------------------------
--------------------------------------------------------------------------
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems. This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):
ompi_mpi_init: ompi_rte_init failed
--> Returned "Error" (-1) instead of "Success" (0)
--------------------------------------------------------------------------
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
[space04.hydra.local:25878] Local abort before MPI_INIT completed successfully; not able to aggregate error messages, and not able to guarantee that all other processes were killed!
milias@hydra.jinr.ru:~/work/projects/open-collection/theoretical_chemistry/software/asap3/servers/hydra_jinr_ru/simpleMD/.