Dataset columns and observed ranges:
question_id: int64 (values 82.3k to 79.7M)
title_clean: string (lengths 15 to 158)
body_clean: string (lengths 62 to 28.5k)
full_text: string (lengths 95 to 28.5k)
tags: string (lengths 4 to 80)
score: int64 (values 0 to 1.15k)
view_count: int64 (values 22 to 1.62M)
answer_count: int64 (values 0 to 30)
link: string (lengths 58 to 125)
24,079,252
How do I run pentaho data integration transformation from repository?
I have a Pentaho Data Integration repository in a Postgres database. I want to run a job stored in this repository from a remote server. How can I run the transformation from the remote server using pan.sh? (A command-line sketch follows this record.)
tags: pentaho, redhat
score: 2 | view_count: 1,386 | answer_count: 1
https://stackoverflow.com/questions/24079252/how-do-i-run-pentaho-data-integration-transformation-from-repository
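For the Pentaho question above, a minimal command-line sketch of running a repository transformation with pan.sh (and a job with kitchen.sh). The repository name, credentials, directory, and transformation/job names are placeholders, and the repository connection itself has to be defined in repositories.xml on the machine where the command runs:

```bash
# Hypothetical repository name, credentials, and object names -- replace with your own.
# pan.sh runs transformations; kitchen.sh runs jobs stored in the same repository.
./pan.sh -rep=my_pg_repo -user=admin -pass=secret \
         -dir=/my_folder -trans=my_transformation -level=Basic

./kitchen.sh -rep=my_pg_repo -user=admin -pass=secret \
             -dir=/my_folder -job=my_job -level=Basic
```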
23,236,910
error while installing respinned/customized centos
I am following this link with the aim of creating a custom CentOS ISO with some extra packages downloaded from the internet (say ABCD.rpm). [URL] I customized the ISO by "only copying" the ABCD.rpm package into the /Packages directory. Now when I boot from the ISO via kickstart, I get the following error. Any idea where I am going wrong?
tags: linux, linux-kernel, centos, redhat, linux-distro
score: 2 | view_count: 1,876 | answer_count: 3
https://stackoverflow.com/questions/23236910/error-while-installing-respinned-customized-centos
22,949,320
Groovy startup very slow
I have a problem when I start Groovy on one of my Linux machines - it takes about 30 seconds to execute very simple command: groovy -e "" if I run strace on it, here is what I see where it stops and waits: mprotect(0x7fae284e0000, 4096, PROT_NONE) = 0 clone(child_stack=0x7fae285dfff0, flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_PARENT_SETTID|CLONE_CHILD_CLEARTID, parent_tidptr=0x7fae285e09d0, tls=0x7fae285e0700, child_tidptr=0x7fae285e09d0) = 62660 futex(0x7fae285e09d0, FUTEX_WAIT, 62660, NULL <unfinished ...> Is there a way to figure out what it's waiting for and why and how to fix it? I am running Red Hat 6.3, Groovy Version: 2.2.1 JVM: 1.7.0_25 Vendor: Oracle Corporation OS: Linux and here is time command: bin$ time groovy -e "" real 0m22.255s user 0m26.875s sys 0m2.064s EDITED: as per the suggestion, did strace -f, here is what I see: [pid 49451] <... gettimeofday resumed> {1397076179, 998954}, NULL) = 0 [pid 49482] clock_gettime(CLOCK_MONOTONIC, <unfinished ...> [pid 49451] gettimeofday( <unfinished ...> [pid 49482] <... clock_gettime resumed> {10719052, 15135866}) = 0 [pid 49451] <... gettimeofday resumed> {1397076180, 871}, NULL) = 0 [pid 49482] gettimeofday({1397076180, 2272}, NULL) = 0 [pid 49451] gettimeofday( <unfinished ...> [pid 49482] futex(0x7fde3c145554, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 1, {1397076180, 52272000}, ffffffff <unfinished ...> [pid 49451] <... gettimeofday resumed> {1397076180, 3226}, NULL) = 0 [pid 49451] gettimeofday({1397076180, 5444}, NULL) = 0 [pid 49451] gettimeofday({1397076180, 7123}, NULL) = 0 [pid 49451] gettimeofday({1397076180, 8765}, NULL) = 0 [pid 49451] gettimeofday({1397076180, 9766}, NULL) = 0 [pid 49451] gettimeofday({1397076180, 10650}, NULL) = 0 [pid 49451] gettimeofday({1397076180, 11611}, NULL) = 0 [pid 49451] gettimeofday({1397076180, 12648}, NULL) = 0 [pid 49451] gettimeofday({1397076180, 13569}, NULL) = 0 [pid 49451] gettimeofday({1397076180, 14450}, NULL) = 0 [pid 49451] gettimeofday({1397076180, 16851}, NULL) = 0 [pid 49451] gettimeofday({1397076180, 17891}, NULL) = 0 [pid 49451] gettimeofday({1397076180, 19012}, NULL) = 0 [pid 49451] gettimeofday({1397076180, 20415}, NULL) = 0 [pid 49451] gettimeofday({1397076180, 21734}, NULL) = 0 looks like it's waiting for gettimeofday, I see a lot of this in the trace. and here is how it ends: [pid 49475] gettimeofday({1397076182, 86016}, NULL) = 0 [pid 49475] futex(0x7fde3c008754, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x7fde3c008750, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1 [pid 49451] <... futex resumed> ) = 0 [pid 49475] madvise(0x7fddf09d6000, 1028096, MADV_DONTNEED <unfinished ...> [pid 49451] futex(0x7fde3c008728, FUTEX_WAKE_PRIVATE, 1 <unfinished ...> [pid 49475] <... madvise resumed> ) = 0 [pid 49451] <... futex resumed> ) = 0 [pid 49475] _exit(0) = ? Process 49475 detached [pid 49451] rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0 [pid 49451] unlink("/tmp/mydirectory/49439") = 0 [pid 49451] madvise(0x7fde42dcc000, 1028096, MADV_DONTNEED) = 0 [pid 49451] _exit(0) = ? Process 49451 detached [pid 49439] <... futex resumed> ) = 0 [pid 49439] exit_group(0) = ?
tags: performance, groovy, redhat
score: 2 | view_count: 1,352 | answer_count: 1
https://stackoverflow.com/questions/22949320/groovy-startup-very-slow
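For the Groovy question above, a heavily hedged guess rather than a diagnosis: user CPU time (about 27 s) exceeding wall-clock time (about 22 s), together with tight futex/gettimeofday loops, is the classic signature of the 2012 Linux leap-second bug that affected older RHEL 6 kernels; entropy starvation is another common cause of slow JVM startup. Both are cheap to check:

```bash
# Leap-second state can be cleared by re-setting the clock (no reboot needed):
sudo date -s "$(date)"

# Low available entropy can make SecureRandom initialisation block at JVM startup:
cat /proc/sys/kernel/random/entropy_avail

# Re-measure afterwards:
time groovy -e ""
```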
21,490,051
bundle install fails on rugged but 'gem install rugged' works
I am trying to install Gitorious on Redhat 5. I am following these instructions: [URL] One of the steps is: rake db:create RAILS_ENV=production That command fails with the foll error message: [ gitorious ] sudo rake db:create RAILS_ENV=production Could not find libdolt-0.33.14 in any of the sources Run bundle install to install missing gems. When I ran 'bundle install' it fails with: 0: makeup (0.4.4) from /usr/local/lib/ruby/gems/2.0.0/specifications/makeup-0.4.4.gemspec Gem::Ext::BuildError: ERROR: Failed to build gem native extension. /usr/local/bin/ruby extconf.rb checking for gmake... yes checking for cmake... yes -- cmake .. -DBUILD_CLAR=OFF -DTHREADSAFE=ON -DBUILD_SHARED_LIBS=OFF -DCMAKE_C_FLAGS=-fPIC Package zlib was not found in the pkg-config search path. Perhaps you should add the directory containing zlib.pc' to the PKG_CONFIG_PATH environment variable Package 'zlib', required by 'libgit2', not found -- /usr/bin/gmake *** extconf.rb failed *** Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options. Provided configuration options: --with-opt-dir --without-opt-dir --with-opt-include --without-opt-include=${opt-dir}/include --with-opt-lib --without-opt-lib=${opt-dir}/lib --with-make-prog --without-make-prog --srcdir=. --curdir --ruby=/usr/local/bin/ruby --with-git2-dir --without-git2-dir --with-git2-include --without-git2-include=${git2-dir}/include --with-git2-lib --without-git2-lib=${git2-dir}/ extconf.rb:16:in sys': ERROR: '/usr/bin/gmake' failed (RuntimeError) from extconf.rb:59:in block (2 levels) in <main>' from extconf.rb:54:in chdir' from extconf.rb:54:in block in <main>' from extconf.rb:51:in chdir' from extconf.rb:51:in <main>' extconf failed, exit code 1 Gem files will remain installed in /usr/local/lib/ruby/gems/2.0.0/bundler/gems/rugged-5f1b6d177132 for inspection. 
Results logged to /usr/local/lib/ruby/gems/2.0.0/bundler/gems/extensions/x86_64-linux/2.0.0-static/rugged-0.19.0/gem_make.out /usr/local/lib/ruby/site_ruby/2.0.0/rubygems/ext/builder.rb:89:in run' /usr/local/lib/ruby/site_ruby/2.0.0/rubygems/ext/ext_conf_builder.rb:37:in block in build' /usr/local/lib/ruby/2.0.0/tempfile.rb:324:in open' /usr/local/lib/ruby/site_ruby/2.0.0/rubygems/ext/ext_conf_builder.rb:17:in build' /usr/local/lib/ruby/site_ruby/2.0.0/rubygems/ext/builder.rb:161:in block (2 levels) in build_extension' /usr/local/lib/ruby/site_ruby/2.0.0/rubygems/ext/builder.rb:160:in chdir' /usr/local/lib/ruby/site_ruby/2.0.0/rubygems/ext/builder.rb:160:in block in build_extension' /usr/local/lib/ruby/site_ruby/2.0.0/rubygems/ext/builder.rb:159:in synchronize' /usr/local/lib/ruby/site_ruby/2.0.0/rubygems/ext/builder.rb:159:in build_extension' /usr/local/lib/ruby/site_ruby/2.0.0/rubygems/ext/builder.rb:198:in block in build_extensions' /usr/local/lib/ruby/site_ruby/2.0.0/rubygems/ext/builder.rb:195:in each' /usr/local/lib/ruby/site_ruby/2.0.0/rubygems/ext/builder.rb:195:in build_extensions' /usr/local/lib/ruby/site_ruby/2.0.0/rubygems/installer.rb:677:in build_extensions' /usr/local/lib/ruby/gems/2.0.0/gems/bundler-1.5.2/lib/bundler/source/path.rb:174:in generate_bin' /usr/local/lib/ruby/gems/2.0.0/gems/bundler-1.5.2/lib/bundler/source/git.rb:161:in install' /usr/local/lib/ruby/gems/2.0.0/gems/bundler-1.5.2/lib/bundler/installer.rb:111:in block in install_gem_from_spec' /usr/local/lib/ruby/gems/2.0.0/gems/bundler-1.5.2/lib/bundler/rubygems_integration.rb:150:in with_build_args' /usr/local/lib/ruby/gems/2.0.0/gems/bundler-1.5.2/lib/bundler/installer.rb:110:in install_gem_from_spec' /usr/local/lib/ruby/gems/2.0.0/gems/bundler-1.5.2/lib/bundler/installer.rb:265:in block in install_sequentially' /usr/local/lib/ruby/2.0.0/forwardable.rb:171:in each' /usr/local/lib/ruby/2.0.0/forwardable.rb:171:in each' /usr/local/lib/ruby/gems/2.0.0/gems/bundler-1.5.2/lib/bundler/installer.rb:264:in install_sequentially' /usr/local/lib/ruby/gems/2.0.0/gems/bundler-1.5.2/lib/bundler/installer.rb:97:in run' /usr/local/lib/ruby/gems/2.0.0/gems/bundler-1.5.2/lib/bundler/installer.rb:15:in install' /usr/local/lib/ruby/gems/2.0.0/gems/bundler-1.5.2/lib/bundler/cli.rb:255:in install' /usr/local/lib/ruby/gems/2.0.0/gems/bundler-1.5.2/lib/bundler/vendor/thor/command.rb:27:in run' /usr/local/lib/ruby/gems/2.0.0/gems/bundler-1.5.2/lib/bundler/vendor/thor/invocation.rb:121:in invoke_command' /usr/local/lib/ruby/gems/2.0.0/gems/bundler-1.5.2/lib/bundler/vendor/thor.rb:363:in dispatch' /usr/local/lib/ruby/gems/2.0.0/gems/bundler-1.5.2/lib/bundler/vendor/thor/base.rb:440:in start' /usr/local/lib/ruby/gems/2.0.0/gems/bundler-1.5.2/lib/bundler/cli.rb:10:in start' /usr/local/lib/ruby/gems/2.0.0/gems/bundler-1.5.2/bin/bundle:20:in block in <top (required)>' /usr/local/lib/ruby/gems/2.0.0/gems/bundler-1.5.2/lib/bundler/friendly_errors.rb:5:in with_friendly_errors' /usr/local/lib/ruby/gems/2.0.0/gems/bundler-1.5.2/bin/bundle:20:in <top (required)>' /usr/local/bin/bundle:23:in load' /usr/local/bin/bundle:23:in <main>' An error occurred while installing rugged (0.19.0), and Bundler cannot continue. Make sure that gem install rugged -v '0.19.0' succeeds before bundling. I did do 'sudo gem install rugged -v '0.19.0'` and it works: [ gitorious ] sudo gem install rugged -v '0.19.0' Building native extensions. This could take a while... 
Successfully installed rugged-0.19.0 Parsing documentation for rugged-0.19.0 unable to convert "\x80" from ASCII-8BIT to UTF-8 for ../../extensions/x86_64-linux/2.0.0-static/rugged-0.19.0/rugged/rugged.so, skipping unable to convert "\x80" from ASCII-8BIT to UTF-8 for lib/rugged/rugged.so, skipping 1 gem installed I tried 'sudo bundle install --verbose' again but it fails the same way. I then created a /usr/lib/pkgconfig/zlib.pc file and did a setenv of the PKG_CONGIF_PATH to add /usr/lib/pkgconfig. zlib.pc: prefix=/usr exec_prefix=/usr libdir=/usr/lib includedir=/usr/include sharedlibdir=/usr/lib Name: zlib Description: zlib compression library Version: 1.2.3 Requires: Libs: -L${libdir} -L${sharedlibdir} -lz Cflags: -I${includedir} I ran 'sudo bundle install --verbose' and it fails the same way... The 'Gemfile' is in the main gitorious directory. Any suggestions?
tags: ruby-on-rails, redhat, gitorious
score: 2 | view_count: 6,062 | answer_count: 2
https://stackoverflow.com/questions/21490051/bundle-install-fails-on-rugged-but-gem-install-rugged-works
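For the rugged question above, a hedged observation: sudo resets the environment by default (env_reset), so a PKG_CONFIG_PATH exported in the calling shell usually never reaches the gem build, which would explain why the hand-written zlib.pc made no difference. The paths below are the usual RHEL pkg-config locations, not a confirmed fix:

```bash
# Pass PKG_CONFIG_PATH through sudo explicitly instead of relying on the shell export:
sudo env PKG_CONFIG_PATH=/usr/lib/pkgconfig:/usr/lib64/pkgconfig bundle install --verbose

# If the distribution's zlib development package ships a zlib.pc, installing it is simpler:
sudo yum install -y zlib-devel pkgconfig
```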
18,787,067
Is it possible to add a library path to an init script?
We have an application that runs as a service daemon on a RedHat system. For now, the RPM we use to install this package creates a soft link from our application's library folder into /usr/lib64, and the daemon recognises that. I would like to set LD_LIBRARY_PATH in the init script (/etc/init.d/myscript) so that we don't need to create that soft link (then, if multiple applications that use different versions of the library are installed, each will use what is in its own installation folder, and we also won't mess with the standard lib folders). Is this possible? I tried a simple LD_LIBRARY_PATH=/opt/myapp/lib:/$LD_LIBRARY_PATH but that did not seem to work... (See the sketch after this record.)
tags: shared-libraries, redhat, ld
score: 2 | view_count: 1,793 | answer_count: 1
https://stackoverflow.com/questions/18787067/is-it-possible-to-add-a-library-path-to-an-init-script
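For the init-script question above, a minimal sketch of the usual approach, assuming a SysV-style script that sources /etc/init.d/functions; the daemon binary name is hypothetical, and the key point is that LD_LIBRARY_PATH must be exported (not just assigned) before the daemon is launched so the child process inherits it:

```bash
#!/bin/bash
# Sketch only -- /opt/myapp/lib is taken from the question, the binary path is made up.
. /etc/init.d/functions

start() {
    # Export so the spawned daemon inherits the variable; keep any existing value.
    export LD_LIBRARY_PATH=/opt/myapp/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
    daemon /opt/myapp/bin/mydaemon
    touch /var/lock/subsys/myapp
}
```

If the helper functions still sanitise the environment away, a small wrapper script that sets the variable and then execs the real binary is a common fallback.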
15,702,391
Simple char device driver / module - Linux RedHat 8 (2.4.18) on VM segmentation fault after ./module_unload
edit : I fixed the code and turned it to a more compact code regarding memory allocations, everything works now . You might aware me if I'm doing something wrong I'm not sure that the Write&Read implemantations are perfect.... #define ARRAY_LENGTH 128 #define MY_DEVICE "my_device" MODULE_LICENSE("GPL"); MODULE_AUTHOR("Anonymous"); /* globals */ int my_major = 0; /* will hold the major # of my device driver */ int g_index=0; /*index of elements we will act on*/ typedef struct _my_array_elem{ char* string; int size; } my_array_elem; my_array_elem my_array [ARRAY_LENGTH] ; //global array of strings int init_module(void) { int i; //no need to malloc&free for this string? char* our_names = "333333333 \n222222222"; my_major = register_chrdev(0,MY_DEVICE,&my_fops); if (my_major < 0) { printk(KERN_WARNING "can't get dynamic major\n"); return my_major; } my_array[0].string=our_names; my_array[0].size=strlen(our_names); for (i=1; i<ARRAY_LENGTH; i++) { my_array[i].string=NULL; my_array[i].size=-1; } return 0; } void cleanup_module(void) { int i; int ret = unregister_chrdev(my_major, MY_DEVICE); if (ret < 0){ printk("Error in unregister_chrdev: %d\n", ret); } //CHECK!!: do I need to free the names string? (index 0)? for (i=1; i<ARRAY_LENGTH; i++){ kfree(&my_array[i].string); } return; } ssize_t my_read(struct file *filp,char *buf,size_t count,loff_t *f_pos) { int bytes_read = count; if (g_index<0 || g_index>ARRAY_LENGTH-1) { return -EINVAL; //illegal index } if (my_array[g_index].size < count){ bytes_read = my_array[g_index].size; } if (copy_to_user(buf, my_array[g_index].string, bytes_read)!=0){ return -ENOMEM; } return bytes_read; } ssize_t my_write(struct file *filp, const char *buf, size_t count, loff_t *f_pos) { if (g_index<1 || g_index>ARRAY_LENGTH-1){ return -EINVAL; } if ((my_array[g_index].size) != -1){ kfree(&my_array[g_index].string); } char* temp_string=kmalloc(count, GFP_KERNEL); if (temp_string == NULL){ return -ENOMEM; //Out of memory } if (copy_from_user((void*)temp_string, buf, count)){ kfree(temp_string); return -ENOMEM; //Out of memory } my_array[g_index].string=temp_string; my_array[g_index].size=count; return count; }
tags: c, linux-kernel, linux-device-driver, redhat
score: 2 | view_count: 598 | answer_count: 1
https://stackoverflow.com/questions/15702391/simple-char-device-driver-module-linux-redhat-8-2-4-18-on-vm-segmentation
12,785,964
Makefile isn't rebuilding dependencies?
Fair warning: I'm something of a newb at using makefiles, so this may be something obvious. What I'm trying to do is to use make to run a third-party code generation tool when and only when the source files for that generation tool (call them .abc files) change. I referenced the example at [URL] which shows how to build MD5s, and I tweaked the idea a bit: File: abc.mk target = all files := $(wildcard Abc/*.abc) bltfiles := $files $(addsuffix .built,$files) all: $bltfiles %.built: %.abc %.abc.md5 @echo "Building $*" @ #Command that generates code from a .abc file @touch $@ %.md5: FORCE @echo "Checking $* for changes..." @ #Command to update the .md5 file, if the sum of the .abc file is different FORCE: What I'm intending to happen is for each .abc file to have two auxilary files: .abc.built & .abc.md5 . The .built file is just a dummy target & timestamp for the last time it was built, as the code produced by the generation tool cannot be readily defined as a target. The .md5 file contains a hash of the last known content of the .abc file. It should only be updated when the hash of the file changes. However, the .built file is only created if it doesn't exist. The .md5 rule never runs at all, and the .built rule doesn't re-build even if the .abc file has a newer timestamp. Am I doing something wrong? Update: For posterity, here's the version I got to work: File: abc.mk # Call this makefile as: make all --file=abc.mk # Default Target target = all COMP_ABC_FILES := $(wildcard Abc/*.abc) COMP_BLT_FILES := $(COMP_ABC_FILES) $(addsuffix .built, $(COMP_ABC_FILES) ) # This line is needed to keep make from deleting intermediary output files: .SECONDARY: # Targets: .PHONY: all all: $(COMP_BLT_FILES) Abc/%.abc.built: Abc/%.abc Abc/%.abc.md5 @echo "Building $*" @ #Command that generates code from a .abc file @touch $@ %.md5: FORCE @echo "Checking $* for changes..." @$(if $(filter-out $(shell cat $@ 2>/dev/null),$(shell md5sum $*)),md5sum $* > $@) # Empty rule to force re-build of files: FORCE: clean: @echo "Cleaning .built & .md5 files..." @rm Abc/*.built @rm Abc/*.md5
tags: linux, makefile, dependencies, redhat
score: 2 | view_count: 379 | answer_count: 2
https://stackoverflow.com/questions/12785964/makefile-isnt-rebuilding-dependencies
11,795,656
JfreeChart and ValueMarker not displayed (headless environment)
I have a problem while generating a chart. Every part on the chart is well generated except a ValueMarker which is not. I am working on a web application in a headless RedHat environment. I got another problem for the chart generation (which is now solved), the description of my environment is here : JFreeChart strange rendering (headless RedHat) It is working perfectly on Windows. The piece of code adding the ValueMarker is : Marker distanceTiers = new ValueMarker(Double.parseDouble(resultDistance.replace(Constants.UNITE_DISTANCE, ""))); distanceTiers.setPaint(Color.BLACK); plot.addDomainMarker(distanceTiers); Here is what I obtain, I am supposed to get a vertical line at X = 40 and I cannot figure out why everything except this line is going well : If someone has an explanation for this, please do not hesitate.
tags: jfreechart, redhat, marker, headless
score: 2 | view_count: 696 | answer_count: 1
https://stackoverflow.com/questions/11795656/jfreechart-and-valuemarker-not-displayed-headless-environment
9,507,484
What does Oracle JDK 7 and JRE 7 Certified System Configurations really mean?
Oracle Java 7 has a list of certified platforms [URL]; popular server operating systems such as Debian and Ubuntu are not certified. I have downloaded jdk-7u3-linux-x64.tar.gz and it seems to run on Ubuntu. Should I be concerned about running Oracle Java 7 on a non-certified platform in production? Is the certified-platforms list just a marketing thing, or is there some technical reason why Oracle Java 7 would run differently on Redhat vs. Ubuntu?
tags: java, ubuntu, jvm, redhat, java-7
score: 2 | view_count: 462 | answer_count: 2
https://stackoverflow.com/questions/9507484/what-does-oracle-jdk-7-and-jre-7-certified-system-configurations-really-mean
4,147,276
Multiple Process InitScript Logic
I am developing initscripts for some of our software, and am having difficulty deciding how to structure one for a particular piece. We have homegrown software responsible for passing data around our network; it's built on a standard pubsub model. There is a publisher process (two, actually, for two different use cases), a broker process, and a subscriber process. Any combination of these processes, and even multiple copies of the same process, can run simultaneously on a given box. I'm having trouble deciding how best to allow this to be configured. Since it can vary from box to box, the configuration will likely go into /etc/sysconfig/pubsub, which will be read in by the initscript. The only things I need to make configurable are (1) the process name, which is one of log_publish, dir_publish, broker, subscribe, and (2) the configuration file that corresponds to that particular process. I want to avoid telling people to modify the initscript on each box in order to change the list of running processes, so a per-box configuration file is the best way I can come up with to accomplish that. I assume this also means I will need some kind of unique identifier per process on the box, since I intend to use the touch /var/lock/subsys/* method that most RedHat initscripts already use to keep a process from running twice. The identifier can't be random, otherwise it will never be effective at preventing duplicate processes with the same configuration file (because, again, I need to be able to run multiple processes with different configuration files). I have no idea how best to represent this in configuration. (A configuration sketch follows this record.)
tags: linux, redhat
score: 2 | view_count: 692 | answer_count: 2
https://stackoverflow.com/questions/4147276/multiple-process-initscript-logic
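For the initscript question above, one way to represent this, purely as a sketch (the file format, binary paths, and the -c flag are invented for illustration): give each instance a name in /etc/sysconfig/pubsub and derive the lock file from that name, so two instances with the same configuration cannot both start.

```bash
# /etc/sysconfig/pubsub -- hypothetical format, one "name:process:configfile" entry per instance:
INSTANCES="
broker_main:broker:/etc/pubsub/broker_main.conf
logpub_a:log_publish:/etc/pubsub/logpub_a.conf
sub_a:subscribe:/etc/pubsub/sub_a.conf
"

# Inside the init script (which sources /etc/init.d/functions for 'daemon'),
# the instance name provides the stable unique identifier:
for entry in $INSTANCES; do
    name=${entry%%:*}                 # e.g. broker_main
    rest=${entry#*:}
    proc=${rest%%:*}                  # e.g. broker
    conf=${rest#*:}                   # e.g. /etc/pubsub/broker_main.conf
    [ -e "/var/lock/subsys/pubsub-$name" ] && continue   # this instance is already running
    daemon "/usr/bin/$proc" -c "$conf"   # '-c' stands in for however the processes take a config file
    touch "/var/lock/subsys/pubsub-$name"
done
```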
79,713,855
How to run vscode-extension-tester on compiled source (instead of .vsix)
What is the correct process for running vscode-extension-tester (the project's official example) against the project's compiled source code (as opposed to having to build a .vsix package before running the tests). ( [URL] ) I cloned vscode-extension-tester-example , then ran the following commands: npm install npm run compile npx extester get-vscode npx extester get-chromedriver EXTENSION_DEV_PATH=$(pwd) npx extest run-tests './out/ui-test/*.test.js' --code_version max --code_settings settings.json When running this way, some of the tests fail, and if you watch carefully, you can see that the Hello World and WebView Test commands are not available. Furthermore, the console.log statement inside the extension's activate function does not appear in the log file (which it does if you launch the tests using a .vsix file.) The extension works as expected if I launch it in debug mode. Am I using EXTENSION_DEV_PATH wrong, or have I found a bug? I just opened a bug report ( [URL] ). So, if it is a user error, please let me know so I can delete it and not waste the developers' time.
tags: automated-tests, vscode-extensions, redhat, vscode-extension-tester
score: 2 | view_count: 52 | answer_count: 0
https://stackoverflow.com/questions/79713855/how-to-run-vscode-extension-tester-on-compiled-source-instead-of-vsix
78,640,710
pam_prompt() Giving Conversation failed RHEL 9.4 when used with SSH
I have built a custom PAM module to add MFA. After the password is entered, I use the pam_prompt() function to display the MFA options and take user input. That pam_prompt() call returns code 19 (PAM_CONV_ERR, "Conversation failed") over SSH. The same function works fine when the UI uses it to display the MFA list. Any idea why? The same module works on RHEL 8; I am currently using RHEL 9.4. The code where I am using pam_prompt():

int pam_result = pam_prompt(pamh, PAM_PROMPT_ECHO_ON, &p, "%s", prompt);
if (pam_result != PAM_SUCCESS) {
    sprintf(msg, "[ERROR] pam_prompt failed with code: %d", pam_result);
    debug(pamh, msg);
    return pam_result;
}

My sshd PAM file on the test machine:

#%PAM-1.0
auth       substack     password-auth
auth       include      postlogin
account    required     pam_sepermit.so
account    required     pam_nologin.so
account    include      password-auth
password   include      password-auth
# pam_selinux.so close should be the first session rule
session    required     pam_selinux.so close
session    required     pam_loginuid.so
# pam_selinux.so open should only be followed by sessions to be executed in the user context
session    required     pam_selinux.so open env_params
session    required     pam_namespace.so
session    optional     pam_keyinit.so force revoke
session    optional     pam_motd.so
session    include      password-auth
session    include      postlogin
auth       required     pam_otp.so config=/etc/pam_otp.conf use_first_pass

Use case: I install my custom PAM module by adding "auth required pam_otp.so" at the end of the sshd file. When I connect with "ssh username@ip" it prompts for the password. After the password is entered, control goes to my PAM module for MFA, which should then show the list of available MFA options. For that I use the code above with the pam_prompt() function, and it gives the error mentioned above. This problem appears only on RHEL 9; on RHEL 8 it works fine. (See the sketch after this record.)
tags: linux, ubuntu, ssh, redhat, pam
score: 2 | view_count: 246 | answer_count: 0
https://stackoverflow.com/questions/78640710/pam-prompt-giving-conversation-failed-rhel-9-4-when-used-with-ssh
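For the pam_prompt() question above, a hedged suggestion rather than a confirmed fix: extra PAM prompts after the password generally only reach the client when sshd uses keyboard-interactive authentication, and RHEL 9 ships stricter sshd defaults (including drop-in files under /etc/ssh/sshd_config.d/), so checking that path is a reasonable first step. The settings below are standard OpenSSH options; whether they resolve this particular module's PAM_CONV_ERR is an assumption.

```bash
# Check the effective sshd configuration (RHEL 9 may override the main file
# via /etc/ssh/sshd_config.d/*.conf):
sshd -T | grep -Ei 'usepam|kbdinteractive|passwordauth'

# In /etc/ssh/sshd_config (or a drop-in), make sure PAM conversations are allowed:
#   UsePAM yes
#   KbdInteractiveAuthentication yes
sudo systemctl restart sshd

# Ask the client to use keyboard-interactive so multi-prompt conversations work:
ssh -o PreferredAuthentications=keyboard-interactive username@ip
```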
78,507,680
Red Hat Dependency Analytics Command failed
I wanted to see the dependency analytics report for my Maven project, but it said 'Unable to analyze your stack.' and an error message popped up as follows: Command failed: mvn -q clean -f /my-project/pom.xml Source: Red Hat Dependency Analytics. I also added Red Hat's generally available (GA) repository to my pom.xml as described in the extension details: <repositories> <repository> <id>redhat-ga</id> <url>[URL] </repository> </repositories> What shall I do? (See the check after this record.)
tags: visual-studio-code, redhat, dependency-analysis
score: 2 | view_count: 974 | answer_count: 1
https://stackoverflow.com/questions/78507680/red-hat-dependency-analytics-command-failed
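For the Dependency Analytics question above, a basic sanity check (not a guaranteed fix): the extension shells out to Maven, so the same command should succeed in a terminal first; if it does not, the failure is in the local Maven/JDK setup rather than in the extension.

```bash
# Reproduce the exact command the extension reports as failing:
mvn -q clean -f /my-project/pom.xml

# Confirm which Maven and JDK are actually being picked up:
mvn -v
echo "$JAVA_HOME"
```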
77,870,014
Setting NFS Path on java/spring-boot system
I want to show or download the following txt file on system developed in Java/Spring Boot: The code that i am using to upload the files in the system on the nfs server is this: public static String uploadFileVNP = System.getProperty("user.dir")+"/Dev/nfs/sopptFiles/VNP"; @PostMapping("/process_svnp") @PreAuthorize("hasAnyAuthority('MSA_USER', 'MSA_USER_ADMINISTRATIVE', 'MSA_MODERATOR', 'ADMIN')") public String msSoppProcesoRegistroVNP(SVNP svnp, String date, String id, String date_ic, String date_fc, @RequestParam("fileVNP") MultipartFile[] fileVNP) { if(svnpRepository.existsRecordSVNP(date, id, date_ic, date_fc)) { return "redirect:/msa?filter=home?error"; } StringBuilder fileNames = new StringBuilder(); for(MultipartFile file : fileVNP) { Path fileNameAndPath = Paths.get(uploadFileVNP, file.getOriginalFilename()); fileNames.append(file.getOriginalFilename()); try { Files.write(fileNameAndPath, file.getBytes()); } catch (IOException e) { e.printStackTrace(); } } svnpService.registerSVNP(svnp); return "redirect:/msa?filter=Home?success"; } The code that i am using to list the uploaded files on the nfs is this: $("#refFileVacTarget").text(svnp.refFileVac); let link="/VNP/" + svnp.refFileVac; $('#refFileVacTarget').attr('href', link); It is important to aim that this code works in the local server, but when i deploy on git repository it doesn´t work anymore. With all this information there are three precise questions. 1.How can i access to the redhat nfs path from the system? (/Dev/nfs/sopptFiles/) 2.How can i set the path to get or upload files?
tags: java, spring-boot, openshift, redhat
score: 2 | view_count: 364 | answer_count: 0
https://stackoverflow.com/questions/77870014/setting-nfs-path-on-java-spring-boot-system
76,098,932
How can I configure SSL for a k6 client using a self-signed certificate on Redhat Linux?
I have tried to follow the documentation and used the following code snippet: tlsAuth: [ { domains: ['example.com'], cert: open('./mycert.pem'), key: open('./mycert-key.pem'), } However, I am having difficulty getting this to work. Can you provide some guidance on the necessary steps to configure SSL with a self-signed certificate for a k6 client on Redhat Linux? (See the sketch after this record.)
tags: linux, spring-boot, ssl, redhat, k6
score: 2 | view_count: 722 | answer_count: 0
https://stackoverflow.com/questions/76098932/how-can-i-configure-ssl-for-a-k6-client-using-a-self-signed-certificate-on-redha
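For the k6 question above, a hedged note: tlsAuth configures client certificates (mutual TLS), while trusting a self-signed server certificate is usually handled either by adding the CA to the RHEL system trust store (k6 is a Go binary and normally uses it) or, for quick tests only, by skipping verification. The certificate file name below is a placeholder.

```bash
# Add the self-signed CA (or the certificate itself) to the system trust store:
sudo cp myca.crt /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust extract

# Or, for throwaway tests only, skip TLS verification entirely:
k6 run --insecure-skip-tls-verify script.js
```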
74,823,773
Create Mutex throws Error The system cannot open the device or file specified
I have a dotnet WebApi application which is running on OpenShift (RHEL). To build the application I am using source2image strategy with the RedHat Image. [URL] [URL] After updating to .net 6.0-22 I am getting following error when I am trying to create a Mutex: System.IO.IOException: The system cannot open the device or file specified. : 'MutexName' at System.Threading.Mutex.CreateMutexCore(Boolean initiallyOwned, String name, Boolean& createdNew) at System.Threading.Mutex..ctor(Boolean initiallyOwned, String name) at Network.Services.GitService.AddSchedule(String region, String site, ScheduleConfigurationViewModel configuration, String commitMessage) in /opt/app-root/src/Src/Infrastructure/Network/Services/GitService.cs:line 122 at WebApi.Controllers.ConfigurationController.Post(String assetId, String commitMessage) in /opt/app-root/src/Src/WebApi/Controllers/ConfigurationController.cs:line 236 at lambda_method259(Closure, Object) at Microsoft.AspNetCore.Mvc.Infrastructure.ActionMethodExecutor.TaskOfActionResultExecutor.Execute(ActionContext actionContext, IActionResultTypeMapper mapper, ObjectMethodExecutor executor, Object controller, Object[] arguments) at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.g__Awaited|12_0(ControllerActionInvoker invoker, ValueTask 1 actionResultValueTask) at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.<InvokeNextActionFilterAsync>g__Awaited|10_0(ControllerActionInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted) at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Rethrow(ActionExecutedContextSealed context) at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted) at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.<InvokeInnerFilterAsync>g__Awaited|13_0(ControllerActionInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted) at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeNextResourceFilter>g__Awaited|25_0(ResourceInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted) at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.Rethrow(ResourceExecutedContextSealed context) at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted) at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeFilterPipelineAsync>g__Awaited|20_0(ResourceInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted) at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeAsync>g__Awaited|17_0(ResourceInvoker invoker, Task task, IDisposable scope) at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeAsync>g__Awaited|17_0(ResourceInvoker invoker, Task task, IDisposable scope) at Microsoft.AspNetCore.Routing.EndpointMiddleware.<Invoke>g__AwaitRequestTask|6_0(Endpoint endpoint, Task requestTask, ILogger logger) at Microsoft.AspNetCore.Authorization.AuthorizationMiddleware.Invoke(HttpContext context) at WebApi.Common.CustomExceptionHandlerMiddleware.InvokeAsync(HttpContext context, ILogger 1 logger, ICurrentUserService currentUserService) in /opt/app-root/src/Src/WebApi/Common/CustomExceptionHandlerMiddleware.cs:line 33 at Microsoft.AspNetCore.Authentication.AuthenticationMiddleware.Invoke(HttpContext context) at Serilog.AspNetCore.RequestLoggingMiddleware.Invoke(HttpContext httpContext) 
at Microsoft.AspNetCore.Diagnostics.DeveloperExceptionPageMiddlewareImpl.Invoke(HttpContext context) Code Fragment where the error is thrown: public void AddSchedule(string region, string site, ScheduleConfigurationViewModel configuration, string commitMessage) { using var mutex = new Mutex(false, MutexName); var hasHandle = false; try { try { hasHandle = mutex.WaitOne(new TimeSpan(0, 0, 0, 5), false); } catch (AbandonedMutexException) { // Log the fact that the mutex was abandoned in another process, // it will still get acquired hasHandle = true; } var repoDir = Clone(); Pull(); WriteScheduleFile(repoDir, configuration); AddAll(); Commit(commitMessage); Push(); } finally { if (hasHandle) mutex.ReleaseMutex(); } } Edit Below you can see the permissions of the mentioned folders. There is a session folder where I don't have rights. Is this the problem? ls -la / ls -la /tmp ls -la /tmp/.dotnet/shm Rights before the update: ls -la /tmp/.dotnet/shm Workaround Add .s2i/bin folder to the source repo with an assemble file with following content. This will delete the session folder from the build process. #!/bin/bash echo "Before assembling" /usr/libexec/s2i/assemble rc=$? if [ $rc -eq 0 ]; then echo "After successful assembling" echo "Delete /tmp/.dotnet/shm/*" rm -rf /tmp/.dotnet/shm/* else echo "After failed assembling" fi exit $rc
c#, .net, openshift, redhat
2
945
0
https://stackoverflow.com/questions/74823773/create-mutex-throws-error-the-system-cannot-open-the-device-or-file-specified
74,581,859
Why is my Visual Studio Code 'workspaceStorage' directory so large?
When I browsed to /../Code/User/WorkspaceStorage/../redhat.java , I noticed that the directory appears to have a copy of my whole desktop folder contained within it. I was wondering if it was safe to delete it, what might have caused this, and how/if I can prevent this from happening again. These are the Java extensions I have installed: Debugger for Java Extension Pack for Java Project Manager for Java Test Runner for Java Language Support for Java by Red Hat Linux I also have some C++ extensions. I've figured out that it might be that Language Support for Java saved my desktop files to see any changes in the workspace since I've previously saved some .java files on my desktop. I'm on macOS v10.14 (Mojave).
Why is my Visual Studio Code 'workspaceStorage' directory so large? When I browsed to /../Code/User/WorkspaceStorage/../redhat.java , I noticed that the directory appears to have a copy of my whole desktop folder contained within it. I was wondering if it was safe to delete it, what might have caused this, and how/if I can prevent this from happening again. These are the Java extensions I have installed: Debugger for Java Extension Pack for Java Project Manager for Java Test Runner for Java Language Support for Java by Red Hat Linux I also have some C++ extensions. I've figured out that it might be that Language Support for Java saved my desktop files to see any changes in the workspace since I've previously saved some .java files on my desktop. I'm on macOS v10.14 (Mojave).
java, visual-studio-code, redhat
2
573
0
https://stackoverflow.com/questions/74581859/why-is-my-visual-studio-code-workspacestorage-directory-so-large
74,022,196
How do I install OpenJDK on Redhat Enterprise Linux on Azure and AWS environments?
How do I install OpenJDK on Redhat Enterprise Linux on Azure and AWS environments? I am a beginner server infrastructure engineer. I tried to install OpenJDK 17 on Red Hat Enterprise Linux in Azure and AWS environments, and I get the following error. $ sudo dnf install java-17-openjdk No match for argument: java-17-openjdk Error: Unable to find a match: java-17-openjdk [URL] Installing OpenJDK 11 the same way succeeds. For RHEL, is it possible to install JDK 17 without a Red Hat subscription? Do I need to change my OS to Ubuntu or CentOS to install JDK 17?
How do I install OpenJDK on Redhat Enterprise Linux on Azure and AWS environments? How do I install OpenJDK on Redhat Enterprise Linux on Azure and AWS environments? I am a beginner server infrastructure engineer. I tried to install OpenJDK 17 on Red Hat Enterprise Linux in Azure and AWS environments, and I get the following error. $ sudo dnf install java-17-openjdk No match for argument: java-17-openjdk Error: Unable to find a match: java-17-openjdk [URL] Installing OpenJDK 11 the same way succeeds. For RHEL, is it possible to install JDK 17 without a Red Hat subscription? Do I need to change my OS to Ubuntu or CentOS to install JDK 17?
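One subscription-free route, sketched below with a placeholder download URL and an assumed extracted directory name, is to unpack a JDK 17 archive under /opt and register it with alternatives:

curl -LO https://example.com/openjdk-17_linux-x64_bin.tar.gz   # placeholder URL, substitute a build you trust
sudo mkdir -p /opt/java
sudo tar -C /opt/java -xzf openjdk-17_linux-x64_bin.tar.gz
sudo alternatives --install /usr/bin/java java /opt/java/jdk-17.0.2/bin/java 1   # the jdk-17.0.2 directory name is an assumption
java -version

Whether java-17-openjdk shows up in dnf at all depends on the RHEL minor release and which AppStream/RHUI repos the cloud image has enabled, so a quick dnf search openjdk is worth doing before falling back to a manual install.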
linux, redhat
2
5,812
0
https://stackoverflow.com/questions/74022196/how-do-i-install-openjdk-on-redhat-enterprise-linux-on-azure-and-aws-environment
73,622,758
kafka crashing when migrating from redhat 7 to redhat 8
I am facing this issue when migrating from Red Hat 7 to Red Hat 8 and can't find out why. I checked the code and everything looks fine! configuring ... starting ... endpointUrl = opc.tcp://LBA:9681/POO.S2K/OPCUA/DataAccess == Info: Trying 10.119.0.247:8090... == Info: TCP_NODELAY set %2|1662467686.094|THREAD|rdkafka#consumer-1| [thrd:app]: Unable to create broker thread: Operation now in progress (115) %3|1662467686.094|ERROR|rdkafka#consumer-1| [thrd:app]: Unable to create broker thread: Operation now in progress (115) nbdtmdp: /home/deploy/nbdtmdp/build/src/librdkafka/librdkafka-prefix/src/librdkafka/src/rdkafka_broker.c:4773: rd_kafka_broker_add_logical: Assertion `rkb && *"failed to create broker thread"' failed. ./nbdtmdp_start.sh: line 7: 110235 Aborted (core dumped) ./nbdtmdp ../config/nbdtmdpCfg.txt
kafka crashing when migrating from redhat 7 to redhat 8 I am facing this issue when migrating from Red Hat 7 to Red Hat 8 and can't find out why. I checked the code and everything looks fine! configuring ... starting ... endpointUrl = opc.tcp://LBA:9681/POO.S2K/OPCUA/DataAccess == Info: Trying 10.119.0.247:8090... == Info: TCP_NODELAY set %2|1662467686.094|THREAD|rdkafka#consumer-1| [thrd:app]: Unable to create broker thread: Operation now in progress (115) %3|1662467686.094|ERROR|rdkafka#consumer-1| [thrd:app]: Unable to create broker thread: Operation now in progress (115) nbdtmdp: /home/deploy/nbdtmdp/build/src/librdkafka/librdkafka-prefix/src/librdkafka/src/rdkafka_broker.c:4773: rd_kafka_broker_add_logical: Assertion `rkb && *"failed to create broker thread"' failed. ./nbdtmdp_start.sh: line 7: 110235 Aborted (core dumped) ./nbdtmdp ../config/nbdtmdpCfg.txt
apache-kafka, kafka-consumer-api, redhat
2
619
2
https://stackoverflow.com/questions/73622758/kafka-crashing-when-migrating-from-redhat-7-to-redhat-8
72,642,713
hg clone tag or branch
We are using the following command to clone repositories; hg clone -u v1.0 \\server\abc\def my_repo If the repository contains both a tag and branch that is named "v1.0" then the command will update to the tag revision. In this situation, how can we force the hg clone command to select the branch? We do not want to use the --branch option because some repositories may only contain the tag and not the branch. Tried various combinations of this command (with single and double quotes) and they all reported "unknown revision" hg clone -u branch(v1.0) \\server\abc\def my_repo hg clone -u "branch(v1.0)" \\server\abc\def my_repo hg clone -u 'branch(v1.0)' \\server\abc\def my_repo Does hg clone support using revset? Please advise :-)
hg clone tag or branch We are using the following command to clone repositories; hg clone -u v1.0 \\server\abc\def my_repo If the repository contains both a tag and branch that is named "v1.0" then the command will update to the tag revision. In this situation, how can we force the hg clone command to select the branch? We do not want to use the --branch option because some repositories may only contain the tag and not the branch. Tried various combinations of this command (with single and double quotes) and they all reported "unknown revision" hg clone -u branch(v1.0) \\server\abc\def my_repo hg clone -u "branch(v1.0)" \\server\abc\def my_repo hg clone -u 'branch(v1.0)' \\server\abc\def my_repo Does hg clone support using revset? Please advise :-)
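One possible two-step workaround, sketched under the assumption that hg update -r accepts revset expressions (it does in modern Mercurial, even though hg clone -u itself expects a plain revision), is to clone without updating and then update to the branch head explicitly:

hg clone -U \\server\abc\def my_repo
hg -R my_repo update -r "max(branch('v1.0'))"

max(branch('v1.0')) resolves to the head of the v1.0 branch, which sidesteps the tag of the same name; if a repository has no such branch, the update should fail loudly instead of silently picking the tag, which may or may not be the behaviour you want for the tag-only repositories.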
windows, version-control, mercurial, redhat
2
270
0
https://stackoverflow.com/questions/72642713/hg-clone-tag-or-branch
72,605,790
Why do cron commands inside a docker container show up in log but don't actually run?
Dockerfile FROM almalinux:8 # [... supervisord setup ...] RUN dnf install -y \ crontabs RUN sed -ri '/-session(\s+)optional(\s+)pam_systemd.so/d' /etc/pam.d/system-auth && \ sed -ri '/^[^#]/ s/systemd//g' /etc/nsswitch.conf COPY $TEMPLATE_DIR/supervisord/crond.conf /etc/supervisord.d/crond.conf crond.conf [program:crond] command=/usr/sbin/crond -nsm off stdout_logfile_maxbytes=0 stdout_logfile=/dev/stdout stderr_logfile=/dev/stderr stderr_logfile_maxbytes=0 syslog: 2022-06-13 18:39:07,939 INFO success: crond entered RUNNING state, process has stayed up for > than 1 seconds (startsecs) crontab -e * * * * * touch /var/www/html/test.txt syslog: Jun 13 18:57:59 e60d29fd100e crontab[331]: (root) BEGIN EDIT (root) Jun 13 18:58:15 e60d29fd100e crontab[331]: (root) REPLACE (root) Jun 13 18:58:15 e60d29fd100e crontab[331]: (root) END EDIT (root) Jun 13 18:59:01 e60d29fd100e CROND[334]: (root) CMD (touch /var/www/html/test.txt) I thought the file is never touched... tried also echo , running an absolute path command... nothing. But after waiting a little (longer) and running cron with debug flags on, it seems the command does get run, but with something like a 5s to 50s delay: load_entry()...about to parse command 2022-06-13T16:17:01.352923611Z linenum=21 2022-06-13T16:17:01.352929349Z load_entry()...returning successfully 2022-06-13T16:17:01.352934854Z ...load_user() done 2022-06-13T16:17:01.352940748Z unlinking old database: 2022-06-13T16:17:01.352960040Z check_inotify_database is done 2022-06-13T16:17:01.352966537Z user [root:0:0:...] cmd="touch /var/www/html/test.txt" 2022-06-13T16:17:01.352972511Z [10] do_command(touch /var/www/html/test.txt, (root,0,0)) 2022-06-13T16:17:01.352979069Z [10] main process returning to work 2022-06-13T16:17:01.352984792Z The huge delay seems to also pile up what I presume are queued commands to run, and the pile grows forever larger: root 335 0.0 0.0 69708 5188 ? S 19:15 0:00 /usr/sbin/CROND -nsm off -x ext,sch,proc,pars,load,misc root 336 94.4 0.0 69708 1440 ? Rs 19:15 8:52 /usr/sbin/CROND -nsm off -x ext,sch,proc,pars,load,misc ... multiply 10x the 2 processes above after a few minutes ... Any clues why the huge delay and weird behavior? Disabling inotify ( -i ) on crond does not improve things... I'm thinking maybe a time skew issue?
Why do cron commands inside a docker container show up in log but don&#39;t actually run? Dockerfile FROM almalinux:8 # [... supervisord setup ...] RUN dnf install -y \ crontabs RUN sed -ri '/-session(\s+)optional(\s+)pam_systemd.so/d' /etc/pam.d/system-auth && \ sed -ri '/^[^#]/ s/systemd//g' /etc/nsswitch.conf COPY $TEMPLATE_DIR/supervisord/crond.conf /etc/supervisord.d/crond.conf crond.conf [program:crond] command=/usr/sbin/crond -nsm off stdout_logfile_maxbytes=0 stdout_logfile=/dev/stdout stderr_logfile=/dev/stderr stderr_logfile_maxbytes=0 syslog: 2022-06-13 18:39:07,939 INFO success: crond entered RUNNING state, process has stayed up for > than 1 seconds (startsecs) crontab -e * * * * * touch /var/www/html/test.txt syslog: Jun 13 18:57:59 e60d29fd100e crontab[331]: (root) BEGIN EDIT (root) Jun 13 18:58:15 e60d29fd100e crontab[331]: (root) REPLACE (root) Jun 13 18:58:15 e60d29fd100e crontab[331]: (root) END EDIT (root) Jun 13 18:59:01 e60d29fd100e CROND[334]: (root) CMD (touch /var/www/html/test.txt) I thought the file is never touched... tried also echo , running an absolute path command... nothing. But after waiting a little (longer) and running cron with debug flags on, it seems the command does get run, but with something like a 5s to 50s delay: load_entry()...about to parse command 2022-06-13T16:17:01.352923611Z linenum=21 2022-06-13T16:17:01.352929349Z load_entry()...returning successfully 2022-06-13T16:17:01.352934854Z ...load_user() done 2022-06-13T16:17:01.352940748Z unlinking old database: 2022-06-13T16:17:01.352960040Z check_inotify_database is done 2022-06-13T16:17:01.352966537Z user [root:0:0:...] cmd="touch /var/www/html/test.txt" 2022-06-13T16:17:01.352972511Z [10] do_command(touch /var/www/html/test.txt, (root,0,0)) 2022-06-13T16:17:01.352979069Z [10] main process returning to work 2022-06-13T16:17:01.352984792Z The huge delay seems to also pile up what I presume are queued commands to run, and the pile grows forever larger: root 335 0.0 0.0 69708 5188 ? S 19:15 0:00 /usr/sbin/CROND -nsm off -x ext,sch,proc,pars,load,misc root 336 94.4 0.0 69708 1440 ? Rs 19:15 8:52 /usr/sbin/CROND -nsm off -x ext,sch,proc,pars,load,misc ... multiply 10x the 2 processes above after a few minutes ... Any clues why the huge delay and weird behavior? Disabling inotify ( -i ) on crond does not improve things... I'm thinking maybe a time skew issue?
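Since the Dockerfile relies on those two sed edits to keep PAM from waiting on systemd-logind, one quick sanity check is to confirm they actually took effect in the built image (the image name below is a placeholder); a surviving pam_systemd reference is a common source of exactly this kind of per-job delay in containers:

docker run --rm my-image grep -n pam_systemd /etc/pam.d/system-auth /etc/pam.d/password-auth
docker run --rm my-image grep -n systemd /etc/nsswitch.conf

Note that the original sed only touches system-auth, while crond's PAM stack may also pull in password-auth, so a match in the second file would be worth removing the same way.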
docker, cron, redhat
2
434
0
https://stackoverflow.com/questions/72605790/why-do-cron-commands-inside-a-docker-container-show-up-in-log-but-dont-actually
72,157,157
Moving tftpboot folder
Ok, so I am trying to move the /var/lib/tftpboot folder the "proper" way to a dedicated partition. To accomplish this goal I have setup a separate partition called /app and moved the tftpboot folder there. Issue 1: Symlink After I moved the folder I created a symlink from the new directory to the old directory using the ln -s /app/tftpboot /var/lib/ command. After doing this I am unable to successfully restart the service using systemctl restart tftp . However, if I just update the path listed in the service file and the config file the service boots fine.
Moving tftpboot folder Ok, so I am trying to move the /var/lib/tftpboot folder the "proper" way to a dedicated partition. To accomplish this goal I have setup a separate partition called /app and moved the tftpboot folder there. Issue 1: Symlink After I moved the folder I created a symlink from the new directory to the old directory using the ln -s /app/tftpboot /var/lib/ command. After doing this I am unable to successfully restart the service using systemctl restart tftp . However, if I just update the path listed in the service file and the config file the service boots fine.
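If the box is RHEL/CentOS with SELinux enforcing, one plausible explanation (not the only one) is that /app/tftpboot carries the wrong file context after the move, and the symlink does not change that. A sketch of how to compare the labels and, if needed, copy the labelling rules across; the exact type shown by ls -Z on the original directory is what matters, not any name assumed here:

ls -Zd /var/lib/tftpboot /app/tftpboot
semanage fcontext -a -e /var/lib/tftpboot /app/tftpboot   # make /app/tftpboot inherit the same context rules
restorecon -Rv /app/tftpboot
ausearch -m avc -ts recent   # any fresh denials from the failed restart should show up here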
centos, redhat
2
145
0
https://stackoverflow.com/questions/72157157/moving-tftpboot-folder
72,102,086
How to create logic for back button on ftl page for custom authentication flow pages in Keycloak?
How do I implement the logic for a back button on .ftl pages in a custom Keycloak authentication flow? I am using the Spring Boot framework to build the JAR for Keycloak, and I am creating a custom authentication flow. I have created custom login pages where the user enters a username and password on the pre-login page; for the post-login flow, I have created custom .ftl pages for steps such as user info validation and a terms and conditions page. On those post-login pages I would like to add a back button that takes the user to the previous step of the authentication flow. How can I implement this logic? I do not see an out-of-the-box Keycloak feature that provides back-button functionality.
How to create logic for back button on ftl page for custom authentication flow pages in Keycloak? How do I implement the logic for a back button on .ftl pages in a custom Keycloak authentication flow? I am using the Spring Boot framework to build the JAR for Keycloak, and I am creating a custom authentication flow. I have created custom login pages where the user enters a username and password on the pre-login page; for the post-login flow, I have created custom .ftl pages for steps such as user info validation and a terms and conditions page. On those post-login pages I would like to add a back button that takes the user to the previous step of the authentication flow. How can I implement this logic? I do not see an out-of-the-box Keycloak feature that provides back-button functionality.
java, spring-boot, keycloak, redhat, redhat-sso
2
359
0
https://stackoverflow.com/questions/72102086/how-to-create-logic-for-back-button-on-ftl-page-for-custom-authentication-flow-p
71,336,409
How to install gettext-devel in redhat/ubi8
I have a docker file where I want to install gettext-devel FROM redhat/ubi8 RUN yum install gettext-devel which does not work: Updating Subscription Management repositories. Unable to read consumer identity This system is not registered with an entitlement server. You can use subscription-manager to register. Red Hat Universal Base Image 8 (RPMs) - BaseOS 3.3 MB/s | 797 kB 00:00 Red Hat Universal Base Image 8 (RPMs) - AppStream 7.0 MB/s | 2.6 MB 00:00 Red Hat Universal Base Image 8 (RPMs) - CodeReady Builder 120 kB/s | 16 kB 00:00 No match for argument: gettext-devel Error: Unable to find a match: gettext-devel How can this package be installed?
How to install gettext-devel in redhat/ubi8 I have a docker file where I want to install gettext-devel FROM redhat/ubi8 RUN yum install gettext-devel which does not work: Updating Subscription Management repositories. Unable to read consumer identity This system is not registered with an entitlement server. You can use subscription-manager to register. Red Hat Universal Base Image 8 (RPMs) - BaseOS 3.3 MB/s | 797 kB 00:00 Red Hat Universal Base Image 8 (RPMs) - AppStream 7.0 MB/s | 2.6 MB 00:00 Red Hat Universal Base Image 8 (RPMs) - CodeReady Builder 120 kB/s | 16 kB 00:00 No match for argument: gettext-devel Error: Unable to find a match: gettext-devel How can this package be installed?
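The dnf output above shows all three UBI repos enabled and still no match, which suggests gettext-devel simply is not shipped in the freely redistributable UBI repo set. One hedged workaround is to fetch a matching el8 RPM from a repository you are entitled to (or a compatible rebuild) and install it by path inside the same RUN step; the URL and file name below are placeholders only:

curl -LO https://example.com/gettext-devel-0.19.8.1-17.el8.x86_64.rpm   # placeholder URL and version
dnf install -y ./gettext-devel-*.rpm

Alternatively, building the image with podman/buildah on a subscribed RHEL host lets the container inherit the host entitlements, at which point the full AppStream/CodeReady Builder content should become installable with a plain dnf install.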
docker, redhat, gettext
2
935
1
https://stackoverflow.com/questions/71336409/how-to-install-gettext-devel-in-redhat-ubi8
71,264,649
SSL module in Python3.10.2 is not available, getting error in pip install on redhat 7
sudo make: Following modules built successfully but were removed because they could not be imported: _hashlib _ssl Could not build the ssl module! Python requires a OpenSSL 1.1.1 or newer pip install: WARNING: pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available. There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available.")) - skipping
SSL module in Python3.10.2 is not available, getting error in pip install on redhat 7 sudo make: Following modules built successfully but were removed because they could not be imported: _hashlib _ssl Could not build the ssl module! Python requires a OpenSSL 1.1.1 or newer pip install: WARNING: pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available. There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available.")) - skipping
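The build output is saying that the system OpenSSL on RHEL 7 (1.0.2) is too old for Python 3.10, so a common pattern, sketched here with illustrative paths and an assumed already-downloaded openssl-1.1.1 source tree, is to build a private OpenSSL first and point Python's configure at it:

cd /tmp/openssl-1.1.1q            # path and exact version are assumptions
./config --prefix=$HOME/openssl11 shared
make -j"$(nproc)" && make install_sw
cd /tmp/Python-3.10.2
./configure --with-openssl=$HOME/openssl11 LDFLAGS="-Wl,-rpath,$HOME/openssl11/lib"
make -j"$(nproc)" && sudo make altinstall

The rpath flag keeps the new interpreter finding the private libssl at run time without touching the system-wide library path.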
ssl, openssl, redhat, python-3.10
2
637
0
https://stackoverflow.com/questions/71264649/ssl-module-in-python3-10-2-is-not-available-getting-error-in-pip-install-on-red
69,390,109
Installing PHP on a server where HTTPD was installed from source
There is one server where Apache HTTPD was installed from source, so there is no Yum/RPM record of the installed Apache HTTPD. When trying to install PHP 5.x packages using yum, it gives errors saying the dependent package "httpd" was not found. What's the correct approach to install PHP and the remaining dependent PHP RPM packages (for example the php-pecl-jsonc RPM), since yum is not allowing them to be installed?
Installing PHP on a server where HTTPD was installed from source There is one server where Apache HTTPD was installed from source, so there is no Yum/RPM record of the installed Apache HTTPD. When trying to install PHP 5.x packages using yum, it gives errors saying the dependent package "httpd" was not found. What's the correct approach to install PHP and the remaining dependent PHP RPM packages (for example the php-pecl-jsonc RPM), since yum is not allowing them to be installed?
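Before deciding on a workaround, it can help to see exactly which of the PHP sub-packages carries the hard httpd requirement and whether anything on the system already claims to provide it; the package names below are just the ones mentioned above:

yum deplist php php-pecl-jsonc | grep -i httpd
rpm -q --whatprovides httpd

If only the mod_php package (php itself) needs httpd, installing php-cli or php-fpm plus the extensions instead often avoids the dependency entirely, but that is worth verifying with deplist on this particular repo set rather than taking on faith.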
php, apache, redhat, yum
2
515
1
https://stackoverflow.com/questions/69390109/installing-php-on-a-server-where-httpd-was-installed-by-source
69,289,892
Install Rmpfr package in RStudio on remote server
I often use R / RStudio that is located on a remote server. Unfortunately, I do not have root / sudo / administrator access on this server, so any installations need to be done only in ~/. I know that Rmpfr relies on libmpfr4 and mpfr.h. I have mpfr and gmp installed on my local folder (~/), but it seems R / RStudio only knows to look in /usr. I have tried the following command: install.packages("Rmpfr", type = "source", configure.args = c("--with-mpfr-include=~/include", "--with-mpfr-lib=~/lib")) and I got the following error: In file included from Ops.c:12:0: Rmpfr_utils.h:22:18: fatal error: mpfr.h: No such file or directory #include <mpfr.h> ^ compilation terminated. make: *** [Ops.o] Error 1 ERROR: compilation failed for package ‘Rmpfr’ I confirmed that mpfr.h is in the "~/include" folder and libmpfr.a is in the "~/lib" folder. I also have all of the following in my .bash_profile: export PATH="~/mpfr-4.1.0/src/:$PATH" export PATH="~/bin/:$PATH" export PATH="~/gmp-6.2.1/:$PATH" I'm pretty far out of my element with the more technical Linux / command line stuff, but I'm trying to learn and use Google / manuals as much as possible. My server runs a redhat distro. For some reason, apt-get doesn't seem to be a valid command on my server, yum requires root access that I don't have, and I can't figure out how to install homebrew for any brew solutions. Please let me know if I need to provide any more information. Thanks in advance.
Install Rmpfr package in RStudio on remote server I often use R / RStudio that is located on a remote server. Unfortunately, I do not have root / sudo / administrator access on this server, so any installations need to be done only in ~/. I know that Rmpfr relies on libmpfr4 and mpfr.h. I have mpfr and gmp installed on my local folder (~/), but it seems R / RStudio only knows to look in /usr. I have tried the following command: install.packages("Rmpfr", type = "source", configure.args = c("--with-mpfr-include=~/include", "--with-mpfr-lib=~/lib")) and I got the following error: In file included from Ops.c:12:0: Rmpfr_utils.h:22:18: fatal error: mpfr.h: No such file or directory #include <mpfr.h> ^ compilation terminated. make: *** [Ops.o] Error 1 ERROR: compilation failed for package ‘Rmpfr’ I confirmed that mpfr.h is in the "~/include" folder and libmpfr.a is in the "~/lib" folder. I also have all of the following in my .bash_profile: export PATH="~/mpfr-4.1.0/src/:$PATH" export PATH="~/bin/:$PATH" export PATH="~/gmp-6.2.1/:$PATH" I'm pretty far out of my element with the more technical Linux / command line stuff, but I'm trying to learn and use Google / manuals as much as possible. My server runs a redhat distro. For some reason, apt-get doesn't seem to be a valid command on my server, yum requires root access that I don't have, and I can't figure out how to install homebrew for any brew solutions. Please let me know if I need to provide any more information. Thanks in advance.
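One frequent stumbling block with flags like --with-mpfr-include=~/include is that the ~ may never be expanded by the time configure sees it, so spelling out $HOME (or the absolute path) removes that doubt. A sketch of passing the paths explicitly from the command line, plus an optional ~/.R/Makevars entry that can help the compile and link stages find the private GMP/MPFR; the tarball version is an assumption:

mkdir -p ~/.R
cat >> ~/.R/Makevars <<'EOF'
CPPFLAGS += -I$(HOME)/include
LDFLAGS += -L$(HOME)/lib -Wl,-rpath,$(HOME)/lib
EOF
R CMD INSTALL --configure-args="--with-mpfr-include=$HOME/include --with-mpfr-lib=$HOME/lib" Rmpfr_0.8-9.tar.gz

The rpath entry matters on a machine without root, since the shared libmpfr in ~/lib has to be found again at load time, not just at link time.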
r, redhat, rstudio-server, mpfr
2
279
0
https://stackoverflow.com/questions/69289892/install-rmpfr-package-in-rstudio-on-remote-server
67,586,680
Transient time service already exists when restarting Gitlab on podman instance
On RHEL 8, I have a problem when restarting a GitLab instance on Podman. Everything works fine, but running the command sudo podman restart gitlab-server produces an error: ERRO[0011] Failed to start transient timer unit: Unit 28e595d7d0812cd0e5e772db55d02d137c4179fcd4aa0527162d28b22d169ee3.service already exists. When I list all services, I can see the above service with "load failed" status. The error does not cause any functional problems, but it is quite strange. Thank you for any advice.
Transient time service already exists when restarting Gitlab on podman instance On RHEL 8, I have a problem when restarting a GitLab instance on Podman. Everything works fine, but running the command sudo podman restart gitlab-server produces an error: ERRO[0011] Failed to start transient timer unit: Unit 28e595d7d0812cd0e5e772db55d02d137c4179fcd4aa0527162d28b22d169ee3.service already exists. When I list all services, I can see the above service with "load failed" status. The error does not cause any functional problems, but it is quite strange. Thank you for any advice.
gitlab, redhat, podman, rhel8
2
1,452
0
https://stackoverflow.com/questions/67586680/transient-time-service-already-exists-when-restarting-gitlab-on-podman-instance
66,679,592
VS Code freezing after latest update
I am using VS code on Red Hat Linux and it is freezing after opening. The only option I have is to forcefully quit. It was working fine until last week. I was told by the admin that this problem was caused due to a recent update. What can I do?
VS Code freezing after latest update I am using VS code on Red Hat Linux and it is freezing after opening. The only option I have is to forcefully quit. It was working fine until last week. I was told by the admin that this problem was caused due to a recent update. What can I do?
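A standard triage step that needs no admin rights is to launch with extensions disabled and verbose logging, and, if that stays responsive, to try a throwaway profile; both options are regular VS Code command-line flags (the profile path below is arbitrary):

code --verbose --disable-extensions
code --user-data-dir /tmp/vscode-test-profile

If the clean profile works, the freeze most likely comes from an extension or cached workspace state rather than the editor update itself, which narrows down what to ask the admin to roll back.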
linux, visual-studio-code, redhat
2
801
1
https://stackoverflow.com/questions/66679592/vs-code-freezing-after-latest-update
66,417,872
opm pruning all the versions of the catalog from the image
I am pruning an opm index down to the custom catalog that I require, but when I do that I get all the versions of the operator image. How do I get only the latest version and not all of them? ./opm index prune \ -f registry.redhat.io/redhat/redhat-operator-index:v4.6 \ -p openshift-pipelines-operator-rh \ --generate Result: openshift-pipelines-operator.v1.0.1 openshift-pipelines-operator.v1.1.1 redhat-openshift-pipelines-operator.v1.1.2 redhat-openshift-pipelines-operator.v1.2.0 redhat-openshift-pipelines-operator.v1.2.1 redhat-openshift-pipelines-operator.v1.2.2 redhat-openshift-pipelines-operator.v1.2.3 But what I want is only the latest version of each, that is: openshift-pipelines-operator.v1.1.1 redhat-openshift-pipelines-operator.v1.2.3
opm pruning all the versions of the catalog from the image I am pruning an opm index down to the custom catalog that I require, but when I do that I get all the versions of the operator image. How do I get only the latest version and not all of them? ./opm index prune \ -f registry.redhat.io/redhat/redhat-operator-index:v4.6 \ -p openshift-pipelines-operator-rh \ --generate Result: openshift-pipelines-operator.v1.0.1 openshift-pipelines-operator.v1.1.1 redhat-openshift-pipelines-operator.v1.1.2 redhat-openshift-pipelines-operator.v1.2.0 redhat-openshift-pipelines-operator.v1.2.1 redhat-openshift-pipelines-operator.v1.2.2 redhat-openshift-pipelines-operator.v1.2.3 But what I want is only the latest version of each, that is: openshift-pipelines-operator.v1.1.1 redhat-openshift-pipelines-operator.v1.2.3
openshift, redhat
2
222
0
https://stackoverflow.com/questions/66417872/opm-pruning-all-the-version-of-the-catalog-from-the-image
64,047,348
Issue with C++ macros on Red Hat Enterprise Linux (RHEL) using CPPCHECK
At my employment we are working on a large C++ project on Red Hat Enterprise Linux (RHEL) 6, soon to be RHEL 8, with the Bash shell. We sometimes use NetBeans for editing source code, but I prefer to use vim. We are doing DevOps and Agile with two-week sprints, and using the Jenkins build engine with AccuRev for source control. Every time a code change is promoted in AccuRev, Jenkins automatically starts a new build of the code base. As part of that build, CPPCHECK is used to do static code analysis on the C++ source code. In part of our system, we are using C++ macros to define unit test scripts. The macros are not fully defined, since we are allowing the unit test script developer to customize them for doing unit tests. This system works fine with no error at compile time with the g++ compiler, and there is no error at run time either. However, when Jenkins does a build and uses CPPCHECK to analyze the code, it generates error-id: unknownMacro text: There is an unknown macro here somewhere. Configuration is required. If SCRIPT is a macro then please configure it. Here is an example of the C++ code we are using to complete a partially defined C++ macro: SCRIPT(SampleScript) BODY() { cout << "SampleScript running." << endl; } END_SCRIPT() SCRIPT, BODY, and END_SCRIPT are C++ macros listed in an include file, but are not completely defined. On the GitHub site for CPPCHECK there is a supposed solution to this issue using the -I option, but I tried that and the missing-macro CPPCHECK errors are still occurring. This is the CPPCHECK command listed with its arguments, including the -I option, but so far this command is still generating the "unknownMacro" error. cppcheck \ -I ./* \ -j 4 \ --xml-version=2 \
Issue with C++ macros on Red Hat Enterprise Linux (RHEL) using CPPCHECK At my employment we are working on a large C++ project on Red Hat Enterprise Linux (RHEL) 6, soon to be RHEL 8, with the Bash shell. We sometimes use NetBeans for editing source code, but I prefer to use vim. We are doing DevOps and Agile with two-week sprints, and using the Jenkins build engine with AccuRev for source control. Every time a code change is promoted in AccuRev, Jenkins automatically starts a new build of the code base. As part of that build, CPPCHECK is used to do static code analysis on the C++ source code. In part of our system, we are using C++ macros to define unit test scripts. The macros are not fully defined, since we are allowing the unit test script developer to customize them for doing unit tests. This system works fine with no error at compile time with the g++ compiler, and there is no error at run time either. However, when Jenkins does a build and uses CPPCHECK to analyze the code, it generates error-id: unknownMacro text: There is an unknown macro here somewhere. Configuration is required. If SCRIPT is a macro then please configure it. Here is an example of the C++ code we are using to complete a partially defined C++ macro: SCRIPT(SampleScript) BODY() { cout << "SampleScript running." << endl; } END_SCRIPT() SCRIPT, BODY, and END_SCRIPT are C++ macros listed in an include file, but are not completely defined. On the GitHub site for CPPCHECK there is a supposed solution to this issue using the -I option, but I tried that and the missing-macro CPPCHECK errors are still occurring. This is the CPPCHECK command listed with its arguments, including the -I option, but so far this command is still generating the "unknownMacro" error. cppcheck \ -I ./* \ -j 4 \ --xml-version=2 \
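One detail worth double-checking in that command: -I expects a directory, and the shell expands ./* to a list of files before cppcheck ever sees it. A sketch of pointing cppcheck at the real include directory and force-including the header that declares SCRIPT/BODY/END_SCRIPT; the directory and header names here are stand-ins for whatever the project actually uses:

cppcheck \
  -I include \
  --include=include/unit_test_macros.h \
  -j 4 \
  --xml-version=2 \
  src/

Because the macros are only partially defined, force-including the header (or, alternatively, defining the macros on the command line) gives cppcheck's preprocessor something concrete to expand, which is what the unknownMacro diagnostic is asking for.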
c++, linux, jenkins, redhat, cppcheck
2
1,373
1
https://stackoverflow.com/questions/64047348/issue-with-c-macros-on-red-hat-enterprise-linux-rhel-using-cppcheck
63,122,810
pyexpat.cpython-35m-x86_64-linux-gnu.so: undefined symbol: XML_SetHashSalt
I am seeing same following error when I try to run the following: pip virtualenv Looks like Python3.5 is installed using softwarecollections. I dont have root access and using service account. bash-4.2$ scl --list rh-python35 bash-4.2$ scl --list rh-python35 rh-python35-python-setuptools-18.0.1-2.el7.noarch rh-python35-runtime-2.0-2.el7.x86_64 rh-python35-python-libs-3.5.1-11.el7.x86_64 rh-python35-python-devel-3.5.1-11.el7.x86_64 rh-python35-python-pip-7.1.0-2.el7.noarch rh-python35-python-virtualenv-13.1.2-2.el7.noarch rh-python35-2.0-2.el7.x86_64 rh-python35-python-3.5.1-11.el7.x86_64 rh-python35-python-sqlalchemy-1.0.11-1.el7.x86_64 bash-4.2$ ls easy_install pip3 pydoc3 python3 python3.5m python3-config pyvenv-3.5 easy_install-3.5 pip3.5 pydoc3.5 python3.5 python3.5m-config python-config virtualenv pip pydoc python python3.5-config python3.5m-x86_64-config pyvenv virtualenv-3.5 bash-4.2$ which python /opt/rh/rh-python35/root/usr/bin/python bash-4.2$ source scl_source enable rh-python35 --> runs fine **bash-4.2$ pip install --user pipenv** Traceback (most recent call last): File "/opt/rh/rh-python35/root/usr/bin/pip", line 7, in <module> from pip import main File "/opt/rh/rh-python35/root/usr/lib/python3.5/site-packages/pip/__init__.py", line 12, in <module> from pip.utils import get_installed_distributions, get_prog File "/opt/rh/rh-python35/root/usr/lib/python3.5/site-packages/pip/utils/__init__.py", line 23, in <module> from pip._vendor import pkg_resources File "/opt/rh/rh-python35/root/usr/lib/python3.5/site-packages/pip/_vendor/pkg_resources/__init__.py", line 36, in <module> import plistlib File "/opt/rh/rh-python35/root/usr/lib64/python3.5/plistlib.py", line 65, in <module> from xml.parsers.expat import ParserCreate File "/opt/rh/rh-python35/root/usr/lib64/python3.5/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: /opt/rh/rh-python35/root/usr/lib64/python3.5/lib-dynload/pyexpat.cpython-35m-x86_64-linux-gnu.so: undefined symbol: XML_SetHashSalt bash-4.2$ virtualenv Traceback (most recent call last): File "/opt/rh/rh-python35/root/usr/bin/virtualenv", line 5, in <module> from pkg_resources import load_entry_point File "/opt/rh/rh-python35/root/usr/lib/python3.5/site-packages/pkg_resources/__init__.py", line 36, in <module> import plistlib File "/opt/rh/rh-python35/root/usr/lib64/python3.5/plistlib.py", line 65, in <module> from xml.parsers.expat import ParserCreate File "/opt/rh/rh-python35/root/usr/lib64/python3.5/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: /opt/rh/rh-python35/root/usr/lib64/python3.5/lib-dynload/pyexpat.cpython-35m-x86_64-linux-gnu.so: undefined symbol: XML_SetHashSalt bash-4.2$ ldd /opt/rh/rh-python35/root/usr/lib64/python3.5/lib-dynload/pyexpat.cpython-35m-x86_64-linux-gnu.so linux-vdso.so.1 => (0x00007ffebbd1f000) libexpat.so.1 => /opt/ORACLE/product/lib/libexpat.so.1 (0x00007f0b19142000) libpython3.5m.so.rh-python35-1.0 => /opt/rh/rh-python35/root/usr/lib64/libpython3.5m.so.rh-python35-1.0 (0x00007f0b18c73000) libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f0b18a57000) libc.so.6 => /lib64/libc.so.6 (0x00007f0b1868a000) libdl.so.2 => /lib64/libdl.so.2 (0x00007f0b18486000) libutil.so.1 => /lib64/libutil.so.1 (0x00007f0b18283000) libm.so.6 => /lib64/libm.so.6 (0x00007f0b17f81000) /lib64/ld-linux-x86-64.so.2 (0x00007f0b19573000) I appreciate everyone's feedback Thanks
pyexpat.cpython-35m-x86_64-linux-gnu.so: undefined symbol: XML_SetHashSalt I am seeing same following error when I try to run the following: pip virtualenv Looks like Python3.5 is installed using softwarecollections. I dont have root access and using service account. bash-4.2$ scl --list rh-python35 bash-4.2$ scl --list rh-python35 rh-python35-python-setuptools-18.0.1-2.el7.noarch rh-python35-runtime-2.0-2.el7.x86_64 rh-python35-python-libs-3.5.1-11.el7.x86_64 rh-python35-python-devel-3.5.1-11.el7.x86_64 rh-python35-python-pip-7.1.0-2.el7.noarch rh-python35-python-virtualenv-13.1.2-2.el7.noarch rh-python35-2.0-2.el7.x86_64 rh-python35-python-3.5.1-11.el7.x86_64 rh-python35-python-sqlalchemy-1.0.11-1.el7.x86_64 bash-4.2$ ls easy_install pip3 pydoc3 python3 python3.5m python3-config pyvenv-3.5 easy_install-3.5 pip3.5 pydoc3.5 python3.5 python3.5m-config python-config virtualenv pip pydoc python python3.5-config python3.5m-x86_64-config pyvenv virtualenv-3.5 bash-4.2$ which python /opt/rh/rh-python35/root/usr/bin/python bash-4.2$ source scl_source enable rh-python35 --> runs fine **bash-4.2$ pip install --user pipenv** Traceback (most recent call last): File "/opt/rh/rh-python35/root/usr/bin/pip", line 7, in <module> from pip import main File "/opt/rh/rh-python35/root/usr/lib/python3.5/site-packages/pip/__init__.py", line 12, in <module> from pip.utils import get_installed_distributions, get_prog File "/opt/rh/rh-python35/root/usr/lib/python3.5/site-packages/pip/utils/__init__.py", line 23, in <module> from pip._vendor import pkg_resources File "/opt/rh/rh-python35/root/usr/lib/python3.5/site-packages/pip/_vendor/pkg_resources/__init__.py", line 36, in <module> import plistlib File "/opt/rh/rh-python35/root/usr/lib64/python3.5/plistlib.py", line 65, in <module> from xml.parsers.expat import ParserCreate File "/opt/rh/rh-python35/root/usr/lib64/python3.5/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: /opt/rh/rh-python35/root/usr/lib64/python3.5/lib-dynload/pyexpat.cpython-35m-x86_64-linux-gnu.so: undefined symbol: XML_SetHashSalt bash-4.2$ virtualenv Traceback (most recent call last): File "/opt/rh/rh-python35/root/usr/bin/virtualenv", line 5, in <module> from pkg_resources import load_entry_point File "/opt/rh/rh-python35/root/usr/lib/python3.5/site-packages/pkg_resources/__init__.py", line 36, in <module> import plistlib File "/opt/rh/rh-python35/root/usr/lib64/python3.5/plistlib.py", line 65, in <module> from xml.parsers.expat import ParserCreate File "/opt/rh/rh-python35/root/usr/lib64/python3.5/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: /opt/rh/rh-python35/root/usr/lib64/python3.5/lib-dynload/pyexpat.cpython-35m-x86_64-linux-gnu.so: undefined symbol: XML_SetHashSalt bash-4.2$ ldd /opt/rh/rh-python35/root/usr/lib64/python3.5/lib-dynload/pyexpat.cpython-35m-x86_64-linux-gnu.so linux-vdso.so.1 => (0x00007ffebbd1f000) libexpat.so.1 => /opt/ORACLE/product/lib/libexpat.so.1 (0x00007f0b19142000) libpython3.5m.so.rh-python35-1.0 => /opt/rh/rh-python35/root/usr/lib64/libpython3.5m.so.rh-python35-1.0 (0x00007f0b18c73000) libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f0b18a57000) libc.so.6 => /lib64/libc.so.6 (0x00007f0b1868a000) libdl.so.2 => /lib64/libdl.so.2 (0x00007f0b18486000) libutil.so.1 => /lib64/libutil.so.1 (0x00007f0b18283000) libm.so.6 => /lib64/libm.so.6 (0x00007f0b17f81000) /lib64/ld-linux-x86-64.so.2 (0x00007f0b19573000) I appreciate everyone's feedback Thanks
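The ldd output above shows libexpat.so.1 being resolved from /opt/ORACLE/product/lib rather than the system copy, which is one plausible explanation for the missing XML_SetHashSalt symbol (it was only added in later expat releases). A low-risk way to test that hypothesis for a single shell:

ldd /opt/rh/rh-python35/root/usr/lib64/python3.5/lib-dynload/pyexpat.cpython-35m-x86_64-linux-gnu.so | grep expat
LD_PRELOAD=/usr/lib64/libexpat.so.1 python3 -c "import xml.parsers.expat; print('expat ok')"

If the import succeeds with the preload, the longer-term fix is to stop the Oracle lib directory from shadowing the system one (for example by reordering or trimming LD_LIBRARY_PATH in the service account's profile) rather than preloading everywhere.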
python, linux, redhat, software-collections
2
3,281
1
https://stackoverflow.com/questions/63122810/pyexpat-cpython-35m-x86-64-linux-gnu-so-undefined-symbol-xml-sethashsalt
62,600,948
PXE boot fail with kernel panic: Unable to mount root fs
I am having an issue on some servers since some time, and fail to find the issue. These are x86_64 server, with Intel Xeon, configured to boot in UEFI over network, through an iPXE rom. Kernel and initramfs are the ones from Centos 8 (tried 8.0 and 8.2). But when booting, I always end up with (on every servers, so should not be related to an hardware failure): [ 5.542304] hid-generic 0003:0557:2221.0002: input,hidraw1: USB HID v1.00 Keyboard [Winbond Electronics Corp Hermon USB hidmouse Device] on usb-0000:00:1a.0-1.3/input1 [ 5.599611] rtc_cmos 00:02: setting system clock to 2020-06-26 19:21:43 UTC (1593199303) [ 5.620965] md: Waiting for all devices to be available before autodetect [ 5.640580] md: If you don't use raid, use raid=noautodetect [ 5.659949] md: Autodetecting RAID arrays. [ 5.676869] md: autorun ... [ 5.691838] md: ... autorun DONE. [ 5.707883] List of all partitions: [ 5.723667] No filesystem could mount root, tried: [ 5.723667] [ 5.754724] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0) [ 5.775237] CPU: 9 PID: 1 Comm: swapper/0 Not tainted 4.18.0-193.6.3.el8_2.x86_64 #1 [ 5.815778] Call Trace: [ 5.830360] dump_stack+0x5c/0x80 [ 5.845920] panic+0xe7/0x2a9 [ 5.860995] mount_block_root+0x2c5/0x2e9 [ 5.877407] ? do_early_param+0x91/0x91 [ 5.892617] prepare_namespace+0x135/0x16b [ 5.907676] kernel_init_freeable+0x22e/0x258 [ 5.922607] ? rest_init+0xaa/0xaa [ 5.937398] kernel_init+0xa/0xff [ 5.950991] ret_from_fork+0x35/0x40 [ 5.964572] Kernel Offset: 0x2ca00000 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffffbfffffff) [ 6.000387] ---[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0) ]--- The iPXE boot script is: #!ipxe kernel [URL] initrd=initramfs-4.18.0-193.6.3.el8_2.x86_64.img selinux=0 rd.shell rd.debug root=live:[URL] rw console=tty0 console=ttyS1,115200 initrd [URL] boot Which generate this in the console: [URL] ok [URL] ok INTEL 0x6f080f70 MAC reset (081c0261/80280783 was 081c0261/80280783) INTEL 0x6f080f70 MAC reset (081c0261/80280783 was 081c0261/80280783) INTEL 0x6f081ab0 MAC reset (081c0261/80280787 was 081c0261/80280787) [ 0.000000] Linux version 4.18.0-193.6.3.el8_2.x86_64 (mockbuild@kbuilder.bsys.centos.org) (gcc version 8.3.1 20191121 (Red Hat 8.3.1-5) (GCC)) #1 SMP Wed Jun 10 11:09:32 UTC 2020 [ 0.000000] Command line: vmlinuz-4.18.0-193.6.3.el8_2.x86_64 initrd=initramfs-4.18.0-193.6.3.el8_2.x86_64.img selinux=0 rd.shell rd.debug root=live:[URL] rw console=tty0 console=ttyS1,115200 ... On the server side, on apache logs I have: 10.10.2.1 - - [26/Jun/2020:19:55:42 +0200] "GET /vmlinuz-4.18.0-193.6.3.el8_2.x86_64 HTTP/1.1" 200 8913656 "-" "iPXE/1.0.0+" 10.10.2.1 - - [26/Jun/2020:19:55:42 +0200] "GET /initramfs-4.18.0-193.6.3.el8_2.x86_64.img HTTP/1.1" 200 53703611 "-" "iPXE/1.0.0+" So it seems to be working perfectly. This is diskless boot here, but whatever I try (kickstart diskfull install, or even kernel+initrd alone) I always end up to this kernel panic... I tried to reset BIOS settings, try to boot in legacy/pcbios instead of UEFI, tried to desactivate sata disks, etc. Always this same error. Also tried to use kernel+initrd from Centos ISO (checked checksum), tried to use the ones from my management node. Nothing. Do I miss something obvious? Does any of you have an idea or already faced this kind of issue? Many thanks in advance :-) With my best regards Beuk
PXE boot fail with kernel panic: Unable to mount root fs I am having an issue on some servers since some time, and fail to find the issue. These are x86_64 server, with Intel Xeon, configured to boot in UEFI over network, through an iPXE rom. Kernel and initramfs are the ones from Centos 8 (tried 8.0 and 8.2). But when booting, I always end up with (on every servers, so should not be related to an hardware failure): [ 5.542304] hid-generic 0003:0557:2221.0002: input,hidraw1: USB HID v1.00 Keyboard [Winbond Electronics Corp Hermon USB hidmouse Device] on usb-0000:00:1a.0-1.3/input1 [ 5.599611] rtc_cmos 00:02: setting system clock to 2020-06-26 19:21:43 UTC (1593199303) [ 5.620965] md: Waiting for all devices to be available before autodetect [ 5.640580] md: If you don't use raid, use raid=noautodetect [ 5.659949] md: Autodetecting RAID arrays. [ 5.676869] md: autorun ... [ 5.691838] md: ... autorun DONE. [ 5.707883] List of all partitions: [ 5.723667] No filesystem could mount root, tried: [ 5.723667] [ 5.754724] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0) [ 5.775237] CPU: 9 PID: 1 Comm: swapper/0 Not tainted 4.18.0-193.6.3.el8_2.x86_64 #1 [ 5.815778] Call Trace: [ 5.830360] dump_stack+0x5c/0x80 [ 5.845920] panic+0xe7/0x2a9 [ 5.860995] mount_block_root+0x2c5/0x2e9 [ 5.877407] ? do_early_param+0x91/0x91 [ 5.892617] prepare_namespace+0x135/0x16b [ 5.907676] kernel_init_freeable+0x22e/0x258 [ 5.922607] ? rest_init+0xaa/0xaa [ 5.937398] kernel_init+0xa/0xff [ 5.950991] ret_from_fork+0x35/0x40 [ 5.964572] Kernel Offset: 0x2ca00000 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffffbfffffff) [ 6.000387] ---[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0) ]--- The iPXE boot script is: #!ipxe kernel [URL] initrd=initramfs-4.18.0-193.6.3.el8_2.x86_64.img selinux=0 rd.shell rd.debug root=live:[URL] rw console=tty0 console=ttyS1,115200 initrd [URL] boot Which generate this in the console: [URL] ok [URL] ok INTEL 0x6f080f70 MAC reset (081c0261/80280783 was 081c0261/80280783) INTEL 0x6f080f70 MAC reset (081c0261/80280783 was 081c0261/80280783) INTEL 0x6f081ab0 MAC reset (081c0261/80280787 was 081c0261/80280787) [ 0.000000] Linux version 4.18.0-193.6.3.el8_2.x86_64 (mockbuild@kbuilder.bsys.centos.org) (gcc version 8.3.1 20191121 (Red Hat 8.3.1-5) (GCC)) #1 SMP Wed Jun 10 11:09:32 UTC 2020 [ 0.000000] Command line: vmlinuz-4.18.0-193.6.3.el8_2.x86_64 initrd=initramfs-4.18.0-193.6.3.el8_2.x86_64.img selinux=0 rd.shell rd.debug root=live:[URL] rw console=tty0 console=ttyS1,115200 ... On the server side, on apache logs I have: 10.10.2.1 - - [26/Jun/2020:19:55:42 +0200] "GET /vmlinuz-4.18.0-193.6.3.el8_2.x86_64 HTTP/1.1" 200 8913656 "-" "iPXE/1.0.0+" 10.10.2.1 - - [26/Jun/2020:19:55:42 +0200] "GET /initramfs-4.18.0-193.6.3.el8_2.x86_64.img HTTP/1.1" 200 53703611 "-" "iPXE/1.0.0+" So it seems to be working perfectly. This is diskless boot here, but whatever I try (kickstart diskfull install, or even kernel+initrd alone) I always end up to this kernel panic... I tried to reset BIOS settings, try to boot in legacy/pcbios instead of UEFI, tried to desactivate sata disks, etc. Always this same error. Also tried to use kernel+initrd from Centos ISO (checked checksum), tried to use the ones from my management node. Nothing. Do I miss something obvious? Does any of you have an idea or already faced this kind of issue? Many thanks in advance :-) With my best regards Beuk
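The panic pattern above, an empty "List of all partitions:" straight after the md autodetect messages, is what typically shows up when the kernel ends up trying to mount root directly instead of running the initramfs, so one low-risk first step is to verify the served artifacts themselves before digging further into iPXE or dracut:

file vmlinuz-4.18.0-193.6.3.el8_2.x86_64
lsinitrd initramfs-4.18.0-193.6.3.el8_2.x86_64.img | head
sha256sum vmlinuz-4.18.0-193.6.3.el8_2.x86_64 initramfs-4.18.0-193.6.3.el8_2.x86_64.img

If lsinitrd cannot read the image, or the checksums differ from the copies on the management node, the problem is in how the files were produced or copied rather than in the boot script. Note also that root=live:... generally needs an initramfs built with dracut's livenet/dmsquash-live modules; a plain host initramfs from /boot will not know what to do with that root= syntax.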
linux-kernel, centos, kernel, redhat, ipxe
2
7,442
0
https://stackoverflow.com/questions/62600948/pxe-boot-fail-with-kernel-panic-unable-to-mount-root-fs
62,065,136
Hashicorp Vault 307 redirect
I have already set up a Docker container running one instance but I don't understand what is different with my new installation and why I cannot connect. I currently get a 307 redirect. Here is my config file: { "listener": [{ "tcp": { "address": "0.0.0.0:8200", "tls_cert_file": "/vault/config/certs/vault_crt.pem", "tls_key_file": "/vault/config/certs/vault_key.pem", "tls_disable": 0 } }], "storage": { "file": { "path": "/vault/data" } }, "disable_mlock": true, "ui": true } I originally set the container up with tls disabled and ran operator init. I then enabled tls and restarted it. I've gone back into the container and set the VAULT_ADDR= [URL] . The dns_name resolves to the servers IP on which the docker container is running. When I run vault status I get a timeout. I assume because when I try from outside the container I get a redirect. * About to connect() to <dns-name> port 8200 (#0) * Trying 172.31.168.219... * Connected to <dns-name> (172.31.168.219) port 8200 (#0) * Initializing NSS with certpath: sql:/etc/pki/nssdb * skipping SSL peer certificate verification * NSS: client certificate not found: ./vault_crt.pem * SSL connection using TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 * Server certificate: * subject: CN=<dns-name>,OU=Department,O=Company,L=City,ST=County,C=GB * start date: May 26 10:19:44 2020 GMT * expire date: May 25 10:19:44 2025 GMT * common name: <dns-name> * issuer: CN=Company Issuing CA2,OU=IT,O=Company,C=GB > GET / HTTP/1.1 > User-Agent: curl/7.29.0 > Host: <dns-name>:8200 > Accept: */* > < HTTP/1.1 307 Temporary Redirect HTTP/1.1 307 Temporary Redirect < Cache-Control: no-store Cache-Control: no-store < Content-Type: text/html; charset=utf-8 Content-Type: text/html; charset=utf-8 < Location: /ui/ Location: /ui/ < Date: Thu, 28 May 2020 12:42:18 GMT Date: Thu, 28 May 2020 12:42:18 GMT < Content-Length: 40 Content-Length: 40 < <a href="/ui/">Temporary Redirect</a>. * Connection #0 to host <dns-name> left intact Is this an SSL issue?
Hashicorp Vault 307 redirect I have already set up a Docker container running one instance but I don't understand what is different with my new installation and why I cannot connect. I currently get a 307 redirect. Here is my config file: { "listener": [{ "tcp": { "address": "0.0.0.0:8200", "tls_cert_file": "/vault/config/certs/vault_crt.pem", "tls_key_file": "/vault/config/certs/vault_key.pem", "tls_disable": 0 } }], "storage": { "file": { "path": "/vault/data" } }, "disable_mlock": true, "ui": true } I originally set the container up with tls disabled and ran operator init. I then enabled tls and restarted it. I've gone back into the container and set the VAULT_ADDR= [URL] . The dns_name resolves to the servers IP on which the docker container is running. When I run vault status I get a timeout. I assume because when I try from outside the container I get a redirect. * About to connect() to <dns-name> port 8200 (#0) * Trying 172.31.168.219... * Connected to <dns-name> (172.31.168.219) port 8200 (#0) * Initializing NSS with certpath: sql:/etc/pki/nssdb * skipping SSL peer certificate verification * NSS: client certificate not found: ./vault_crt.pem * SSL connection using TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 * Server certificate: * subject: CN=<dns-name>,OU=Department,O=Company,L=City,ST=County,C=GB * start date: May 26 10:19:44 2020 GMT * expire date: May 25 10:19:44 2025 GMT * common name: <dns-name> * issuer: CN=Company Issuing CA2,OU=IT,O=Company,C=GB > GET / HTTP/1.1 > User-Agent: curl/7.29.0 > Host: <dns-name>:8200 > Accept: */* > < HTTP/1.1 307 Temporary Redirect HTTP/1.1 307 Temporary Redirect < Cache-Control: no-store Cache-Control: no-store < Content-Type: text/html; charset=utf-8 Content-Type: text/html; charset=utf-8 < Location: /ui/ Location: /ui/ < Date: Thu, 28 May 2020 12:42:18 GMT Date: Thu, 28 May 2020 12:42:18 GMT < Content-Length: 40 Content-Length: 40 < <a href="/ui/">Temporary Redirect</a>. * Connection #0 to host <dns-name> left intact Is this an SSL issue?
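A 307 from / to /ui/ is what Vault normally returns when the web UI is enabled, so on its own it is not an error; probing the health endpoint and making sure the CLI has a full https address plus a CA it trusts usually separates TLS problems from routing ones (the CA path below is a placeholder for the company issuing-CA bundle):

curl -k https://<dns-name>:8200/v1/sys/health
export VAULT_ADDR=https://<dns-name>:8200
export VAULT_CACERT=/path/to/company-ca.pem   # or VAULT_SKIP_VERIFY=1 purely for testing
vault status

If vault status still times out while curl from the same host answers, the timeout is more likely a firewall or published-port issue on the Docker host than a Vault configuration problem.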
docker, ssl, redhat, hashicorp-vault
2
2,027
0
https://stackoverflow.com/questions/62065136/hashicorp-vault-307-redirect
60,936,459
How to create a dynamic storage class in OCP 3.11?
When I try to run a script for OCP 3.11 I get this error: "At least one dynamic storage class must be available in order to proceed." How can I create this dynamic storage class?
How to create a dynamic storage class in OCP 3.11? When I try to run a script for OCP 3.11 I get this error: "At least one dynamic storage class must be available in order to proceed." How can I create this dynamic storage class?
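That message usually means the cluster has no StorageClass capable of dynamic provisioning (or none marked as default). A minimal sketch for an AWS-backed OCP 3.11 cluster follows; the provisioner and parameters must match whatever storage backend the cluster actually has, so treat gp2/aws-ebs here as an illustrative assumption, not a prescription:

oc apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
EOF
oc get storageclass

The is-default-class annotation is what lets PVCs that do not name a class get provisioned automatically, which is typically what such setup scripts check for.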
kubernetes, openshift, redhat
2
152
0
https://stackoverflow.com/questions/60936459/how-to-create-a-dynamic-storage-class-in-ocp-3-11
60,892,598
problem installing mysqlclient in rhel python3.6
I'm trying to install apache-airflow[mysql]. Its failing when trying to install the mysqlclient dependency. I'm using rhel7. I have the python-devel and mysql-devel packages installed. I first tried installing using rh-python36. On reading some issues that it could be with the python environment I compiled another version from source. I also reinstalled mysql. gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -Dversion_info=(1,4,6,'final',0) -D__version__=1.4.6 -I/opt/rh/rh-mysql80/root/usr/include/mysql -I/u01/airflow-build-1.0/venv/include -I/usr/local/include/python3.7m -c MySQLdb/_mysql.c -o build/temp.linux-x86_64-3.7/MySQLdb/_mysql.o -m64 gcc -pthread -shared build/temp.linux-x86_64-3.7/MySQLdb/_mysql.o -L/opt/rh/rh-mysql80/root/usr/lib64/mysql -lmysqlclient -lpthread -lz -lm -lrt -lssl -lcrypto -ldl -o build/lib.linux-x86_64-3.7/MySQLdb/_mysql.cpython-37m-x86_64-linux-gnu.so /usr/bin/ld: cannot find -lmysqlclient collect2: error: ld returned 1 exit status error: command 'gcc' failed with exit status 1 ----------------------------------------
problem installing mysqlclient in rhel python3.6 I'm trying to install apache-airflow[mysql]. Its failing when trying to install the mysqlclient dependency. I'm using rhel7. I have the python-devel and mysql-devel packages installed. I first tried installing using rh-python36. On reading some issues that it could be with the python environment I compiled another version from source. I also reinstalled mysql. gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -Dversion_info=(1,4,6,'final',0) -D__version__=1.4.6 -I/opt/rh/rh-mysql80/root/usr/include/mysql -I/u01/airflow-build-1.0/venv/include -I/usr/local/include/python3.7m -c MySQLdb/_mysql.c -o build/temp.linux-x86_64-3.7/MySQLdb/_mysql.o -m64 gcc -pthread -shared build/temp.linux-x86_64-3.7/MySQLdb/_mysql.o -L/opt/rh/rh-mysql80/root/usr/lib64/mysql -lmysqlclient -lpthread -lz -lm -lrt -lssl -lcrypto -ldl -o build/lib.linux-x86_64-3.7/MySQLdb/_mysql.cpython-37m-x86_64-linux-gnu.so /usr/bin/ld: cannot find -lmysqlclient collect2: error: ld returned 1 exit status error: command 'gcc' failed with exit status 1 ----------------------------------------
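The link line already points at /opt/rh/rh-mysql80/root/usr/lib64/mysql, so the first thing worth confirming is whether an unversioned libmysqlclient.so actually exists anywhere under that SCL (it is usually delivered by a -devel sub-package); if it lives in a different directory, the build can be told about it explicitly:

find /opt/rh/rh-mysql80 -name 'libmysqlclient*' 2>/dev/null
yum list available 'rh-mysql80-mysql-devel'   # the devel package name is my assumption, adjust to what yum reports
export LIBRARY_PATH=/opt/rh/rh-mysql80/root/usr/lib64:$LIBRARY_PATH
pip install mysqlclient

LIBRARY_PATH is consulted by gcc when resolving -lmysqlclient at link time, so exporting it before the pip install is a lighter-weight alternative to editing the generated setup flags.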
python, airflow, redhat, mysql-python
2
145
0
https://stackoverflow.com/questions/60892598/problem-installing-mysqlclient-in-rhel-python3-6
60,702,049
Keycloak and Azure Active Directory with Spring Boot
What I've done till now: I have installed Keycloak (8.0.1) and configured it, and created a realm, clients, and users. I configured a couple of simple Spring Boot apps with Keycloak and SSO is working. What I am trying to achieve is the following: Keycloak should connect to Azure Active Directory, read the users from there (User Federation), and authenticate and authorise those users for the application. I have created an Azure Active Directory B2C tenant. I have gone through many links and read through the official Keycloak documentation but could not figure out the way to do it. Thanks in advance.
Keycloak and Azure Active Directory with Spring Boot What I've done till now: I have installed Keycloak (8.0.1) and configured it, and created a realm, clients, and users. I configured a couple of simple Spring Boot apps with Keycloak and SSO is working. What I am trying to achieve is the following: Keycloak should connect to Azure Active Directory, read the users from there (User Federation), and authenticate and authorise those users for the application. I have created an Azure Active Directory B2C tenant. I have gone through many links and read through the official Keycloak documentation but could not figure out the way to do it. Thanks in advance.
java, azure, redhat, keycloak
2
325
0
https://stackoverflow.com/questions/60702049/keycloack-and-azure-active-directory-with-spring-boot
58,853,642
GCC 8.3.1-3 on Red Hat fails on template class with static function using template parameter
When working with the devtoolset-8 on RedHat 7: [eftlab@49af022e5a7c git]$ which g++ /opt/rh/devtoolset-8/root/usr/bin/g++ [eftlab@49af022e5a7c git]$ /opt/rh/devtoolset-8/root/usr/bin/g++ --version g++ (GCC) 8.3.1 20190311 (Red Hat 8.3.1-3) Copyright (C) 2018 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. [eftlab@49af022e5a7c git]$ I've come accross a bug i'd like verified please: #include <iostream> #include <functional> template<class ParentT> class OpenSession { public: std::function<void()> Run() { auto parent = parent_; return [parent](std::function<void()> onSuccess, std::function<void()> onReject) mutable { std::function<void()> toRun = [](){}; RunCallback(parent, toRun); }; } void RunCallback(std::function<void()>& toRun) {} private: ParentT* parent_ = nullptr; static void RunCallback(ParentT* parent, std::function<void()>& toRun) {} }; int main(int, const char**) { return 0; } Compile this with g++ -std=c++17 test.cpp and I get the following error: test.cpp: In lambda function: test.cpp:13:32: error: 'this' was not captured for this lambda function RunCallback(parent, toRun); ^ It looks to me like in a template class when a lambda capture is checking its arguments it doesn't know that there are references to the template functions in here which leaves me renaming my method as even changing to OpenSession<ParentT>::RunCallback(parent, toRun); the compiler still doesn't see the static function i'm really referring to. Has anybody got any wonderful compiler flags that might tweak this or any other suggestions? Thanks
GCC 8.3.1-3 on Red Hat fails on template class with static function using template parameter When working with the devtoolset-8 on RedHat 7: [eftlab@49af022e5a7c git]$ which g++ /opt/rh/devtoolset-8/root/usr/bin/g++ [eftlab@49af022e5a7c git]$ /opt/rh/devtoolset-8/root/usr/bin/g++ --version g++ (GCC) 8.3.1 20190311 (Red Hat 8.3.1-3) Copyright (C) 2018 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. [eftlab@49af022e5a7c git]$ I've come accross a bug i'd like verified please: #include <iostream> #include <functional> template<class ParentT> class OpenSession { public: std::function<void()> Run() { auto parent = parent_; return [parent](std::function<void()> onSuccess, std::function<void()> onReject) mutable { std::function<void()> toRun = [](){}; RunCallback(parent, toRun); }; } void RunCallback(std::function<void()>& toRun) {} private: ParentT* parent_ = nullptr; static void RunCallback(ParentT* parent, std::function<void()>& toRun) {} }; int main(int, const char**) { return 0; } Compile this with g++ -std=c++17 test.cpp and I get the following error: test.cpp: In lambda function: test.cpp:13:32: error: 'this' was not captured for this lambda function RunCallback(parent, toRun); ^ It looks to me like in a template class when a lambda capture is checking its arguments it doesn't know that there are references to the template functions in here which leaves me renaming my method as even changing to OpenSession<ParentT>::RunCallback(parent, toRun); the compiler still doesn't see the static function i'm really referring to. Has anybody got any wonderful compiler flags that might tweak this or any other suggestions? Thanks
gcc, g++, redhat
2
86
0
https://stackoverflow.com/questions/58853642/gcc-8-3-1-3-on-red-hat-fails-on-template-class-with-static-function-using-templa
57,875,101
GLIBCXX_3.4.17 and GLIBC_2.16 not found on redhat 6.7
I have an account (not superuser) on Redhat 6.7 and it has Python 3.4.3 on it. The computer is offline. I installed TensorFlow 1.12 on it. When I want to import TensorFlow, I get the following error: Traceback (most recent call last): File "/home/r.jack/.local/lib/python3.4/site-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module> from tensorflow.python.pywrap_tensorflow_internal import * File "/home/r.jack/.local/lib/python3.4/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module> _pywrap_tensorflow_internal = swig_import_helper() File "/home/r.jack/.local/lib/python3.4/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description) File "/appl/open_tools/python/3.4.3/lib/python3.4/imp.py", line 243, in load_module return load_dynamic(name, filename, file) ImportError: /usr/lib64/libstdc++.so.6: version GLIBCXX_3.4.17' not found (required by /home/r.jack/.local/lib/python3.4/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/r.jack/.local/lib/python3.4/site-packages/tensorflow/__init__.py", line 24, in <module> from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import File "/home/r.jack/.local/lib/python3.4/site-packages/tensorflow/python/__init__.py", line 49, in <module> from tensorflow.python import pywrap_tensorflow File "/home/r.jack/.local/lib/python3.4/site-packages/tensorflow/python/pywrap_tensorflow.py", line 74, in <module> raise ImportError(msg) ImportError: Traceback (most recent call last): File "/home/r.jack/.local/lib/python3.4/site-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module> from tensorflow.python.pywrap_tensorflow_internal import * File "/home/r.jack/.local/lib/python3.4/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module> _pywrap_tensorflow_internal = swig_import_helper() File "/home/r.jack/.local/lib/python3.4/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description) File "/appl/open_tools/python/3.4.3/lib/python3.4/imp.py", line 243, in load_module return load_dynamic(name, filename, file) ImportError: /usr/lib64/libstdc++.so.6: version GLIBCXX_3.4.17' not found (required by /home/r.jack/.local/lib/python3.4/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so) Failed to load the native TensorFlow runtime. See [URL] I realized it might be because of gcc. So, I looked at the gcc versions available: gcc/4.8.1 gcc/4.9.0 gcc/5.2.0 gcc/6.1.0 gcc/4.8.5_rhel6 gcc/4.9.3 gcc/5.3.0 gcc/7.3.0 I loaded gcc/4.8.1. 
Now, I get a different error: Traceback (most recent call last): File "/home/r.jack/.local/lib/python3.4/site-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module> from tensorflow.python.pywrap_tensorflow_internal import * File "/home/r.jack/.local/lib/python3.4/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module> _pywrap_tensorflow_internal = swig_import_helper() File "/home/r.jack/.local/lib/python3.4/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description) File "/appl/open_tools/python/3.4.3/lib/python3.4/imp.py", line 243, in load_module return load_dynamic(name, filename, file) ImportError: /lib64/libc.so.6: version GLIBC_2.16' not found (required by /home/r.jack/.local/lib/python3.4/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/r.jack/.local/lib/python3.4/site-packages/tensorflow/__init__.py", line 24, in <module> from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import File "/home/r.jack/.local/lib/python3.4/site-packages/tensorflow/python/__init__.py", line 49, in <module> from tensorflow.python import pywrap_tensorflow File "/home/r.jack/.local/lib/python3.4/site-packages/tensorflow/python/pywrap_tensorflow.py", line 74, in <module> raise ImportError(msg) ImportError: Traceback (most recent call last): File "/home/r.jack/.local/lib/python3.4/site-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module> from tensorflow.python.pywrap_tensorflow_internal import * File "/home/r.jack/.local/lib/python3.4/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module> _pywrap_tensorflow_internal = swig_import_helper() File "/home/r.jack/.local/lib/python3.4/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description) File "/appl/open_tools/python/3.4.3/lib/python3.4/imp.py", line 243, in load_module return load_dynamic(name, filename, file) ImportError: /lib64/libc.so.6: version GLIBC_2.16' not found (required by /home/r.jack/.local/lib/python3.4/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so) Failed to load the native TensorFlow runtime. See [URL] How can I resolve this? One solution is that the system admin upgrades the RedHat from 6.7 to 7. But, they don't do that. Do I have to try an earlier version of TensorFlow? Or, an earlier version of python? (They have python 2, but I prefer not to use Python 2) What I tried after I got comments: I downloaded a new libc (libc6-2.16.90-3.x86_64.rpm) from [URL] and then used following commands to set the library directory and preload to the load the new libc from my user's home directory: export LD_LIBRARY_PATH=path_to_libc_so_6_dir:${LD_LIBRARY_PATH}. export LD_PRELOAD=libc.so.6 Then, I ran python, but I got the same error.
GLIBCXX_3.4.17 and GLIBC_2.16 not found on redhat 6.7 I have an account (not superuser) on Redhat 6.7 and it has Python 3.4.3 on it. The computer is offline. I installed TensorFlow 1.12 on it. When I want to import TensorFlow, I get the following error: Traceback (most recent call last): File "/home/r.jack/.local/lib/python3.4/site-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module> from tensorflow.python.pywrap_tensorflow_internal import * File "/home/r.jack/.local/lib/python3.4/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module> _pywrap_tensorflow_internal = swig_import_helper() File "/home/r.jack/.local/lib/python3.4/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description) File "/appl/open_tools/python/3.4.3/lib/python3.4/imp.py", line 243, in load_module return load_dynamic(name, filename, file) ImportError: /usr/lib64/libstdc++.so.6: version GLIBCXX_3.4.17' not found (required by /home/r.jack/.local/lib/python3.4/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/r.jack/.local/lib/python3.4/site-packages/tensorflow/__init__.py", line 24, in <module> from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import File "/home/r.jack/.local/lib/python3.4/site-packages/tensorflow/python/__init__.py", line 49, in <module> from tensorflow.python import pywrap_tensorflow File "/home/r.jack/.local/lib/python3.4/site-packages/tensorflow/python/pywrap_tensorflow.py", line 74, in <module> raise ImportError(msg) ImportError: Traceback (most recent call last): File "/home/r.jack/.local/lib/python3.4/site-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module> from tensorflow.python.pywrap_tensorflow_internal import * File "/home/r.jack/.local/lib/python3.4/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module> _pywrap_tensorflow_internal = swig_import_helper() File "/home/r.jack/.local/lib/python3.4/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description) File "/appl/open_tools/python/3.4.3/lib/python3.4/imp.py", line 243, in load_module return load_dynamic(name, filename, file) ImportError: /usr/lib64/libstdc++.so.6: version GLIBCXX_3.4.17' not found (required by /home/r.jack/.local/lib/python3.4/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so) Failed to load the native TensorFlow runtime. See [URL] I realized it might be because of gcc. So, I looked at the gcc versions available: gcc/4.8.1 gcc/4.9.0 gcc/5.2.0 gcc/6.1.0 gcc/4.8.5_rhel6 gcc/4.9.3 gcc/5.3.0 gcc/7.3.0 I loaded gcc/4.8.1. 
Now, I get a different error: Traceback (most recent call last): File "/home/r.jack/.local/lib/python3.4/site-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module> from tensorflow.python.pywrap_tensorflow_internal import * File "/home/r.jack/.local/lib/python3.4/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module> _pywrap_tensorflow_internal = swig_import_helper() File "/home/r.jack/.local/lib/python3.4/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description) File "/appl/open_tools/python/3.4.3/lib/python3.4/imp.py", line 243, in load_module return load_dynamic(name, filename, file) ImportError: /lib64/libc.so.6: version GLIBC_2.16' not found (required by /home/r.jack/.local/lib/python3.4/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/r.jack/.local/lib/python3.4/site-packages/tensorflow/__init__.py", line 24, in <module> from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import File "/home/r.jack/.local/lib/python3.4/site-packages/tensorflow/python/__init__.py", line 49, in <module> from tensorflow.python import pywrap_tensorflow File "/home/r.jack/.local/lib/python3.4/site-packages/tensorflow/python/pywrap_tensorflow.py", line 74, in <module> raise ImportError(msg) ImportError: Traceback (most recent call last): File "/home/r.jack/.local/lib/python3.4/site-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module> from tensorflow.python.pywrap_tensorflow_internal import * File "/home/r.jack/.local/lib/python3.4/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module> _pywrap_tensorflow_internal = swig_import_helper() File "/home/r.jack/.local/lib/python3.4/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description) File "/appl/open_tools/python/3.4.3/lib/python3.4/imp.py", line 243, in load_module return load_dynamic(name, filename, file) ImportError: /lib64/libc.so.6: version GLIBC_2.16' not found (required by /home/r.jack/.local/lib/python3.4/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so) Failed to load the native TensorFlow runtime. See [URL] How can I resolve this? One solution is that the system admin upgrades the RedHat from 6.7 to 7. But, they don't do that. Do I have to try an earlier version of TensorFlow? Or, an earlier version of python? (They have python 2, but I prefer not to use Python 2) What I tried after I got comments: I downloaded a new libc (libc6-2.16.90-3.x86_64.rpm) from [URL] and then used following commands to set the library directory and preload to the load the new libc from my user's home directory: export LD_LIBRARY_PATH=path_to_libc_so_6_dir:${LD_LIBRARY_PATH}. export LD_PRELOAD=libc.so.6 Then, I ran python, but I got the same error.
python, linux, tensorflow, redhat
2
192
0
https://stackoverflow.com/questions/57875101/glibcxx-3-4-17-and-glibc-2-16-not-found-on-redhat-6-7
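For context on the two tracebacks in the question above: the prebuilt TensorFlow 1.12 wheel was linked against a newer toolchain than RHEL 6.7 provides (RHEL 6 ships glibc 2.12 and a correspondingly old libstdc++), and loading a gcc module only changes the compiler used for new builds; at run time the extension still resolves /usr/lib64/libstdc++.so.6 and /lib64/libc.so.6. The GLIBCXX error can sometimes be worked around by pointing LD_LIBRARY_PATH at the libstdc++ bundled with a newer GCC, but the GLIBC_2.16 requirement cannot be met by preloading a lone libc.so.6, because libc and the dynamic loader (ld-linux) have to match. The following is a minimal C sketch, not taken from the question (file name and messages are illustrative), for confirming which glibc a process on that box actually resolves:

/* glibc_check.c: print the glibc version the process resolves at run time
 * versus the one the headers describe at compile time.
 * Build: gcc glibc_check.c -o glibc_check   (file name is illustrative) */
#include <stdio.h>
#include <gnu/libc-version.h>   /* glibc-specific: declares gnu_get_libc_version() */

int main(void)
{
    printf("compile-time glibc headers: %d.%d\n", __GLIBC__, __GLIBC_MINOR__);
    printf("run-time glibc            : %s\n", gnu_get_libc_version());
    /* The failing _pywrap_tensorflow_internal.so asks for GLIBC_2.16 and
     * GLIBCXX_3.4.17; on RHEL 6.7 this prints 2.12, which is the mismatch
     * behind both ImportErrors. */
    return 0;
}

If the run-time version printed here is older than 2.16, the commonly suggested routes are building TensorFlow from source on the target system or using a distribution of it whose binaries were built against the older glibc; swapping in a newer libc via LD_PRELOAD alone generally does not work.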
56,466,629
How do I switch user with the su command on a Unix box programmatically with the Jsch library?
I have a very basic Java program that is using Jsch library to automate execution of some shell scripts located in my Unix box. In order to automate execution of these shell scripts following steps need to be followed: Login into Unix box with user john Switch to another user simba Provide credentials of this new user using OutputStream bypassing the input prompt at cli and flush it using os.flush() Check that the user switch happened using whoami command I wrote following program to achieve this functionality but as you can see in the console output the user switch did NOT happen and the second whoami command gives me the same username (ie: john ) Another question that comes to my mind is what should I do for commands that do not have any output from cli such as su - , logout , bash etc. It doesn't make sense to wait for the output in that infinite while(true) loop when nothing is going to be returned. Please guide. Current Console Output Connecting SSH to my-unix-box.net - Please wait for few seconds... Connected! Executing command: whoami john Executing command: su - simba Setting suPasswd now.... Executing command: whoami john //WRONG: SHOULD BE "simba" Disconnected channel and session Process finished with exit code 0 SSHConn.java public class SSHConn { static Session session; static String[] commands = {"whoami", "su - simba", "whoami"}; public static void main(String[] args) throws Exception { open(); runCmd(commands); close(); } public static void runCmd(String[] commands) throws JSchException, IOException { for (String cmd : commands) { System.out.println("Executing command: " + cmd); Channel channel = session.openChannel("exec"); ((ChannelExec) channel).setCommand(cmd); InputStream in = channel.getInputStream(); OutputStream out = channel.getOutputStream(); channel.connect(); //passing creds only when you switch user if (cmd.startsWith("su -")) { System.out.println("Setting suPasswd now...."); out.write((Constants.suPasswd + "\n").getBytes()); out.flush(); } System.out.println("Flushed suPasswd to cli..."); //capture output that we receive from cli (note: some commands such as "su -" does not return anything) if (!cmd.startsWith("su -")) { captureCmdOutput(in, channel); } channel.setInputStream(null); channel.disconnect(); } } public static void captureCmdOutput(InputStream in, Channel channel) throws IOException { System.out.println("Capturing cmdOutput now..."); byte[] tmp = new byte[1024]; while (true) { System.out.println("in the while loop..."); while (in.available() > 0) { System.out.println("into the available loop..."); int i = in.read(tmp, 0, 1024); if (i < 0) { break; } System.out.print(new String(tmp, 0, i)); } if (channel.isClosed()) { break; } try { Thread.sleep(1000); } catch (Exception ee) { System.out.println(ee.getMessage()); } } System.out.println("Command output captured..."); } public static void open() throws JSchException { JSch jSch = new JSch(); session = jSch.getSession(Constants.userId, Constants.host, 22); Properties config = new Properties(); config.put("StrictHostKeyChecking", "no"); session.setConfig(config); session.setPassword(Constants.userPasswd); System.out.println("Connecting SSH to " + Constants.host + " - Please wait for few seconds... 
"); session.connect(); System.out.println("Connected!\n"); } public static void close() { session.disconnect(); System.out.println("\nDisconnected channel and session"); } } pom.xml <dependency> <groupId>com.jcraft</groupId> <artifactId>jsch</artifactId> <version>0.1.51</version> </dependency> su Command Usage: su [options] [-] [USER [arg]...] Change the effective user id and group id to that of USER. A mere - implies -l. If USER not given, assume root. Options: -m, -p, --preserve-environment do not reset environment variables -g, --group <group> specify the primary group -G, --supp-group <group> specify a supplemental group -, -l, --login make the shell a login shell -c, --command <command> pass a single command to the shell with -c --session-command <command> pass a single command to the shell with -c and do not create a new session -f, --fast pass -f to the shell (for csh or tcsh) -s, --shell <shell> run shell if /etc/shells allows it -h, --help display this help and exit -V, --version output version information and exit For more details see su(1).
How do I switch user with su command in Unix box programatically with Jsch libary? I have a very basic Java program that is using Jsch library to automate execution of some shell scripts located in my Unix box. In order to automate execution of these shell scripts following steps need to be followed: Login into Unix box with user john Switch to another user simba Provide credentials of this new user using OutputStream bypassing the input prompt at cli and flush it using os.flush() Check that the user switch happened using whoami command I wrote following program to achieve this functionality but as you can see in the console output the user switch did NOT happen and the second whoami command gives me the same username (ie: john ) Another question that comes to my mind is what should I do for commands that do not have any output from cli such as su - , logout , bash etc. It doesn't make sense to wait for the output in that infinite while(true) loop when nothing is going to be returned. Please guide. Current Console Output Connecting SSH to my-unix-box.net - Please wait for few seconds... Connected! Executing command: whoami john Executing command: su - simba Setting suPasswd now.... Executing command: whoami john //WRONG: SHOULD BE "simba" Disconnected channel and session Process finished with exit code 0 SSHConn.java public class SSHConn { static Session session; static String[] commands = {"whoami", "su - simba", "whoami"}; public static void main(String[] args) throws Exception { open(); runCmd(commands); close(); } public static void runCmd(String[] commands) throws JSchException, IOException { for (String cmd : commands) { System.out.println("Executing command: " + cmd); Channel channel = session.openChannel("exec"); ((ChannelExec) channel).setCommand(cmd); InputStream in = channel.getInputStream(); OutputStream out = channel.getOutputStream(); channel.connect(); //passing creds only when you switch user if (cmd.startsWith("su -")) { System.out.println("Setting suPasswd now...."); out.write((Constants.suPasswd + "\n").getBytes()); out.flush(); } System.out.println("Flushed suPasswd to cli..."); //capture output that we receive from cli (note: some commands such as "su -" does not return anything) if (!cmd.startsWith("su -")) { captureCmdOutput(in, channel); } channel.setInputStream(null); channel.disconnect(); } } public static void captureCmdOutput(InputStream in, Channel channel) throws IOException { System.out.println("Capturing cmdOutput now..."); byte[] tmp = new byte[1024]; while (true) { System.out.println("in the while loop..."); while (in.available() > 0) { System.out.println("into the available loop..."); int i = in.read(tmp, 0, 1024); if (i < 0) { break; } System.out.print(new String(tmp, 0, i)); } if (channel.isClosed()) { break; } try { Thread.sleep(1000); } catch (Exception ee) { System.out.println(ee.getMessage()); } } System.out.println("Command output captured..."); } public static void open() throws JSchException { JSch jSch = new JSch(); session = jSch.getSession(Constants.userId, Constants.host, 22); Properties config = new Properties(); config.put("StrictHostKeyChecking", "no"); session.setConfig(config); session.setPassword(Constants.userPasswd); System.out.println("Connecting SSH to " + Constants.host + " - Please wait for few seconds... 
"); session.connect(); System.out.println("Connected!\n"); } public static void close() { session.disconnect(); System.out.println("\nDisconnected channel and session"); } } pom.xml <dependency> <groupId>com.jcraft</groupId> <artifactId>jsch</artifactId> <version>0.1.51</version> </dependency> su Command Usage: su [options] [-] [USER [arg]...] Change the effective user id and group id to that of USER. A mere - implies -l. If USER not given, assume root. Options: -m, -p, --preserve-environment do not reset environment variables -g, --group <group> specify the primary group -G, --supp-group <group> specify a supplemental group -, -l, --login make the shell a login shell -c, --command <command> pass a single command to the shell with -c --session-command <command> pass a single command to the shell with -c and do not create a new session -f, --fast pass -f to the shell (for csh or tcsh) -s, --shell <shell> run shell if /etc/shells allows it -h, --help display this help and exit -V, --version output version information and exit For more details see su(1).
java, ssh, redhat, jsch
2
56
0
https://stackoverflow.com/questions/56466629/how-do-i-switch-user-with-su-command-in-unix-box-programatically-with-jsch-libar
56,440,559
Multiple domains/one Forest RHEL7 with SSSD and REALMD - cannot login to another domain
I have searched on stackoverflow but did not found a solution. I have two domains in one forest (domain1 and domain2). I can login with ssh using domain1 and cannot login with domain2. I can kinit a ticket from domain2. Here are some configs: [sssd] debug_level = 3 services = nss, pam config_file_version = 2 domains = DOMAIN1.TEST.NET, DOMAIN2.TEST.NET [domain/DOMAIN1.TEST.NET] debug_level = 3 override_homedir = /home/%u create_homedir = true override_gid = 100 default_shell = /bin/bash id_provider = ad auth_provider = ad access_provider = ad ldap_id_mapping = true ldap_schema = ad dyndns_update = false ad_gpo_access_control = disabled #ad_enabled_domains = DOMAIN1.TEST.NET, DOMAIN2.TEST.NET ldap_idmap_range_size = 1000000 subdomain_enumerate = all use_fully_qualified_names = false ad_domain = DOMAIN1.TEST.NET [domain/DOMAIN2.TEST.NET] debug_level = 10 override_homedir = /home/%u create_homedir = true override_gid = 100 default_shell = /bin/bash id_provider = ad auth_provider = ad access_provider = ad ldap_id_mapping = true ldap_schema = ad dyndns_update = false ad_gpo_access_control = disabled #ad_enabled_domains = DOMAIN1.TEST.NET, DOMAIN2.TEST.NET ldap_idmap_range_size = 1000000 subdomain_enumerate = all use_fully_qualified_names = false ad_domain = DOMAIN2.TEST.NET [nss] filter_users = root filter_groups = root In the realm list I see the both realms. With kinit from the domain2 I get the ticket. Realm join worked on domain2 with the user from domain1 and when I join he tells me I have already joined. The systemtctl status sssd throws me an error although I can login to the first domain. In the klist -k I see only KEYTAB from the Domain1 and cannot make it to have the domain2 in the keytab. sssd[ldap_child[18103]]][18103]: Failed to initialize credentials using keytab [MEMORY:/etc/krb5.keytab]: Client 'host/server01.domain1.test.net@ST...onnection. sssd_be[17222]: GSSAPI client step 1 ssd_be[17222]: GSSAPI client step 1 [be[DOMAIN1.TEST.NET]][17222]: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (Server not found in Kerberos database) There are also some sssd logs from the domain2. Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [fo_set_port_status] (0x0400): Marking port 389 of duplicate server 'atsvtroot1.domain2.test.net' as 'not working' (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [sdap_handle_release] (0x2000): Trace: sh[0x55feb6513de0], connected[1], ops[(nil)], ldap[0x55feb64b3e70], destructor_lock[0], release_memory[0] (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [remove_connection_callback] (0x4000): Successfully removed connection callback. (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [sdap_id_op_connect_done] (0x4000): attempting failover retry on op #1 (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [sdap_id_op_connect_step] (0x4000): beginning to connect (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [fo_resolve_service_send] (0x0100): Trying to resolve service 'AD' (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [get_server_status] (0x1000): Status of server 'atsvtroot2.domain2.test.net' is 'name resolved' (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [get_port_status] (0x1000): Port status of port 389 for server 'atsvtroot2.domain2.test.net' is 'not working' (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [get_port_status] (0x0080): SSSD is unable to complete the full connection request, this internal status does not necessarily indicate network port issues. 
(Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [get_server_status] (0x1000): Status of server 'atsvtroot1.domain2.test.net' is 'name resolved' (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [get_port_status] (0x1000): Port status of port 389 for server 'atsvtroot1.domain2.test.net' is 'not working' (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [get_port_status] (0x0080): SSSD is unable to complete the full connection request, this internal status does not necessarily indicate network port issues. (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [fo_resolve_service_send] (0x0020): No available servers for service 'AD' (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [sdap_id_release_conn_data] (0x4000): releasing unused connection (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [be_resolve_server_done] (0x1000): Server resolution failed: [5]: Input/output error (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [sdap_id_op_connect_done] (0x0020): Failed to connect, going offline (5 [Input/output error]) (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [be_mark_offline] (0x2000): Going offline! (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [be_mark_offline] (0x2000): Enable check_if_online_ptask. (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [be_ptask_enable] (0x0400): Task [Check if online (periodic)]: enabling task (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [be_ptask_schedule] (0x0400): Task [Check if online (periodic)]: scheduling task 67 seconds from now [1559627215] (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [be_run_offline_cb] (0x0080): Going offline. Running callbacks. (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [sdap_id_op_connect_done] (0x4000): notify offline to op #1 (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [ad_subdomains_refresh_connect_done] (0x0020): Unable to connect to LDAP [11]: Resource temporarily unavailable (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [ad_subdomains_refresh_connect_done] (0x0080): No AD server is available, cannot get the subdomain list while offline (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [be_ptask_done] (0x0040): Task [Subdomains Refresh]: failed with [1432158212]: SSSD is offline (Tue Jun 4 10:48:15 2019) [sssd[be[domain2.test.net]]] [be_ptask_execute] (0x0400): Task [Subdomains Refresh]: executing task, timeout 14400 seconds (Tue Jun 4 10:48:15 2019) [sssd[be[domain2.test.net]]] [fo_resolve_service_send] (0x0100): Trying to resolve service 'AD' (Tue Jun 4 10:48:15 2019) [sssd[be[domain2.test.net]]] [set_server_common_status] (0x0100): Marking server '10.51.51.222' as 'resolving name' (Tue Jun 4 10:48:15 2019) [sssd[be[domain2.test.net]]] [set_server_common_status] (0x0100): Marking server '10.x.x.x.' as 'name resolved' (Tue Jun 4 10:48:15 2019) [sssd[be[domain2.test.net]]] [be_resolve_server_process] (0x0200): Found address for server 10.x.x.x.x: [10.51.51.222] TTL 7200 (Tue Jun 4 10:48:15 2019) [sssd[be[domain2.test.net]]] [sssd_async_socket_init_send] (0x0400): Setting 6 seconds timeout for connecting (Tue Jun 4 10:48:15 2019) [sssd[be[domain2.test.net]]] [sdap_get_generic_ext_step] (0x0400): calling ldap_search_ext with [(objectclass=*)][]. 
(Tue Jun 4 10:48:15 2019) [sssd[be[domain2.test.net]]] [sdap_get_generic_op_finished] (0x0400): Search result: Success(0), no errmsg set (Tue Jun 4 10:48:15 2019) [sssd[be[domain2.test.net]]] [sdap_get_server_opts_from_rootdse] (0x0100): Setting AD compatibility level to [4] (Tue Jun 4 10:48:15 2019) [sssd[be[domain2.test.net]]] [sdap_get_server_opts_from_rootdse] (0x0100): Will look for schema at [CN=Schema,CN=Configuration,DC=domain1,DC=test,DC=net] (Tue Jun 4 10:48:15 2019) [sssd[be[domain2.test.net]]] [sdap_kinit_send] (0x0400): Attempting kinit (default, host/server01.domain1.test.net, domain1.test.net, 86400) (Tue Jun 4 10:48:15 2019) [sssd[be[domain2.test.net]]] [fo_resolve_service_send] (0x0100): Trying to resolve service 'AD' (Tue Jun 4 10:48:15 2019) [sssd[be[domain2.test.net]]] [be_resolve_server_process] (0x0200): Found address for server 10.x.x.x.x.: [10.x.x.x.x] TTL 7200 (Tue Jun 4 10:48:15 2019) [sssd[be[domain2.test.net]]] [create_tgt_req_send_buffer] (0x0400): buffer size: 68 (Tue Jun 4 10:48:15 2019) [sssd[be[domain2.test.net]]] [set_tgt_child_timeout] (0x0400): Setting 6 seconds timeout for TGT child (Tue Jun 4 10:48:15 2019) [sssd[be[domain2.test.net]]] [write_pipe_handler] (0x0400): All data has been sent! (Tue Jun 4 10:48:15 2019) [sssd[be[domain2.test.net]]] [child_sig_handler] (0x0100): child [18330] finished successfully. (Tue Jun 4 10:48:15 2019) [sssd[be[domain2.test.net]]] [read_pipe_handler] (0x0400): EOF received, client finished (Tue Jun 4 10:48:15 2019) [sssd[be[domain2.test.net]]] [sdap_get_tgt_recv] (0x0400): Child responded: 14 [Client 'host/server01.domain1.test.net@DOMAIN1.TEST.NET' not found in Kerberos database], expired on [0] (Tue Jun 4 10:48:15 2019) [sssd[be[domain2.test.net]]] [sdap_kinit_done] (0x0100): Could not get TGT: 14 [Bad address] (Tue Jun 4 10:48:15 2019) [sssd[be[domain2.test.net]]] [sdap_cli_kinit_done] (0x0400): Cannot get a TGT: ret [1432158226](Authentication Failed) (Tue Jun 4 10:48:15 2019) [sssd[be[domain2.test.net]]] [sdap_cli_connect_recv] (0x0040): Unable to establish connection [13]: Permission denied In the krb5.conf I have all the REALMs inside. What am I missing. Why cannot I login with SSH. Thanks in advance.
Multiple domains/one Forest RHEL7 with SSSD and REALMD - cannot login to another domain I have searched on stackoverflow but did not found a solution. I have two domains in one forest (domain1 and domain2). I can login with ssh using domain1 and cannot login with domain2. I can kinit a ticket from domain2. Here are some configs: [sssd] debug_level = 3 services = nss, pam config_file_version = 2 domains = DOMAIN1.TEST.NET, DOMAIN2.TEST.NET [domain/DOMAIN1.TEST.NET] debug_level = 3 override_homedir = /home/%u create_homedir = true override_gid = 100 default_shell = /bin/bash id_provider = ad auth_provider = ad access_provider = ad ldap_id_mapping = true ldap_schema = ad dyndns_update = false ad_gpo_access_control = disabled #ad_enabled_domains = DOMAIN1.TEST.NET, DOMAIN2.TEST.NET ldap_idmap_range_size = 1000000 subdomain_enumerate = all use_fully_qualified_names = false ad_domain = DOMAIN1.TEST.NET [domain/DOMAIN2.TEST.NET] debug_level = 10 override_homedir = /home/%u create_homedir = true override_gid = 100 default_shell = /bin/bash id_provider = ad auth_provider = ad access_provider = ad ldap_id_mapping = true ldap_schema = ad dyndns_update = false ad_gpo_access_control = disabled #ad_enabled_domains = DOMAIN1.TEST.NET, DOMAIN2.TEST.NET ldap_idmap_range_size = 1000000 subdomain_enumerate = all use_fully_qualified_names = false ad_domain = DOMAIN2.TEST.NET [nss] filter_users = root filter_groups = root In the realm list I see the both realms. With kinit from the domain2 I get the ticket. Realm join worked on domain2 with the user from domain1 and when I join he tells me I have already joined. The systemtctl status sssd throws me an error although I can login to the first domain. In the klist -k I see only KEYTAB from the Domain1 and cannot make it to have the domain2 in the keytab. sssd[ldap_child[18103]]][18103]: Failed to initialize credentials using keytab [MEMORY:/etc/krb5.keytab]: Client 'host/server01.domain1.test.net@ST...onnection. sssd_be[17222]: GSSAPI client step 1 ssd_be[17222]: GSSAPI client step 1 [be[DOMAIN1.TEST.NET]][17222]: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (Server not found in Kerberos database) There are also some sssd logs from the domain2. Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [fo_set_port_status] (0x0400): Marking port 389 of duplicate server 'atsvtroot1.domain2.test.net' as 'not working' (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [sdap_handle_release] (0x2000): Trace: sh[0x55feb6513de0], connected[1], ops[(nil)], ldap[0x55feb64b3e70], destructor_lock[0], release_memory[0] (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [remove_connection_callback] (0x4000): Successfully removed connection callback. 
(Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [sdap_id_op_connect_done] (0x4000): attempting failover retry on op #1 (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [sdap_id_op_connect_step] (0x4000): beginning to connect (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [fo_resolve_service_send] (0x0100): Trying to resolve service 'AD' (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [get_server_status] (0x1000): Status of server 'atsvtroot2.domain2.test.net' is 'name resolved' (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [get_port_status] (0x1000): Port status of port 389 for server 'atsvtroot2.domain2.test.net' is 'not working' (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [get_port_status] (0x0080): SSSD is unable to complete the full connection request, this internal status does not necessarily indicate network port issues. (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [get_server_status] (0x1000): Status of server 'atsvtroot1.domain2.test.net' is 'name resolved' (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [get_port_status] (0x1000): Port status of port 389 for server 'atsvtroot1.domain2.test.net' is 'not working' (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [get_port_status] (0x0080): SSSD is unable to complete the full connection request, this internal status does not necessarily indicate network port issues. (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [fo_resolve_service_send] (0x0020): No available servers for service 'AD' (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [sdap_id_release_conn_data] (0x4000): releasing unused connection (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [be_resolve_server_done] (0x1000): Server resolution failed: [5]: Input/output error (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [sdap_id_op_connect_done] (0x0020): Failed to connect, going offline (5 [Input/output error]) (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [be_mark_offline] (0x2000): Going offline! (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [be_mark_offline] (0x2000): Enable check_if_online_ptask. (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [be_ptask_enable] (0x0400): Task [Check if online (periodic)]: enabling task (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [be_ptask_schedule] (0x0400): Task [Check if online (periodic)]: scheduling task 67 seconds from now [1559627215] (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [be_run_offline_cb] (0x0080): Going offline. Running callbacks. 
(Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [sdap_id_op_connect_done] (0x4000): notify offline to op #1 (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [ad_subdomains_refresh_connect_done] (0x0020): Unable to connect to LDAP [11]: Resource temporarily unavailable (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [ad_subdomains_refresh_connect_done] (0x0080): No AD server is available, cannot get the subdomain list while offline (Tue Jun 4 07:45:48 2019) [sssd[be[DOMAIN2.TEST.NET]]] [be_ptask_done] (0x0040): Task [Subdomains Refresh]: failed with [1432158212]: SSSD is offline (Tue Jun 4 10:48:15 2019) [sssd[be[domain2.test.net]]] [be_ptask_execute] (0x0400): Task [Subdomains Refresh]: executing task, timeout 14400 seconds (Tue Jun 4 10:48:15 2019) [sssd[be[domain2.test.net]]] [fo_resolve_service_send] (0x0100): Trying to resolve service 'AD' (Tue Jun 4 10:48:15 2019) [sssd[be[domain2.test.net]]] [set_server_common_status] (0x0100): Marking server '10.51.51.222' as 'resolving name' (Tue Jun 4 10:48:15 2019) [sssd[be[domain2.test.net]]] [set_server_common_status] (0x0100): Marking server '10.x.x.x.' as 'name resolved' (Tue Jun 4 10:48:15 2019) [sssd[be[domain2.test.net]]] [be_resolve_server_process] (0x0200): Found address for server 10.x.x.x.x: [10.51.51.222] TTL 7200 (Tue Jun 4 10:48:15 2019) [sssd[be[domain2.test.net]]] [sssd_async_socket_init_send] (0x0400): Setting 6 seconds timeout for connecting (Tue Jun 4 10:48:15 2019) [sssd[be[domain2.test.net]]] [sdap_get_generic_ext_step] (0x0400): calling ldap_search_ext with [(objectclass=*)][]. (Tue Jun 4 10:48:15 2019) [sssd[be[domain2.test.net]]] [sdap_get_generic_op_finished] (0x0400): Search result: Success(0), no errmsg set (Tue Jun 4 10:48:15 2019) [sssd[be[domain2.test.net]]] [sdap_get_server_opts_from_rootdse] (0x0100): Setting AD compatibility level to [4] (Tue Jun 4 10:48:15 2019) [sssd[be[domain2.test.net]]] [sdap_get_server_opts_from_rootdse] (0x0100): Will look for schema at [CN=Schema,CN=Configuration,DC=domain1,DC=test,DC=net] (Tue Jun 4 10:48:15 2019) [sssd[be[domain2.test.net]]] [sdap_kinit_send] (0x0400): Attempting kinit (default, host/server01.domain1.test.net, domain1.test.net, 86400) (Tue Jun 4 10:48:15 2019) [sssd[be[domain2.test.net]]] [fo_resolve_service_send] (0x0100): Trying to resolve service 'AD' (Tue Jun 4 10:48:15 2019) [sssd[be[domain2.test.net]]] [be_resolve_server_process] (0x0200): Found address for server 10.x.x.x.x.: [10.x.x.x.x] TTL 7200 (Tue Jun 4 10:48:15 2019) [sssd[be[domain2.test.net]]] [create_tgt_req_send_buffer] (0x0400): buffer size: 68 (Tue Jun 4 10:48:15 2019) [sssd[be[domain2.test.net]]] [set_tgt_child_timeout] (0x0400): Setting 6 seconds timeout for TGT child (Tue Jun 4 10:48:15 2019) [sssd[be[domain2.test.net]]] [write_pipe_handler] (0x0400): All data has been sent! (Tue Jun 4 10:48:15 2019) [sssd[be[domain2.test.net]]] [child_sig_handler] (0x0100): child [18330] finished successfully. 
(Tue Jun 4 10:48:15 2019) [sssd[be[domain2.test.net]]] [read_pipe_handler] (0x0400): EOF received, client finished (Tue Jun 4 10:48:15 2019) [sssd[be[domain2.test.net]]] [sdap_get_tgt_recv] (0x0400): Child responded: 14 [Client 'host/server01.domain1.test.net@DOMAIN1.TEST.NET' not found in Kerberos database], expired on [0] (Tue Jun 4 10:48:15 2019) [sssd[be[domain2.test.net]]] [sdap_kinit_done] (0x0100): Could not get TGT: 14 [Bad address] (Tue Jun 4 10:48:15 2019) [sssd[be[domain2.test.net]]] [sdap_cli_kinit_done] (0x0400): Cannot get a TGT: ret [1432158226](Authentication Failed) (Tue Jun 4 10:48:15 2019) [sssd[be[domain2.test.net]]] [sdap_cli_connect_recv] (0x0040): Unable to establish connection [13]: Permission denied In the krb5.conf I have all the REALMs inside. What am I missing. Why cannot I login with SSH. Thanks in advance.
active-directory, redhat, sssd
2
5,446
1
https://stackoverflow.com/questions/56440559/multiple-domains-one-forest-rhel7-with-sssd-and-realmd-cannot-login-to-another
56,007,546
Install python-dev with no root permission
I am trying to install this software [URL] but it needs python-dev. However, I don't have root permission. I have tried several approaches, including creating a virtualenv, but there is no specific answer on how to install python-dev itself. My Linux system is Red Hat. Here is the command: python setup.py install --user Here is the error: pyBigWig.c:1:20: fatal error: Python.h: No such file or directory #include <Python.h> ^ compilation terminated. error: Setup script exited with error: command 'gcc' failed with exit status 1
Install python-dev with no root permission I am trying to install this software [URL] but it needs python-dev. However, I don't have root permission. I have tried several approaches, including creating a virtualenv, but there is no specific answer on how to install python-dev itself. My Linux system is Red Hat. Here is the command: python setup.py install --user Here is the error: pyBigWig.c:1:20: fatal error: Python.h: No such file or directory #include <Python.h> ^ compilation terminated. error: Setup script exited with error: command 'gcc' failed with exit status 1
python, redhat
2
312
0
https://stackoverflow.com/questions/56007546/install-python-dev-with-no-root-permission
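The gcc failure in the question above is simply the missing CPython header set; on Red Hat, Python.h normally ships in the matching python-devel package. A common root-free workaround is to unpack that RPM (for example with rpm2cpio file.rpm | cpio -idmv) or a locally compiled Python somewhere under $HOME, then expose the include directory to the build, since distutils generally honours CPPFLAGS/CFLAGS from the environment. The following is a small C check, with a placeholder include path that would need adjusting, to confirm the headers are usable before retrying python setup.py install --user:

/* check_python_h.c: sketch to verify that headers unpacked under $HOME are
 * usable without root.  The include path below is a placeholder, e.g.:
 *   gcc check_python_h.c -I$HOME/local/include/python3.4m -o check_python_h */
#include <Python.h>   /* the header the pyBigWig build could not find */
#include <stdio.h>

int main(void)
{
    /* PY_VERSION is a string macro from patchlevel.h, pulled in by Python.h */
    printf("Python.h found; headers describe Python %s\n", PY_VERSION);
    return 0;
}

If this compiles, exporting the same -I path through CPPFLAGS (and, for linking extensions, the matching library path) before rerunning the setup.py command is usually enough.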
55,280,211
Writing to page mapped dmas in kernel
I've been working on modifying the intel ixgbe kernel driver to function with my PCIe device (FPGA but that's not super important). The kernel and the PCIe device all negotiate quite well, configuration headers are passed along and communication seems to function. However attempting to write DMA_FROM_DEVICE I have a slight problem that I don't understand and I'm hoping for help. rx_ring->desc = dma_alloc_coherent(dev, ///This function allocates dma space of size size for handle dma on device dev with flag GFP KERNEL rx_ring->size, &rx_ring->dma, ///This dma handle may be cast to unsigned integer of the same bus width and given to dev as the DMA base address GFP_KERNEL); page = dev_alloc_pages(0); dma = dma_map_page(rx_ring->dev, page, 0, acc_rx_pg_size(rx_ring), DMA_FROM_DEVICE); //Writing to the PCI device the base address to place data into. writel(q_vector->adapter->rx_ring[0]->dma >> 32, q_vector->adapter->hw_region2.hw_addr+0x08+ACC_PCI_IPCONT_DATA_OFFSET); writel(q_vector->adapter->rx_ring[0]->dma & 0xFFFFFFFF, q_vector->adapter->hw_region2.hw_addr+0x0C+ACC_PCI_IPCONT_DATA_OFFSET); //This will perfectly read data I place onto the PCIe bus. rx_ring->desc->wb.upper.length //This seems to read some garbage memory. dma_sync_single_range_for_cpu(rx_ring->dev, rx_buffer->dma, rx_buffer->page_offset, acc_rx_bufsz(rx_ring), DMA_FROM_DEVICE); unsigned char *va = page_address(page) + rx_buffer->page_offset; memcpy(__skb_put(skb, size), va, ALIGN(size, sizeof(long))); //Some code later dma_sync_single_range_for_device(rx_ring->dev, new_buff->dma, new_buff->page_offset, acc_rx_bufsz(rx_ring), DMA_FROM_DEVICE); I've tried to purge code down to just the points of interest but here's the brief run down. I allocate space for the dma creating the virtual and bus address via the dma_alloc_coherent function. I create a page of memory for the dma and map this page to the dma via the dev_alloc_pages and dma_map_page commands. I pass the dma bus address to my PCIe device so it can write to the proper offset via the writel commands (I know iowrite32 but this is on redhat). From here there are 2 ways that the origonal ixgbe driver reads data from the PCIe bus. First it directly reads from the dma's allocated virtual address (desc), but this is only used for configuration information (in the driver I am working off of). The second method is via use page_address(page) to I believe get a virtual address for the page of memory. The problem is there is only garbage memory there. So here is my confusion. Where is page pointing to and how do I place data into page via the PCI bus? I assumed that dma_map_page would sort of merge the 2 virtual addresses into 1 so my write into the dma's bus address would collide into the page but this doesn't seem to be the case. What base address should my PCI device be writing from to align into this page of memory? I'm working on redhat, specifically Centos kernel version 3.10.0 which makes for some problems since redhat kernel is very different from base kernel but hopefully someone can help. Thank you for any pointers. EDIT: Added dma_sync calls which I forgot to include in original post. EDIT2: Added a more complete code base. As a note I'm still not including some of the struct definitions or top function calls (like probe for instance), but hopefully this will be a lot more complete. Sorry for how long it is. 
//These functions are called during configuration int acc_setup_rx_resources(struct acc_ring *rx_ring) { struct device *dev = rx_ring->dev; int orig_node = dev_to_node(dev); int numa_node = -1; int size; size = sizeof(struct acc_rx_buffer) * rx_ring->count; if (rx_ring->q_vector) numa_node = rx_ring->q_vector->numa_node; rx_ring->rx_buffer_info = vzalloc_node(size, numa_node); if (!rx_ring->rx_buffer_info) rx_ring->rx_buffer_info = vzalloc(size); if (!rx_ring->rx_buffer_info) goto err; /* Round up to nearest 4K */ rx_ring->size = rx_ring->count * sizeof(union acc_adv_rx_desc); rx_ring->size = ALIGN(rx_ring->size, 4096); set_dev_node(dev, numa_node); rx_ring->desc = dma_alloc_coherent(dev, rx_ring->size, &rx_ring->dma, GFP_KERNEL); set_dev_node(dev, orig_node); if (!rx_ring->desc) rx_ring->desc = dma_alloc_coherent(dev, rx_ring->size, &rx_ring->dma, GFP_KERNEL); if (!rx_ring->desc) goto err; rx_ring->next_to_clean = 0; rx_ring->next_to_use = 0; return 0; err: vfree(rx_ring->rx_buffer_info); rx_ring->rx_buffer_info = NULL; dev_err(dev, "Unable to allocate memory for the Rx descriptor ring\n"); return -ENOMEM; } static bool acc_alloc_mapped_page(struct acc_ring *rx_ring, struct acc_rx_buffer *bi) { struct page *page = bi->page; dma_addr_t dma = bi->dma; if (likely(page)) return true; page = dev_alloc_pages(0); if(unlikely(!page)){ rx_ring->rx_stats.alloc_rx_page_failed++; return false; } /* map page for use */ dma = dma_map_page(rx_ring->dev, page, 0, acc_rx_pg_size(rx_ring), DMA_FROM_DEVICE); if (dma_mapping_error(rx_ring->dev, dma)) { __free_pages(page, acc_rx_pg_order(rx_ring)); bi->page = NULL; rx_ring->rx_stats.alloc_rx_page_failed++; return false; } bi->dma = dma; bi->page = page; bi->page_offset = 0; page_ref_add(page, USHRT_MAX - 1); //This seems to exist in redhat kernel but not 3.10 base kernel... keep? return true; } void acc_alloc_rx_buffers(struct acc_ring *rx_ring, u16 cleaned_count) { union acc_adv_rx_desc *rx_desc; struct acc_rx_buffer *bi; u16 i = rx_ring->next_to_use; printk(KERN_INFO "acc Attempting to allocate rx buffers\n"); /* nothing to do */ if (!cleaned_count) return; rx_desc = ACC_RX_DESC(rx_ring, i); bi = &rx_ring->rx_buffer_info[i]; i -= rx_ring->count; do { if (!acc_alloc_mapped_page(rx_ring, bi)){ printk(KERN_INFO "acc Failed to allocate and map the page to dma\n"); break; } printk(KERN_INFO "acc happily allocated and mapped page to dma\n"); /* * Refresh the desc even if buffer_addrs didn't change * because each write-back erases this info. */ rx_desc->read.pkt_addr = cpu_to_le64(bi->dma + bi->page_offset); rx_desc++; bi++; ///Move to the next buffer i++; if (unlikely(!i)) { rx_desc = ACC_RX_DESC(rx_ring, 0); bi = rx_ring->rx_buffer_info; i -= rx_ring->count; } /* clear the hdr_addr for the next_to_use descriptor */ rx_desc->read.hdr_addr = 0; cleaned_count--; } while (cleaned_count); i += rx_ring->count; if (rx_ring->next_to_use != i) acc_release_rx_desc(rx_ring, i); } //This function is called via a napi_schedule command which fires when an MSI interrupt is thrown from my PCIe device (all works fine). 
int acc_poll(struct napi_struct *napi, int budget) { struct acc_q_vector *q_vector = container_of(napi, struct acc_q_vector, napi); struct acc_adapter *adapter = q_vector->adapter; struct acc_ring *ring; int per_ring_budget; bool clean_complete = true; e_dev_info("Landed in acc_poll\n"); e_dev_info("Attempting to read register space 0x00=%x\t0x04=%x\n", \ readl(q_vector->adapter->hw.hw_addr), readl(q_vector->adapter->hw.hw_addr+0x04)); e_dev_info("Attempting to write to pci ctl\n"); e_dev_info("Target address %.8x%.8x\n",q_vector->adapter->rx_ring[0]->dma >> 32, q_vector->adapter->rx_ring[0]->dma & 0xFFFFFFFF); e_dev_info("Attempted page address %.8x%.8x\n",virt_to_bus(page_address(q_vector->adapter->rx_ring[0]->rx_buffer_info[0].page)) >> 32, virt_to_bus(page_address(q_vector->adapter->rx_ring[0]->rx_buffer_info[0].page)) & 0xFFFFFFFF); writeq(0x0000000000000001, q_vector->adapter->hw_region2.hw_addr+ACC_PCI_IPCONT_DATA_OFFSET); //These are supposed to be iowrite64 but it seems iowrite64 is different in redhat and only supports the copy function (to,from,size). yay redhat think different. writel(q_vector->adapter->rx_ring[0]->dma >> 32, q_vector->adapter->hw_region2.hw_addr+0x08+ACC_PCI_IPCONT_DATA_OFFSET); writel(q_vector->adapter->rx_ring[0]->dma & 0xFFFFFFFF, q_vector->adapter->hw_region2.hw_addr+0x0C+ACC_PCI_IPCONT_DATA_OFFSET); writel(virt_to_bus(page_address(q_vector->adapter->rx_ring[0]->rx_buffer_info[0].page)) >> 32, q_vector->adapter->hw_region2.hw_addr+0x10+ACC_PCI_IPCONT_DATA_OFFSET); writel(virt_to_bus(page_address(q_vector->adapter->rx_ring[0]->rx_buffer_info[0].page)) & 0xFFFFFFFF, q_vector->adapter->hw_region2.hw_addr+0x14+ACC_PCI_IPCONT_DATA_OFFSET); writeq(0xFF00000000000000, q_vector->adapter->hw_region2.hw_addr+0x18+ACC_PCI_IPCONT_DATA_OFFSET); writeq(0x0000000CC0000000, q_vector->adapter->hw_region2.hw_addr+0x20+ACC_PCI_IPCONT_DATA_OFFSET); writeq(0x0000000CC0000000, q_vector->adapter->hw_region2.hw_addr+0x28+ACC_PCI_IPCONT_DATA_OFFSET); writeq(0x0003344000005500, q_vector->adapter->hw_region2.hw_addr+0x30+ACC_PCI_IPCONT_DATA_OFFSET); //Send the start command to the block writeq(0x0000000000000001, q_vector->adapter->hw_region2.hw_addr); acc_for_each_ring(ring, q_vector->tx) clean_complete &= !!acc_clean_tx_irq(q_vector, ring); if (q_vector->rx.count > 1) per_ring_budget = max(budget/q_vector->rx.count, 1); else per_ring_budget = budget; acc_for_each_ring(ring, q_vector->rx){ e_dev_info("Calling clean_rx_irq\n"); clean_complete &= acc_clean_rx_irq(q_vector, ring, per_ring_budget); } /* If all work not completed, return budget and keep polling */ if (!clean_complete) return budget; e_dev_info("Clean complete\n"); /* all work done, exit the polling mode */ napi_complete(napi); if (adapter->rx_itr_setting & 1) acc_set_itr(q_vector); if (!test_bit(__ACC_DOWN, &adapter->state)) acc_irq_enable_queues(adapter, ((u64)1 << q_vector->v_idx)); e_dev_info("Exiting acc_poll\n"); return 0; } static bool acc_clean_rx_irq(struct acc_q_vector *q_vector, struct acc_ring *rx_ring, const int budget) { printk(KERN_INFO "acc Entered clean_rx_irq\n"); unsigned int total_rx_bytes = 0, total_rx_packets = 0; u16 cleaned_count = acc_desc_unused(rx_ring); /// First pass this is count-1 because ntc and ntu are 0 so this is 512-1=511 printk(KERN_INFO "acc RX irq Clean count = %d\n", cleaned_count); do { union acc_adv_rx_desc *rx_desc; struct sk_buff *skb; /* return some buffers to hardware, one at a time is too slow */ if (cleaned_count >= ACC_RX_BUFFER_WRITE) { //When the clean count is >16 
allocate some more buffers to get the clean count down. First pass this happens. acc_alloc_rx_buffers(rx_ring, cleaned_count); cleaned_count = 0; } rx_desc = ACC_RX_DESC(rx_ring, rx_ring->next_to_clean); printk(KERN_INFO "acc inside RX do while, acquired description\n"); printk(KERN_INFO "acc Everything I can about the rx_ring desc (acc_rx_buffer). status_error=%d\t \ length=%d\n", rx_desc->wb.upper.status_error, rx_desc->wb.upper.length); if (!acc_test_staterr(rx_desc, ACC_RXD_STAT_DD)) break; printk(KERN_INFO "acc inside RX past status_error check\n"); /* * This memory barrier is needed to keep us from reading * any other fields out of the rx_desc until we know the * RXD_STAT_DD bit is set */ rmb(); /* retrieve a buffer from the ring */ skb = acc_fetch_rx_buffer(rx_ring, rx_desc); /* exit if we failed to retrieve a buffer */ if (!skb) break; printk(KERN_INFO "acc successfully retrieved a buffer\n"); cleaned_count++; /* place incomplete frames back on ring for completion */ if (acc_is_non_eop(rx_ring, rx_desc, skb)) continue; /* verify the packet layout is correct */ if (acc_cleanup_headers(rx_ring, rx_desc, skb)) continue; /* probably a little skewed due to removing CRC */ total_rx_bytes += skb->len; /* populate checksum, timestamp, VLAN, and protocol */ acc_process_skb_fields(rx_ring, rx_desc, skb); acc_rx_skb(q_vector, skb); ///I believe this sends data to the kernel network stuff and then the generic OS /* update budget accounting */ total_rx_packets++; } while (likely(total_rx_packets < budget)); printk(KERN_INFO "acc rx irq exited the while loop\n"); u64_stats_update_begin(&rx_ring->syncp); rx_ring->stats.packets += total_rx_packets; rx_ring->stats.bytes += total_rx_bytes; u64_stats_update_end(&rx_ring->syncp); q_vector->rx.total_packets += total_rx_packets; q_vector->rx.total_bytes += total_rx_bytes; if (cleaned_count) acc_alloc_rx_buffers(rx_ring, cleaned_count); printk(KERN_INFO "acc rx irq returning happily\n"); return (total_rx_packets < budget); } static struct sk_buff *acc_fetch_rx_buffer(struct acc_ring *rx_ring, union acc_adv_rx_desc *rx_desc) { struct acc_rx_buffer *rx_buffer; struct sk_buff *skb; struct page *page; printk(KERN_INFO "acc Attempting to fetch rx buffer\n"); rx_buffer = &rx_ring->rx_buffer_info[rx_ring->next_to_clean]; page = rx_buffer->page; //This page is set by I think acc_add_rx_frag... hard to tell. yes the page is created there and kind of linked to the dma via dma_map_page prefetchw(page); ///Prefetch the page cacheline for writing skb = rx_buffer->skb; ///This does the mapping between skb and dma page table I believe. if (likely(!skb)) { printk(KERN_INFO "acc attempting to allocate netdrv space for page.\n"); void *page_addr = page_address(page) + //get the virtual page address of this page. rx_buffer->page_offset; /* prefetch first cache line of first page */ prefetch(page_addr); #if L1_CACHE_BYTES < 128 prefetch(page_addr + L1_CACHE_BYTES); #endif /* allocate a skb to store the frags */ skb = netdev_alloc_skb_ip_align(rx_ring->netdev, ACC_RX_HDR_SIZE); if (unlikely(!skb)) { rx_ring->rx_stats.alloc_rx_buff_failed++; return NULL; } /* * we will be copying header into skb->data in * pskb_may_pull so it is in our interest to prefetch * it now to avoid a possible cache miss */ prefetchw(skb->data); /* * Delay unmapping of the first packet. It carries the * header information, HW may still access the header * after the writeback. 
Only unmap it when EOP is * reached */ if (likely((rx_desc, ACC_RXD_STAT_EOP))) goto dma_sync; ACC_CB(skb)->dma = rx_buffer->dma; } else { if (acc_test_staterr(rx_desc, ACC_RXD_STAT_EOP)) acc_dma_sync_frag(rx_ring, skb); dma_sync: /* we are reusing so sync this buffer for CPU use */ printk(KERN_INFO "acc attempting to sync the dma and the device.\n"); dma_sync_single_range_for_cpu(rx_ring->dev, //Sync to the pci device, this dma buffer, at this page offset, this ring, for device to DMA transfer rx_buffer->dma, rx_buffer->page_offset, acc_rx_bufsz(rx_ring), DMA_FROM_DEVICE); } /* pull page into skb */ if (acc_add_rx_frag(rx_ring, rx_buffer, rx_desc, skb)) { //This is again temporary to try and create blockers around the problem. return skb; /* hand second half of page back to the ring */ acc_reuse_rx_page(rx_ring, rx_buffer); } else if (ACC_CB(skb)->dma == rx_buffer->dma) { /* the page has been released from the ring */ ACC_CB(skb)->page_released = true; } else { /* we are not reusing the buffer so unmap it */ dma_unmap_page(rx_ring->dev, rx_buffer->dma, acc_rx_pg_size(rx_ring), DMA_FROM_DEVICE); } /* clear contents of buffer_info */ rx_buffer->skb = NULL; rx_buffer->dma = 0; rx_buffer->page = NULL; printk(KERN_INFO "acc returning from fetch_rx_buffer.\n"); return skb; } static bool acc_add_rx_frag(struct acc_ring *rx_ring, struct acc_rx_buffer *rx_buffer, union acc_adv_rx_desc *rx_desc, struct sk_buff *skb) { printk(KERN_INFO "acc Attempting to add rx_frag from page.\n"); struct page *page = rx_buffer->page; unsigned int size = le16_to_cpu(rx_desc->wb.upper.length); #if (PAGE_SIZE < 8192) unsigned int truesize = acc_rx_bufsz(rx_ring); #else unsigned int truesize = ALIGN(size, L1_CACHE_BYTES); unsigned int last_offset = acc_rx_pg_size(rx_ring) - acc_rx_bufsz(rx_ring); #endif if ((size <= ACC_RX_HDR_SIZE) && !skb_is_nonlinear(skb)) { printk(KERN_INFO "acc Inside the size check.\n"); unsigned char *va = page_address(page) + rx_buffer->page_offset; printk(KERN_INFO "page:%p\tpage_address:%p\tpage_offset:%d\n",page,page_address(page),rx_buffer->page_offset); printk(KERN_INFO "acc First 4 bytes of string:%x %x %x %x\n",va[0],va[1],va[2],va[3]); //FIXME: I can now read this page table but there is still no meaningful data in it. (appear to be reading garbage) printk(KERN_INFO "acc 32 bytes in:%x %x %x %x\n",va[32],va[33],va[34],va[35]); return true; memcpy(__skb_put(skb, size), va, ALIGN(size, sizeof(long))); /* we can reuse buffer as-is, just make sure it is local */ if (likely(page_to_nid(page) == numa_node_id())) return true; /* this page cannot be reused so discard it */ put_page(page); return false; } skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page, rx_buffer->page_offset, size, truesize); /* avoid re-using remote pages */ if (unlikely(page_to_nid(page) != numa_node_id())) return false; #if (PAGE_SIZE < 8192) /* if we are only owner of page we can reuse it */ if (unlikely(page_count(page) != 1)) return false; /* flip page offset to other buffer */ rx_buffer->page_offset ^= truesize; /* * since we are the only owner of the page and we need to * increment it, just set the value to 2 in order to avoid * an unecessary locked operation */ atomic_set(&page->_count, 2); #else /* move offset up to the next cache line */ rx_buffer->page_offset += truesize; if (rx_buffer->page_offset > last_offset) return false; /* bump ref count on page before it is given to the stack */ get_page(page); #endif return true; }
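One plausible explanation for the symptom described above (descriptor reads through rx_ring->desc look correct while page_address(page) shows garbage) is an addressing mix-up: rx_ring->dma from dma_alloc_coherent is the bus address of the descriptor ring only, whereas each packet buffer has its own bus address, the dma_addr_t returned by dma_map_page(), which is the same value the driver places in rx_desc->read.pkt_addr. If the FPGA writes payloads only to rx_ring->dma, the page never receives data. Also, virt_to_bus(page_address(page)) is not guaranteed to equal the mapped dma address and bypasses any IOMMU, so the buffer address handed to the device should come from dma_map_page itself. The following condensed C sketch (not the driver above; device and variable names are placeholders, assuming a single page and no IOMMU surprises) shows the intended relationship between the bus address and the CPU view of the same page:

/* Streaming-DMA rx buffer pattern, condensed for illustration. */
#include <linux/dma-mapping.h>
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/printk.h>
#include <linux/errno.h>

static int rx_buffer_example(struct device *dev)
{
    struct page *page;
    dma_addr_t dma;
    unsigned char *cpu_va;

    /* One page for rx data (the driver above uses dev_alloc_pages(0),
     * which serves the same role here). */
    page = alloc_page(GFP_KERNEL);
    if (!page)
        return -ENOMEM;

    /* Map it for device->CPU traffic; 'dma' is the bus address the PCIe
     * endpoint must use as its write target for this buffer. */
    dma = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE);
    if (dma_mapping_error(dev, dma)) {
        __free_pages(page, 0);
        return -ENOMEM;
    }

    /* ... hand 'dma' to the device (descriptor or BAR write) and let it DMA ... */

    /* Hand ownership back to the CPU before reading; on non-coherent or
     * bounce-buffered setups this is what makes the device's writes visible. */
    dma_sync_single_range_for_cpu(dev, dma, 0, PAGE_SIZE, DMA_FROM_DEVICE);

    /* page_address() is the kernel virtual alias of the same page, not a
     * second buffer, so data the device wrote to 'dma' appears here. */
    cpu_va = page_address(page);
    pr_info("rx bytes: %02x %02x %02x %02x\n",
            cpu_va[0], cpu_va[1], cpu_va[2], cpu_va[3]);

    /* Either return the buffer to the device for the next packet ... */
    dma_sync_single_range_for_device(dev, dma, 0, PAGE_SIZE, DMA_FROM_DEVICE);

    /* ... or tear the mapping down when finished with it. */
    dma_unmap_page(dev, dma, PAGE_SIZE, DMA_FROM_DEVICE);
    __free_pages(page, 0);
    return 0;
}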
Writing to page mapped dmas in kernel I've been working on modifying the intel ixgbe kernel driver to function with my PCIe device (FPGA but that's not super important). The kernel and the PCIe device negotiate quite well, configuration headers are passed along and communication seems to function. However, when attempting a DMA_FROM_DEVICE transfer I have a problem that I don't understand, and I'm hoping for help. rx_ring->desc = dma_alloc_coherent(dev, ///This function allocates dma space of size size for handle dma on device dev with flag GFP KERNEL rx_ring->size, &rx_ring->dma, ///This dma handle may be cast to unsigned integer of the same bus width and given to dev as the DMA base address GFP_KERNEL); page = dev_alloc_pages(0); dma = dma_map_page(rx_ring->dev, page, 0, acc_rx_pg_size(rx_ring), DMA_FROM_DEVICE); //Writing to the PCI device the base address to place data into. writel(q_vector->adapter->rx_ring[0]->dma >> 32, q_vector->adapter->hw_region2.hw_addr+0x08+ACC_PCI_IPCONT_DATA_OFFSET); writel(q_vector->adapter->rx_ring[0]->dma & 0xFFFFFFFF, q_vector->adapter->hw_region2.hw_addr+0x0C+ACC_PCI_IPCONT_DATA_OFFSET); //This will perfectly read data I place onto the PCIe bus. rx_ring->desc->wb.upper.length //This seems to read some garbage memory. dma_sync_single_range_for_cpu(rx_ring->dev, rx_buffer->dma, rx_buffer->page_offset, acc_rx_bufsz(rx_ring), DMA_FROM_DEVICE); unsigned char *va = page_address(page) + rx_buffer->page_offset; memcpy(__skb_put(skb, size), va, ALIGN(size, sizeof(long))); //Some code later dma_sync_single_range_for_device(rx_ring->dev, new_buff->dma, new_buff->page_offset, acc_rx_bufsz(rx_ring), DMA_FROM_DEVICE); I've tried to purge the code down to just the points of interest, but here's the brief rundown. I allocate space for the dma, creating the virtual and bus address via the dma_alloc_coherent function. I create a page of memory for the dma and map this page to the dma via the dev_alloc_pages and dma_map_page commands. I pass the dma bus address to my PCIe device so it can write to the proper offset via the writel commands (I know iowrite32, but this is on redhat). From here there are 2 ways that the original ixgbe driver reads data from the PCIe bus. First it directly reads from the dma's allocated virtual address (desc), but this is only used for configuration information (in the driver I am working off of). The second method uses page_address(page) to (I believe) get a virtual address for the page of memory. The problem is there is only garbage memory there. So here is my confusion. Where is page pointing to, and how do I place data into page via the PCI bus? I assumed that dma_map_page would sort of merge the 2 virtual addresses into 1, so my write into the dma's bus address would land in the page, but this doesn't seem to be the case. What base address should my PCI device be writing to in order to land in this page of memory? I'm working on redhat, specifically CentOS kernel version 3.10.0, which makes for some problems since the redhat kernel is very different from the base kernel, but hopefully someone can help. Thank you for any pointers. EDIT: Added dma_sync calls which I forgot to include in the original post. EDIT2: Added a more complete code base. As a note, I'm still not including some of the struct definitions or top function calls (like probe, for instance), but hopefully this will be a lot more complete. Sorry for how long it is.
//These functions are called during configuration int acc_setup_rx_resources(struct acc_ring *rx_ring) { struct device *dev = rx_ring->dev; int orig_node = dev_to_node(dev); int numa_node = -1; int size; size = sizeof(struct acc_rx_buffer) * rx_ring->count; if (rx_ring->q_vector) numa_node = rx_ring->q_vector->numa_node; rx_ring->rx_buffer_info = vzalloc_node(size, numa_node); if (!rx_ring->rx_buffer_info) rx_ring->rx_buffer_info = vzalloc(size); if (!rx_ring->rx_buffer_info) goto err; /* Round up to nearest 4K */ rx_ring->size = rx_ring->count * sizeof(union acc_adv_rx_desc); rx_ring->size = ALIGN(rx_ring->size, 4096); set_dev_node(dev, numa_node); rx_ring->desc = dma_alloc_coherent(dev, rx_ring->size, &rx_ring->dma, GFP_KERNEL); set_dev_node(dev, orig_node); if (!rx_ring->desc) rx_ring->desc = dma_alloc_coherent(dev, rx_ring->size, &rx_ring->dma, GFP_KERNEL); if (!rx_ring->desc) goto err; rx_ring->next_to_clean = 0; rx_ring->next_to_use = 0; return 0; err: vfree(rx_ring->rx_buffer_info); rx_ring->rx_buffer_info = NULL; dev_err(dev, "Unable to allocate memory for the Rx descriptor ring\n"); return -ENOMEM; } static bool acc_alloc_mapped_page(struct acc_ring *rx_ring, struct acc_rx_buffer *bi) { struct page *page = bi->page; dma_addr_t dma = bi->dma; if (likely(page)) return true; page = dev_alloc_pages(0); if(unlikely(!page)){ rx_ring->rx_stats.alloc_rx_page_failed++; return false; } /* map page for use */ dma = dma_map_page(rx_ring->dev, page, 0, acc_rx_pg_size(rx_ring), DMA_FROM_DEVICE); if (dma_mapping_error(rx_ring->dev, dma)) { __free_pages(page, acc_rx_pg_order(rx_ring)); bi->page = NULL; rx_ring->rx_stats.alloc_rx_page_failed++; return false; } bi->dma = dma; bi->page = page; bi->page_offset = 0; page_ref_add(page, USHRT_MAX - 1); //This seems to exist in redhat kernel but not 3.10 base kernel... keep? return true; } void acc_alloc_rx_buffers(struct acc_ring *rx_ring, u16 cleaned_count) { union acc_adv_rx_desc *rx_desc; struct acc_rx_buffer *bi; u16 i = rx_ring->next_to_use; printk(KERN_INFO "acc Attempting to allocate rx buffers\n"); /* nothing to do */ if (!cleaned_count) return; rx_desc = ACC_RX_DESC(rx_ring, i); bi = &rx_ring->rx_buffer_info[i]; i -= rx_ring->count; do { if (!acc_alloc_mapped_page(rx_ring, bi)){ printk(KERN_INFO "acc Failed to allocate and map the page to dma\n"); break; } printk(KERN_INFO "acc happily allocated and mapped page to dma\n"); /* * Refresh the desc even if buffer_addrs didn't change * because each write-back erases this info. */ rx_desc->read.pkt_addr = cpu_to_le64(bi->dma + bi->page_offset); rx_desc++; bi++; ///Move to the next buffer i++; if (unlikely(!i)) { rx_desc = ACC_RX_DESC(rx_ring, 0); bi = rx_ring->rx_buffer_info; i -= rx_ring->count; } /* clear the hdr_addr for the next_to_use descriptor */ rx_desc->read.hdr_addr = 0; cleaned_count--; } while (cleaned_count); i += rx_ring->count; if (rx_ring->next_to_use != i) acc_release_rx_desc(rx_ring, i); } //This function is called via a napi_schedule command which fires when an MSI interrupt is thrown from my PCIe device (all works fine). 
int acc_poll(struct napi_struct *napi, int budget) { struct acc_q_vector *q_vector = container_of(napi, struct acc_q_vector, napi); struct acc_adapter *adapter = q_vector->adapter; struct acc_ring *ring; int per_ring_budget; bool clean_complete = true; e_dev_info("Landed in acc_poll\n"); e_dev_info("Attempting to read register space 0x00=%x\t0x04=%x\n", \ readl(q_vector->adapter->hw.hw_addr), readl(q_vector->adapter->hw.hw_addr+0x04)); e_dev_info("Attempting to write to pci ctl\n"); e_dev_info("Target address %.8x%.8x\n",q_vector->adapter->rx_ring[0]->dma >> 32, q_vector->adapter->rx_ring[0]->dma & 0xFFFFFFFF); e_dev_info("Attempted page address %.8x%.8x\n",virt_to_bus(page_address(q_vector->adapter->rx_ring[0]->rx_buffer_info[0].page)) >> 32, virt_to_bus(page_address(q_vector->adapter->rx_ring[0]->rx_buffer_info[0].page)) & 0xFFFFFFFF); writeq(0x0000000000000001, q_vector->adapter->hw_region2.hw_addr+ACC_PCI_IPCONT_DATA_OFFSET); //These are supposed to be iowrite64 but it seems iowrite64 is different in redhat and only supports the copy function (to,from,size). yay redhat think different. writel(q_vector->adapter->rx_ring[0]->dma >> 32, q_vector->adapter->hw_region2.hw_addr+0x08+ACC_PCI_IPCONT_DATA_OFFSET); writel(q_vector->adapter->rx_ring[0]->dma & 0xFFFFFFFF, q_vector->adapter->hw_region2.hw_addr+0x0C+ACC_PCI_IPCONT_DATA_OFFSET); writel(virt_to_bus(page_address(q_vector->adapter->rx_ring[0]->rx_buffer_info[0].page)) >> 32, q_vector->adapter->hw_region2.hw_addr+0x10+ACC_PCI_IPCONT_DATA_OFFSET); writel(virt_to_bus(page_address(q_vector->adapter->rx_ring[0]->rx_buffer_info[0].page)) & 0xFFFFFFFF, q_vector->adapter->hw_region2.hw_addr+0x14+ACC_PCI_IPCONT_DATA_OFFSET); writeq(0xFF00000000000000, q_vector->adapter->hw_region2.hw_addr+0x18+ACC_PCI_IPCONT_DATA_OFFSET); writeq(0x0000000CC0000000, q_vector->adapter->hw_region2.hw_addr+0x20+ACC_PCI_IPCONT_DATA_OFFSET); writeq(0x0000000CC0000000, q_vector->adapter->hw_region2.hw_addr+0x28+ACC_PCI_IPCONT_DATA_OFFSET); writeq(0x0003344000005500, q_vector->adapter->hw_region2.hw_addr+0x30+ACC_PCI_IPCONT_DATA_OFFSET); //Send the start command to the block writeq(0x0000000000000001, q_vector->adapter->hw_region2.hw_addr); acc_for_each_ring(ring, q_vector->tx) clean_complete &= !!acc_clean_tx_irq(q_vector, ring); if (q_vector->rx.count > 1) per_ring_budget = max(budget/q_vector->rx.count, 1); else per_ring_budget = budget; acc_for_each_ring(ring, q_vector->rx){ e_dev_info("Calling clean_rx_irq\n"); clean_complete &= acc_clean_rx_irq(q_vector, ring, per_ring_budget); } /* If all work not completed, return budget and keep polling */ if (!clean_complete) return budget; e_dev_info("Clean complete\n"); /* all work done, exit the polling mode */ napi_complete(napi); if (adapter->rx_itr_setting & 1) acc_set_itr(q_vector); if (!test_bit(__ACC_DOWN, &adapter->state)) acc_irq_enable_queues(adapter, ((u64)1 << q_vector->v_idx)); e_dev_info("Exiting acc_poll\n"); return 0; } static bool acc_clean_rx_irq(struct acc_q_vector *q_vector, struct acc_ring *rx_ring, const int budget) { printk(KERN_INFO "acc Entered clean_rx_irq\n"); unsigned int total_rx_bytes = 0, total_rx_packets = 0; u16 cleaned_count = acc_desc_unused(rx_ring); /// First pass this is count-1 because ntc and ntu are 0 so this is 512-1=511 printk(KERN_INFO "acc RX irq Clean count = %d\n", cleaned_count); do { union acc_adv_rx_desc *rx_desc; struct sk_buff *skb; /* return some buffers to hardware, one at a time is too slow */ if (cleaned_count >= ACC_RX_BUFFER_WRITE) { //When the clean count is >16 
allocate some more buffers to get the clean count down. First pass this happens. acc_alloc_rx_buffers(rx_ring, cleaned_count); cleaned_count = 0; } rx_desc = ACC_RX_DESC(rx_ring, rx_ring->next_to_clean); printk(KERN_INFO "acc inside RX do while, acquired description\n"); printk(KERN_INFO "acc Everything I can about the rx_ring desc (acc_rx_buffer). status_error=%d\t \ length=%d\n", rx_desc->wb.upper.status_error, rx_desc->wb.upper.length); if (!acc_test_staterr(rx_desc, ACC_RXD_STAT_DD)) break; printk(KERN_INFO "acc inside RX past status_error check\n"); /* * This memory barrier is needed to keep us from reading * any other fields out of the rx_desc until we know the * RXD_STAT_DD bit is set */ rmb(); /* retrieve a buffer from the ring */ skb = acc_fetch_rx_buffer(rx_ring, rx_desc); /* exit if we failed to retrieve a buffer */ if (!skb) break; printk(KERN_INFO "acc successfully retrieved a buffer\n"); cleaned_count++; /* place incomplete frames back on ring for completion */ if (acc_is_non_eop(rx_ring, rx_desc, skb)) continue; /* verify the packet layout is correct */ if (acc_cleanup_headers(rx_ring, rx_desc, skb)) continue; /* probably a little skewed due to removing CRC */ total_rx_bytes += skb->len; /* populate checksum, timestamp, VLAN, and protocol */ acc_process_skb_fields(rx_ring, rx_desc, skb); acc_rx_skb(q_vector, skb); ///I believe this sends data to the kernel network stuff and then the generic OS /* update budget accounting */ total_rx_packets++; } while (likely(total_rx_packets < budget)); printk(KERN_INFO "acc rx irq exited the while loop\n"); u64_stats_update_begin(&rx_ring->syncp); rx_ring->stats.packets += total_rx_packets; rx_ring->stats.bytes += total_rx_bytes; u64_stats_update_end(&rx_ring->syncp); q_vector->rx.total_packets += total_rx_packets; q_vector->rx.total_bytes += total_rx_bytes; if (cleaned_count) acc_alloc_rx_buffers(rx_ring, cleaned_count); printk(KERN_INFO "acc rx irq returning happily\n"); return (total_rx_packets < budget); } static struct sk_buff *acc_fetch_rx_buffer(struct acc_ring *rx_ring, union acc_adv_rx_desc *rx_desc) { struct acc_rx_buffer *rx_buffer; struct sk_buff *skb; struct page *page; printk(KERN_INFO "acc Attempting to fetch rx buffer\n"); rx_buffer = &rx_ring->rx_buffer_info[rx_ring->next_to_clean]; page = rx_buffer->page; //This page is set by I think acc_add_rx_frag... hard to tell. yes the page is created there and kind of linked to the dma via dma_map_page prefetchw(page); ///Prefetch the page cacheline for writing skb = rx_buffer->skb; ///This does the mapping between skb and dma page table I believe. if (likely(!skb)) { printk(KERN_INFO "acc attempting to allocate netdrv space for page.\n"); void *page_addr = page_address(page) + //get the virtual page address of this page. rx_buffer->page_offset; /* prefetch first cache line of first page */ prefetch(page_addr); #if L1_CACHE_BYTES < 128 prefetch(page_addr + L1_CACHE_BYTES); #endif /* allocate a skb to store the frags */ skb = netdev_alloc_skb_ip_align(rx_ring->netdev, ACC_RX_HDR_SIZE); if (unlikely(!skb)) { rx_ring->rx_stats.alloc_rx_buff_failed++; return NULL; } /* * we will be copying header into skb->data in * pskb_may_pull so it is in our interest to prefetch * it now to avoid a possible cache miss */ prefetchw(skb->data); /* * Delay unmapping of the first packet. It carries the * header information, HW may still access the header * after the writeback. 
Only unmap it when EOP is * reached */ if (likely((rx_desc, ACC_RXD_STAT_EOP))) goto dma_sync; ACC_CB(skb)->dma = rx_buffer->dma; } else { if (acc_test_staterr(rx_desc, ACC_RXD_STAT_EOP)) acc_dma_sync_frag(rx_ring, skb); dma_sync: /* we are reusing so sync this buffer for CPU use */ printk(KERN_INFO "acc attempting to sync the dma and the device.\n"); dma_sync_single_range_for_cpu(rx_ring->dev, //Sync to the pci device, this dma buffer, at this page offset, this ring, for device to DMA transfer rx_buffer->dma, rx_buffer->page_offset, acc_rx_bufsz(rx_ring), DMA_FROM_DEVICE); } /* pull page into skb */ if (acc_add_rx_frag(rx_ring, rx_buffer, rx_desc, skb)) { //This is again temporary to try and create blockers around the problem. return skb; /* hand second half of page back to the ring */ acc_reuse_rx_page(rx_ring, rx_buffer); } else if (ACC_CB(skb)->dma == rx_buffer->dma) { /* the page has been released from the ring */ ACC_CB(skb)->page_released = true; } else { /* we are not reusing the buffer so unmap it */ dma_unmap_page(rx_ring->dev, rx_buffer->dma, acc_rx_pg_size(rx_ring), DMA_FROM_DEVICE); } /* clear contents of buffer_info */ rx_buffer->skb = NULL; rx_buffer->dma = 0; rx_buffer->page = NULL; printk(KERN_INFO "acc returning from fetch_rx_buffer.\n"); return skb; } static bool acc_add_rx_frag(struct acc_ring *rx_ring, struct acc_rx_buffer *rx_buffer, union acc_adv_rx_desc *rx_desc, struct sk_buff *skb) { printk(KERN_INFO "acc Attempting to add rx_frag from page.\n"); struct page *page = rx_buffer->page; unsigned int size = le16_to_cpu(rx_desc->wb.upper.length); #if (PAGE_SIZE < 8192) unsigned int truesize = acc_rx_bufsz(rx_ring); #else unsigned int truesize = ALIGN(size, L1_CACHE_BYTES); unsigned int last_offset = acc_rx_pg_size(rx_ring) - acc_rx_bufsz(rx_ring); #endif if ((size <= ACC_RX_HDR_SIZE) && !skb_is_nonlinear(skb)) { printk(KERN_INFO "acc Inside the size check.\n"); unsigned char *va = page_address(page) + rx_buffer->page_offset; printk(KERN_INFO "page:%p\tpage_address:%p\tpage_offset:%d\n",page,page_address(page),rx_buffer->page_offset); printk(KERN_INFO "acc First 4 bytes of string:%x %x %x %x\n",va[0],va[1],va[2],va[3]); //FIXME: I can now read this page table but there is still no meaningful data in it. (appear to be reading garbage) printk(KERN_INFO "acc 32 bytes in:%x %x %x %x\n",va[32],va[33],va[34],va[35]); return true; memcpy(__skb_put(skb, size), va, ALIGN(size, sizeof(long))); /* we can reuse buffer as-is, just make sure it is local */ if (likely(page_to_nid(page) == numa_node_id())) return true; /* this page cannot be reused so discard it */ put_page(page); return false; } skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page, rx_buffer->page_offset, size, truesize); /* avoid re-using remote pages */ if (unlikely(page_to_nid(page) != numa_node_id())) return false; #if (PAGE_SIZE < 8192) /* if we are only owner of page we can reuse it */ if (unlikely(page_count(page) != 1)) return false; /* flip page offset to other buffer */ rx_buffer->page_offset ^= truesize; /* * since we are the only owner of the page and we need to * increment it, just set the value to 2 in order to avoid * an unecessary locked operation */ atomic_set(&page->_count, 2); #else /* move offset up to the next cache line */ rx_buffer->page_offset += truesize; if (rx_buffer->page_offset > last_offset) return false; /* bump ref count on page before it is given to the stack */ get_page(page); #endif return true; }
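For illustration, a minimal sketch of the usual streaming-DMA receive pattern — not the posted driver; the register offsets (0x10/0x14) and function names here are hypothetical. The point it shows, following the kernel DMA-API documentation, is that the address programmed into the device for a receive page should be the dma_addr_t returned by dma_map_page() for that page (rx_buffer->dma in the code above), not virt_to_bus(page_address(page)) and not the coherent descriptor-ring handle, and that dma_sync_single_range_for_cpu()/_for_device() pass ownership of that same handle back and forth:

#include <linux/dma-mapping.h>
#include <linux/skbuff.h>   /* dev_alloc_pages(), mirroring the posted code */
#include <linux/mm.h>       /* page_address(), PAGE_SIZE */
#include <linux/io.h>       /* writel() */
#include <linux/kernel.h>   /* upper_32_bits()/lower_32_bits() */

static int give_rx_page_to_device(struct device *dev, void __iomem *bar,
                                  struct page **pagep, dma_addr_t *dmap)
{
        struct page *page = dev_alloc_pages(0);   /* one order-0 page */
        dma_addr_t dma;

        if (!page)
                return -ENOMEM;

        /* 'dma' is the bus/IOVA address the device must target for this page */
        dma = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE);
        if (dma_mapping_error(dev, dma)) {
                __free_pages(page, 0);
                return -ENOMEM;
        }

        /* hypothetical 64-bit "RX buffer address" register pair on the FPGA */
        writel(upper_32_bits(dma), bar + 0x10);
        writel(lower_32_bits(dma), bar + 0x14);

        *pagep = page;
        *dmap = dma;
        return 0;
}

static void cpu_reads_rx_page(struct device *dev, struct page *page, dma_addr_t dma)
{
        unsigned char *va;

        /* hand the buffer back to the CPU before looking at the data */
        dma_sync_single_range_for_cpu(dev, dma, 0, PAGE_SIZE, DMA_FROM_DEVICE);

        va = page_address(page);
        pr_info("first bytes: %02x %02x %02x %02x\n", va[0], va[1], va[2], va[3]);

        /* ... and back to the device before it is allowed to write again */
        dma_sync_single_range_for_device(dev, dma, 0, PAGE_SIZE, DMA_FROM_DEVICE);
}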
c, linux-kernel, kernel, redhat, vivado
2
936
0
https://stackoverflow.com/questions/55280211/writing-to-page-mapped-dmas-in-kernel
55,095,072
Openshift 3.11 logging to external ElasticSearch instance
I have an external ElasticSearch instance that I'd like Fluentd and Kibana to leverage accordingly in OSE 3.11. The ES instance is insecure at the moment, as this is simply a internal pilot. Based on the OSE docs here ( [URL] ), I should be able to update a number of ES_* variables accordingly in the ElasticSearch deployment config. The first issue is, the variables referenced in the docs don't exist in the ElasticSearch deployment config. Secondly, I tried updating these values via the inventory file. For example, for the property openshift_logging_es_host , the description claims: The name of the Elasticsearch service where Fluentd should send logs. These were the values in my inventory file: openshift_logging_install_logging=true openshift_logging_es_ops_nodeselector={'node-role.kubernetes.io/infra':'true'} openshift_logging_es_nodeselector={'node-role.kubernetes.io/infra':'true'} openshift_logging_es_host='169.xx.xxx.xx' openshift_logging_es_port='9200' openshift_logging_es_ops_host='169.xx.xxx.xx' openshift_logging_es_ops_port='9200' openshift_logging_kibana_env_vars={'ELASTICSEARCH_URL':'[URL] openshift_logging_es_ca=none openshift_logging_es_client_cert=none openshift_logging_es_client_key=none openshift_logging_es_ops_ca=none openshift_logging_es_ops_client_cert=none openshift_logging_es_ops_client_key=none The only variable above that seems to stick after uninstall/install of logging is openshift_logging_kibana_env_vars. I'm not sure why the others weren't respected - perhaps I'm missing one that triggers use of these vars. In any case, after those attempts failed, I eventually found the values set on the logging-fluentd Daemon Set. I can edit via CLI or the console to set the es host, port, client keys, certs, etc. I also set the ops equivalents. The fluentd logs confirms these values are set, however, it's attempting to use https in conjunction with the default fluentd/changeme id/pwd combo. 2019-03-08 11:49:00 -0600 [warn]: temporarily failed to flush the buffer. next_retry=2019-03-08 11:54:00 -0600 error_class="Fluent::ElasticsearchOutput::ConnectionFailure" error="Can not reach Elasticsearch cluster ({:host=>\"169.xx.xxx.xx\", :port=>9200, :scheme=>\"https\", :user=>\"fluentd\", :password=>\"obfuscated\"})!" plugin_id="elasticsearch-apps" So, ideally, I'd like to set these as inventory variables, and everything just works. If anybody has a suggestion to fix that issue, please let me know. Less than ideal, I can modify the ES deployment config or the Fluentd Dameon Set post-install and set the values required, assuming someone knows how to avoid https? Thanks for any input you might have. Update: I managed to get this working, but not via the properties documented or the provided suggestion. I ended up going through the various playbooks to identify the vars being used. I also had to setup mutual TLS, as when I specified the cert files locations to be none/undefined, the logs indicated a 'File not found'. Essentially, none or undefined gets translated to "", which it tries to open as a file. So, this was the magic combination of properties that will get you 99.9% of the way. 
openshift_logging_es_host=169.xx.xxx.xxx openshift_logging_fluentd_app_host=169.xx.xxx.xxx openshift_logging_fluentd_ops_host=169.xx.xxx.xxx openshift_logging_fluentd_ca_path='/tmp/keys/client-ca.cer' openshift_logging_fluentd_key_path='/tmp/keys/client.key' openshift_logging_fluentd_cert_path='/tmp/keys/client.cer' openshift_logging_fluentd_ops_ca_path='/tmp/keys/client-ca.cer' openshift_logging_fluentd_ops_key_path='/tmp/keys/client.key' openshift_logging_fluentd_ops_cert_path='/tmp/keys/client.cer' Notes: You need to copy the keys to /tmp/keys prior. Upon completion, you will notice that OPS_HOST will not be set on the Daemon Set. I left it in the properties above as I think it's just a bug, and perhaps will be fixed beyond 3.11 which is what I'm using. To adjust this simply oc edit ds/logging-fluentd and modify accordingly. With these changes, the log data gets sent to my external ES instance.
Openshift 3.11 logging to external ElasticSearch instance I have an external ElasticSearch instance that I'd like Fluentd and Kibana to leverage accordingly in OSE 3.11. The ES instance is insecure at the moment, as this is simply a internal pilot. Based on the OSE docs here ( [URL] ), I should be able to update a number of ES_* variables accordingly in the ElasticSearch deployment config. The first issue is, the variables referenced in the docs don't exist in the ElasticSearch deployment config. Secondly, I tried updating these values via the inventory file. For example, for the property openshift_logging_es_host , the description claims: The name of the Elasticsearch service where Fluentd should send logs. These were the values in my inventory file: openshift_logging_install_logging=true openshift_logging_es_ops_nodeselector={'node-role.kubernetes.io/infra':'true'} openshift_logging_es_nodeselector={'node-role.kubernetes.io/infra':'true'} openshift_logging_es_host='169.xx.xxx.xx' openshift_logging_es_port='9200' openshift_logging_es_ops_host='169.xx.xxx.xx' openshift_logging_es_ops_port='9200' openshift_logging_kibana_env_vars={'ELASTICSEARCH_URL':'[URL] openshift_logging_es_ca=none openshift_logging_es_client_cert=none openshift_logging_es_client_key=none openshift_logging_es_ops_ca=none openshift_logging_es_ops_client_cert=none openshift_logging_es_ops_client_key=none The only variable above that seems to stick after uninstall/install of logging is openshift_logging_kibana_env_vars. I'm not sure why the others weren't respected - perhaps I'm missing one that triggers use of these vars. In any case, after those attempts failed, I eventually found the values set on the logging-fluentd Daemon Set. I can edit via CLI or the console to set the es host, port, client keys, certs, etc. I also set the ops equivalents. The fluentd logs confirms these values are set, however, it's attempting to use https in conjunction with the default fluentd/changeme id/pwd combo. 2019-03-08 11:49:00 -0600 [warn]: temporarily failed to flush the buffer. next_retry=2019-03-08 11:54:00 -0600 error_class="Fluent::ElasticsearchOutput::ConnectionFailure" error="Can not reach Elasticsearch cluster ({:host=>\"169.xx.xxx.xx\", :port=>9200, :scheme=>\"https\", :user=>\"fluentd\", :password=>\"obfuscated\"})!" plugin_id="elasticsearch-apps" So, ideally, I'd like to set these as inventory variables, and everything just works. If anybody has a suggestion to fix that issue, please let me know. Less than ideal, I can modify the ES deployment config or the Fluentd Dameon Set post-install and set the values required, assuming someone knows how to avoid https? Thanks for any input you might have. Update: I managed to get this working, but not via the properties documented or the provided suggestion. I ended up going through the various playbooks to identify the vars being used. I also had to setup mutual TLS, as when I specified the cert files locations to be none/undefined, the logs indicated a 'File not found'. Essentially, none or undefined gets translated to "", which it tries to open as a file. So, this was the magic combination of properties that will get you 99.9% of the way. 
openshift_logging_es_host=169.xx.xxx.xxx openshift_logging_fluentd_app_host=169.xx.xxx.xxx openshift_logging_fluentd_ops_host=169.xx.xxx.xxx openshift_logging_fluentd_ca_path='/tmp/keys/client-ca.cer' openshift_logging_fluentd_key_path='/tmp/keys/client.key' openshift_logging_fluentd_cert_path='/tmp/keys/client.cer' openshift_logging_fluentd_ops_ca_path='/tmp/keys/client-ca.cer' openshift_logging_fluentd_ops_key_path='/tmp/keys/client.key' openshift_logging_fluentd_ops_cert_path='/tmp/keys/client.cer' Notes: You need to copy the keys to /tmp/keys prior. Upon completion, you will notice that OPS_HOST will not be set on the Daemon Set. I left it in the properties above as I think it's just a bug, and perhaps will be fixed beyond 3.11 which is what I'm using. To adjust this simply oc edit ds/logging-fluentd and modify accordingly. With these changes, the log data gets sent to my external ES instance.
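As a possible alternative to hand-editing the DaemonSet for the missing OPS_HOST noted above, oc set env can patch its environment directly; the address below is a placeholder, and OPS_PORT is assumed to be the companion variable name to the OPS_HOST variable mentioned in the notes:

oc set env ds/logging-fluentd OPS_HOST=169.xx.xxx.xxx OPS_PORT=9200
# verify what actually landed on the DaemonSet
oc set env ds/logging-fluentd --list | grep -E 'OPS_HOST|OPS_PORT'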
ansible, openshift, redhat, ansible-inventory, openshift-enterprise
2
2,618
1
https://stackoverflow.com/questions/55095072/openshift-3-11-logging-to-external-elasticsearch-instance
55,075,683
Why Docker images from Red Hat Container Catalog are not documented?
I am using the keycloak image (jboss/keycloak) from DockerHub for the development environment, but I'm planning to go to production with the Red Hat SSO image from the Red Hat Container Catalog (RHCC) for OpenShift. DockerHub's keycloak image is well documented (environment variables to configure the database, the initial user, volume configuration...), but RHCC's isn't. Maybe I am missing something related to RH Docker images... Do you have experience with images from RHCC? Are these images intended only for extension?
Why Docker images from Red Hat Container Catalog are not documented? I am using the keycloak image (jboss/keycloak) from DockerHub for the development environment, but I'm planning to go to production with the Red Hat SSO image from the Red Hat Container Catalog (RHCC) for OpenShift. DockerHub's keycloak image is well documented (environment variables to configure the database, the initial user, volume configuration...), but RHCC's isn't. Maybe I am missing something related to RH Docker images... Do you have experience with images from RHCC? Are these images intended only for extension?
docker, openshift, redhat
2
1,050
0
https://stackoverflow.com/questions/55075683/why-docker-images-from-red-hat-container-catalog-are-not-documented
54,805,906
My Nvidia drivers are not used for OpenGL rendering?
I am trying to execute a c++ code which is using OpenGL(4.x is needed) on Red-hat linux OS. it is throwing the below error. X Error of failed request: GLXBadFBConfig Hardware info: OS: Linux POWER LE RHEL 7 Accessing the screen through VNC Viewer. below are the lspci outputs: lspci | grep VGA 0002:02:00.0 VGA compatible controller: ASPEED Technology, Inc. ASPEED Graphics Family (rev 41) lspci | grep 3D 0004:04:00.0 3D controller: NVIDIA Corporation GV100GL [Tesla V100 SXM2] (rev a1) 0035:03:00.0 3D controller: NVIDIA Corporation GV100GL [Tesla V100 SXM2] (rev a1) I thought my machine opengl's version is not matching with required version of the program. so I checked the glxinfo and below is the output glxinfo name of display: :3 display: :3 screen: 0 direct rendering: No (If you want to find out why, try setting LIBGL_DEBUG=verbose) server glx vendor string: SGI server glx version string: 1.4 server glx extensions: GLX_ARB_create_context, GLX_ARB_create_context_profile, GLX_ARB_fbconfig_float, GLX_ARB_framebuffer_sRGB, GLX_ARB_multisample, GLX_EXT_create_context_es2_profile, GLX_EXT_create_context_es_profile, GLX_EXT_fbconfig_packed_float, GLX_EXT_framebuffer_sRGB, GLX_EXT_import_context, GLX_EXT_libglvnd, GLX_EXT_texture_from_pixmap, GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_MESA_copy_sub_buffer, GLX_OML_swap_method, GLX_SGIS_multisample, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_SGIX_visual_select_group, GLX_SGI_make_current_read client glx vendor string: NVIDIA Corporation client glx version string: 1.4 client glx extensions: GLX_ARB_context_flush_control, GLX_ARB_create_context, GLX_ARB_create_context_no_error, GLX_ARB_create_context_profile, GLX_ARB_create_context_robustness, GLX_ARB_fbconfig_float, GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_buffer_age, GLX_EXT_create_context_es2_profile, GLX_EXT_create_context_es_profile, GLX_EXT_fbconfig_packed_float, GLX_EXT_framebuffer_sRGB, GLX_EXT_import_context, GLX_EXT_stereo_tree, GLX_EXT_swap_control, GLX_EXT_swap_control_tear, GLX_EXT_texture_from_pixmap, GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_NV_copy_buffer, GLX_NV_copy_image, GLX_NV_delay_before_swap, GLX_NV_float_buffer, GLX_NV_multisample_coverage, GLX_NV_present_video, GLX_NV_robustness_video_memory_purge, GLX_NV_swap_group, GLX_NV_video_capture, GLX_NV_video_out, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_SGI_swap_control, GLX_SGI_video_sync GLX version: 1.4 GLX extensions: GLX_ARB_create_context, GLX_ARB_create_context_profile, GLX_ARB_fbconfig_float, GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_create_context_es2_profile, GLX_EXT_create_context_es_profile, GLX_EXT_fbconfig_packed_float, GLX_EXT_framebuffer_sRGB, GLX_EXT_import_context, GLX_EXT_texture_from_pixmap, GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer OpenGL vendor string: VMware, Inc. 
OpenGL renderer string: llvmpipe (LLVM 5.0, 128 bits) OpenGL version string: 1.4 (2.1 Mesa 17.2.3) OpenGL extensions: GL_ARB_depth_texture, GL_ARB_draw_buffers, GL_ARB_fragment_program, GL_ARB_fragment_program_shadow, GL_ARB_multisample, GL_ARB_multitexture, GL_ARB_occlusion_query, GL_ARB_point_parameters, GL_ARB_point_sprite, GL_ARB_shadow, GL_ARB_texture_border_clamp, GL_ARB_texture_compression, GL_ARB_texture_cube_map, GL_ARB_texture_env_add, GL_ARB_texture_env_combine, GL_ARB_texture_env_crossbar, GL_ARB_texture_env_dot3, GL_ARB_texture_mirrored_repeat, GL_ARB_texture_non_power_of_two, GL_ARB_transpose_matrix, GL_ARB_vertex_program, GL_ARB_window_pos, GL_ATI_draw_buffers, GL_ATI_texture_mirror_once, GL_EXT_abgr, GL_EXT_bgra, GL_EXT_blend_color, GL_EXT_blend_equation_separate, GL_EXT_blend_func_separate, GL_EXT_blend_minmax, GL_EXT_blend_subtract, GL_EXT_draw_range_elements, GL_EXT_fog_coord, GL_EXT_framebuffer_object, GL_EXT_multi_draw_arrays, GL_EXT_packed_pixels, GL_EXT_point_parameters, GL_EXT_rescale_normal, GL_EXT_secondary_color, GL_EXT_separate_specular_color, GL_EXT_shadow_funcs, GL_EXT_stencil_two_side, GL_EXT_stencil_wrap, GL_EXT_texture3D, GL_EXT_texture_compression_dxt1, GL_EXT_texture_compression_s3tc, GL_EXT_texture_edge_clamp, GL_EXT_texture_env_add, GL_EXT_texture_env_combine, GL_EXT_texture_env_dot3, GL_EXT_texture_lod_bias, GL_EXT_texture_mirror_clamp, GL_EXT_texture_object, GL_EXT_vertex_array, GL_IBM_texture_mirrored_repeat, GL_NV_blend_square, GL_NV_depth_clamp, GL_NV_fog_distance, GL_NV_light_max_exponent, GL_NV_texgen_reflection, GL_NV_texture_env_combine4, GL_NV_texture_rectangle, GL_SGIS_generate_mipmap, GL_SGIS_texture_lod Here it is showing the version 1.4. and the OpenGL vendor string is not Nvidia. and direct rendering is NO. seems it is using cpu for rendering. How can I make my nvidia drivers to be used for gl rendering? And if nvidia drivers are used then my OpenGL version will increase?
My Nvidia drivers are not used for OpenGL rendering? I am trying to execute a c++ code which is using OpenGL(4.x is needed) on Red-hat linux OS. it is throwing the below error. X Error of failed request: GLXBadFBConfig Hardware info: OS: Linux POWER LE RHEL 7 Accessing the screen through VNC Viewer. below are the lspci outputs: lspci | grep VGA 0002:02:00.0 VGA compatible controller: ASPEED Technology, Inc. ASPEED Graphics Family (rev 41) lspci | grep 3D 0004:04:00.0 3D controller: NVIDIA Corporation GV100GL [Tesla V100 SXM2] (rev a1) 0035:03:00.0 3D controller: NVIDIA Corporation GV100GL [Tesla V100 SXM2] (rev a1) I thought my machine opengl's version is not matching with required version of the program. so I checked the glxinfo and below is the output glxinfo name of display: :3 display: :3 screen: 0 direct rendering: No (If you want to find out why, try setting LIBGL_DEBUG=verbose) server glx vendor string: SGI server glx version string: 1.4 server glx extensions: GLX_ARB_create_context, GLX_ARB_create_context_profile, GLX_ARB_fbconfig_float, GLX_ARB_framebuffer_sRGB, GLX_ARB_multisample, GLX_EXT_create_context_es2_profile, GLX_EXT_create_context_es_profile, GLX_EXT_fbconfig_packed_float, GLX_EXT_framebuffer_sRGB, GLX_EXT_import_context, GLX_EXT_libglvnd, GLX_EXT_texture_from_pixmap, GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_MESA_copy_sub_buffer, GLX_OML_swap_method, GLX_SGIS_multisample, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_SGIX_visual_select_group, GLX_SGI_make_current_read client glx vendor string: NVIDIA Corporation client glx version string: 1.4 client glx extensions: GLX_ARB_context_flush_control, GLX_ARB_create_context, GLX_ARB_create_context_no_error, GLX_ARB_create_context_profile, GLX_ARB_create_context_robustness, GLX_ARB_fbconfig_float, GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_buffer_age, GLX_EXT_create_context_es2_profile, GLX_EXT_create_context_es_profile, GLX_EXT_fbconfig_packed_float, GLX_EXT_framebuffer_sRGB, GLX_EXT_import_context, GLX_EXT_stereo_tree, GLX_EXT_swap_control, GLX_EXT_swap_control_tear, GLX_EXT_texture_from_pixmap, GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_NV_copy_buffer, GLX_NV_copy_image, GLX_NV_delay_before_swap, GLX_NV_float_buffer, GLX_NV_multisample_coverage, GLX_NV_present_video, GLX_NV_robustness_video_memory_purge, GLX_NV_swap_group, GLX_NV_video_capture, GLX_NV_video_out, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_SGI_swap_control, GLX_SGI_video_sync GLX version: 1.4 GLX extensions: GLX_ARB_create_context, GLX_ARB_create_context_profile, GLX_ARB_fbconfig_float, GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_create_context_es2_profile, GLX_EXT_create_context_es_profile, GLX_EXT_fbconfig_packed_float, GLX_EXT_framebuffer_sRGB, GLX_EXT_import_context, GLX_EXT_texture_from_pixmap, GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer OpenGL vendor string: VMware, Inc. 
OpenGL renderer string: llvmpipe (LLVM 5.0, 128 bits) OpenGL version string: 1.4 (2.1 Mesa 17.2.3) OpenGL extensions: GL_ARB_depth_texture, GL_ARB_draw_buffers, GL_ARB_fragment_program, GL_ARB_fragment_program_shadow, GL_ARB_multisample, GL_ARB_multitexture, GL_ARB_occlusion_query, GL_ARB_point_parameters, GL_ARB_point_sprite, GL_ARB_shadow, GL_ARB_texture_border_clamp, GL_ARB_texture_compression, GL_ARB_texture_cube_map, GL_ARB_texture_env_add, GL_ARB_texture_env_combine, GL_ARB_texture_env_crossbar, GL_ARB_texture_env_dot3, GL_ARB_texture_mirrored_repeat, GL_ARB_texture_non_power_of_two, GL_ARB_transpose_matrix, GL_ARB_vertex_program, GL_ARB_window_pos, GL_ATI_draw_buffers, GL_ATI_texture_mirror_once, GL_EXT_abgr, GL_EXT_bgra, GL_EXT_blend_color, GL_EXT_blend_equation_separate, GL_EXT_blend_func_separate, GL_EXT_blend_minmax, GL_EXT_blend_subtract, GL_EXT_draw_range_elements, GL_EXT_fog_coord, GL_EXT_framebuffer_object, GL_EXT_multi_draw_arrays, GL_EXT_packed_pixels, GL_EXT_point_parameters, GL_EXT_rescale_normal, GL_EXT_secondary_color, GL_EXT_separate_specular_color, GL_EXT_shadow_funcs, GL_EXT_stencil_two_side, GL_EXT_stencil_wrap, GL_EXT_texture3D, GL_EXT_texture_compression_dxt1, GL_EXT_texture_compression_s3tc, GL_EXT_texture_edge_clamp, GL_EXT_texture_env_add, GL_EXT_texture_env_combine, GL_EXT_texture_env_dot3, GL_EXT_texture_lod_bias, GL_EXT_texture_mirror_clamp, GL_EXT_texture_object, GL_EXT_vertex_array, GL_IBM_texture_mirrored_repeat, GL_NV_blend_square, GL_NV_depth_clamp, GL_NV_fog_distance, GL_NV_light_max_exponent, GL_NV_texgen_reflection, GL_NV_texture_env_combine4, GL_NV_texture_rectangle, GL_SGIS_generate_mipmap, GL_SGIS_texture_lod Here it is showing the version 1.4. and the OpenGL vendor string is not Nvidia. and direct rendering is NO. seems it is using cpu for rendering. How can I make my nvidia drivers to be used for gl rendering? And if nvidia drivers are used then my OpenGL version will increase?
linux, opengl, redhat, nvidia, glx
2
932
0
https://stackoverflow.com/questions/54805906/my-nvidia-drivers-are-not-used-for-opengl-rendering
54,803,140
RedHat + MySQL + Tomcat: ERROR 1040: Too many connections (but everything is fine on Ubuntu)
We have an application (Java, Spring Boot, Hikari CP). Environment: Ubuntu + MySQL 5.7.25. Everything works fine. Now we are trying to install it on RedHat (MySQL 5.7.25). The application is running. But when we try to log in and the application tries to connect to the DB, it gets ERROR 1040: Too many connections. After that, we can't connect to MySQL even using the command line (it responds with ERROR 1040). I'm not sure that increasing the number of connections is a good solution, because on Ubuntu we are using the default value (151) and everything works fine. Any thoughts?
RedHat + MySQL + Tomcat: ERROR 1040: Too many connections (but everything is fine on Ubuntu) We have an application (Java, Spring Boot, Hikari CP). Environment: Ubuntu + MySQL 5.7.25. Everything works fine. Now we are trying to install it on RedHat (MySQL 5.7.25). The application is running. But when we try to log in and the application tries to connect to the DB, it gets ERROR 1040: Too many connections. After that, we can't connect to MySQL even using the command line (it responds with ERROR 1040). I'm not sure that increasing the number of connections is a good solution, because on Ubuntu we are using the default value (151) and everything works fine. Any thoughts?
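One way to rule out the pool simply exceeding MySQL's default max_connections (151) is to cap HikariCP explicitly; the fragment below is a hypothetical application.properties sketch assuming Spring Boot's spring.datasource.hikari.* binding, with illustrative numbers. On the MySQL side, SHOW PROCESSLIST and SHOW STATUS LIKE 'Threads_connected' show which clients actually hold the connections:

# keep the pool well below MySQL's max_connections
spring.datasource.hikari.maximum-pool-size=10
spring.datasource.hikari.minimum-idle=2
# fail fast (ms) instead of piling up further connection attempts
spring.datasource.hikari.connection-timeout=30000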
java, mysql, tomcat, redhat, hikaricp
2
227
1
https://stackoverflow.com/questions/54803140/redhat-mysql-tomcat-error-1040-too-many-connections-but-everything-is-fin
54,656,010
Python 2.7 [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:618)>
I am porting a django project over from RHEL5 to RHEL7 and python 2.5 to 2.7.5 and am having certificate problems. The bit of code I am troubleshooting is a suds Client invocation of a web service WSDL client = Client(_LDAP_URLS[env]) where LDAP_URLS is already defined in the code. I imported it using from suds.client import Client I think this may be more of a Linux and Python interaction problem between the two versions rather than an issue with the code, but I could be wrong. Here is the full code. (this is django by the way, so this is a view.py file) from django.conf import settings from django.core.urlresolvers import reverse from django.http import HttpResponseRedirect, HttpResponse from django.shortcuts import render_to_response from suds.client import Client from suds.wsse import Security import suds from gaic.security.sso import BinarySecurityToken from ud_data_extract import UDDataExtractForm _LDAP_URLS = {WSDL URLS HARD CODED HERE} def _get_person(env='production', hid=None, vid=None, token=None, group=None): if env not in _LDAP_URLS: env = 'production' if token: client = Client(_SSO_URLS[env]) try: person = client.service.getPersonFromToken(token) hid = person['hid'] except Exception: return None try: client = Client(_LDAP_URLS[env]) except Exception as e: log.error("line 165: %s", e) if group: grp = client.factory.create('groupDto') grp.name = group users = client.service.getGroupMembers(grp) groups = [] try: group_ = client.service.getGroup(grp) gnamere = re.compile(r'cn=([^,]+),') for gname in group_.uniqueMembers: m = gnamere.match(gname) if m: group_name = m.groups(1)[0] groups.append(group_name) groups.sort() except Exception, e: pass # groups = [str(e)] return [users, groups] person = client.factory.create('personDto') if hid: person.hid = hid if vid: person.vid = vid user = None The issue in my logging points to around line 165, I took out some code with our company wsdl urls so it may be in the 150s. It's in a try statement. try: client = Client(_LDAP_URLS[env]) except Exception as e: log.error("line 165: %s", e) I have looked around and this page said that it may be a problem with newer version of python and pointed to this redhat documentation to fix it, but I really don't know what to do with it. Thanks in advance for the help.
Python 2.7 [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:618)&gt; I am porting a django project over from RHEL5 to RHEL7 and python 2.5 to 2.7.5 and am having certificate problems. The bit of code I am troubleshooting is a suds Client invocation of a web service WSDL client = Client(_LDAP_URLS[env]) where LDAP_URLS is already defined in the code. I imported it using from suds.client import Client I think this may be more of a Linux and Python interaction problem between the two versions rather than an issue with the code, but I could be wrong. Here is the full code. (this is django by the way, so this is a view.py file) from django.conf import settings from django.core.urlresolvers import reverse from django.http import HttpResponseRedirect, HttpResponse from django.shortcuts import render_to_response from suds.client import Client from suds.wsse import Security import suds from gaic.security.sso import BinarySecurityToken from ud_data_extract import UDDataExtractForm _LDAP_URLS = {WSDL URLS HARD CODED HERE} def _get_person(env='production', hid=None, vid=None, token=None, group=None): if env not in _LDAP_URLS: env = 'production' if token: client = Client(_SSO_URLS[env]) try: person = client.service.getPersonFromToken(token) hid = person['hid'] except Exception: return None try: client = Client(_LDAP_URLS[env]) except Exception as e: log.error("line 165: %s", e) if group: grp = client.factory.create('groupDto') grp.name = group users = client.service.getGroupMembers(grp) groups = [] try: group_ = client.service.getGroup(grp) gnamere = re.compile(r'cn=([^,]+),') for gname in group_.uniqueMembers: m = gnamere.match(gname) if m: group_name = m.groups(1)[0] groups.append(group_name) groups.sort() except Exception, e: pass # groups = [str(e)] return [users, groups] person = client.factory.create('personDto') if hid: person.hid = hid if vid: person.vid = vid user = None The issue in my logging points to around line 165, I took out some code with our company wsdl urls so it may be in the 150s. It's in a try statement. try: client = Client(_LDAP_URLS[env]) except Exception as e: log.error("line 165: %s", e) I have looked around and this page said that it may be a problem with newer version of python and pointed to this redhat documentation to fix it, but I really don't know what to do with it. Thanks in advance for the help.
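For context, a minimal sketch (not the project's code) of how the certificate verification introduced by the PEP 476 backport in RHEL 7's patched Python 2.7 is usually satisfied; the CA-bundle path and WSDL URL below are placeholders. suds' default transport goes through urllib2/httplib, so the module-level HTTPS context hook should apply to Client() as well:

import ssl
from suds.client import Client

# Option 1 (preferred): trust the internal CA explicitly.
ctx = ssl.create_default_context(cafile='/etc/pki/tls/certs/ca-bundle.crt')
ssl._create_default_https_context = lambda: ctx

# Option 2 (testing only): disable verification globally.
# ssl._create_default_https_context = ssl._create_unverified_context

client = Client('https://ldap.example.com/service?wsdl')  # placeholder WSDL URL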
python, django, ssl, ssl-certificate, redhat
2
7,982
0
https://stackoverflow.com/questions/54656010/python-2-7-ssl-certificate-verify-failed-certificate-verify-failed-ssl-c61
51,029,812
Red Hat AMQ 7.1 - ActiveMQSecurityException throws during creation of a MessageConsumer
I'm configuring an AMQ broker for my Java application. Users and roles are defined in their respective configuration properties files. These users have specific permissions depending on the address they are trying to use. All of this is configured in the broker.xml. The broker uses 3 addresses: genericTopic, news.europe.europeTopic, news.us.usTopic. For the genericTopic address, all users have all the permissions. Nevertheless, I'm getting this exception: An exception occured while executing the Java class. AMQ119213: User: bill does not have permission='CREATE_NON_DURABLE_QUEUE' for queue 576bc5ef-3373-409b-b45d-0b382107f915 on address genericTopic The broker.xml file contains: <?xml version="1.0" encoding="UTF-8" standalone="no"?> <configuration xmlns="urn:activemq" xmlns:xsi="[URL] xsi:schemaLocation="urn:activemq /schema/artemis-server.xsd"> <core xmlns="urn:activemq:core"> <bindings-directory>./data/messaging/bindings</bindings-directory> <journal-directory>./data/messaging/journal</journal-directory> <large-messages-directory>./data/messaging/largemessages</large-messages-directory> <paging-directory>./data/messaging/paging</paging-directory> <!-- Acceptors --> <acceptors> <acceptor name="netty-acceptor">tcp://localhost:61616</acceptor> </acceptors> <!-- Other config --> <security-settings> <!-- any user can have full control of generic topics --> <security-setting match="#"> <permission roles="user" type="createDurableQueue"/> <permission roles="user" type="deleteDurableQueue"/> <permission roles="user" type="createNonDurableQueue"/> <permission roles="user" type="deleteNonDurableQueue"/> <permission roles="user" type="send"/> <permission roles="user" type="consume"/> </security-setting> <security-setting match="news.europe.#"> <permission roles="user" type="createDurableQueue"/> <permission roles="user" type="deleteDurableQueue"/> <permission roles="user" type="createNonDurableQueue"/> <permission roles="user" type="deleteNonDurableQueue"/> <permission roles="europe-user" type="send"/> <permission roles="news-user" type="consume"/> </security-setting> <security-setting match="news.us.#"> <permission roles="user" type="createDurableQueue"/> <permission roles="user" type="deleteDurableQueue"/> <permission roles="user" type="createNonDurableQueue"/> <permission roles="user" type="deleteNonDurableQueue"/> <permission roles="us-user" type="send"/> <permission roles="news-user" type="consume"/> </security-setting> <security-setting match="jms.tempqueue.#"> <permission roles="user" type="createDurableQueue"/> <permission roles="user" type="deleteDurableQueue"/> <permission roles="user" type="createNonDurableQueue"/> <permission roles="user" type="deleteNonDurableQueue"/> <permission roles="user" type="send"/> <permission roles="user" type="consume"/> </security-setting> </security-settings> <addresses> <address name="genericTopic"> <multicast/> </address> <address name="news.europe.europeTopic"> <multicast/> </address> <address name="news.us.usTopic"> <multicast/> </address> </addresses> </core> </configuration> artemis-users.properties bill = ENC(1024:020FEC8DB7EBBCB987FD25F1188EA71FA13FD4E0BF504963891EDC97E1ED1285:3E53D34A96F9995612C7C585CA04BA63CF5F531C92510E882960F848BFC3982AF47FCD40AB888F9AC10648CCEBA1DD52C0F0A312B2C90225D9A46DDC50198B3C) andrew = ENC(1024:3E09F4D16A6970F3C40E24784AFE64AFD66349174AB20B2609109646A8F0561F:F22063143058EBCF47A0ACA1C29DBCB82C4AF15E510F5C801B47928AEA1836D1480BFD0DFD0320BA567D1A32C98859C02350AE271DC530F29D7E16E910E251AD) frank = 
ENC(1024:49292EEC8AA19AB5390A0F0D67AA5A3978DE1AF0F561B641A1CE90B3C9637AAD:22A8F9A4B144B9CC173F3B1D5A2B09FE57642234534C2EB3A805DB7D5F7FEA398B58EB9380B8EA69B916B5CFA23BC7573E09A87A20C0DF1A35A1134270260BE4) sam = ENC(1024:39832F10D9734D7E6EECE16BCEAA5E2917D384B4CE482A2A4B3D3E7A550B0A5C:CCA47914C6DD64AE6B69FE977BB445CBCDEA50D458E7F42AA341FA84A11C302E2EAB072E57B41A636589C89246911A6A49424CBA4B629F4846826183E9AD9DA1) artemis-roles.properties user=bill,andrew,frank,sam europe-user=andrew news-user=frank,sam us-user=frank In Java, the user bill can authenticate with supplied password, I can create producers for genericTopic with user bill , but not a MessageConsumer . This is the line of Java code that causes the exception: MessageConsumer consumer = session.createConsumer(topic); Here are some additonal logs in the AMQ broker: 2018-06-25 16:47:26,264 WARN [org.apache.activemq.artemis.core.server] AMQ222107: Cleared up resources for session 590f0d6e-78c1-11e8-a8e1-e82aea578992 2018-06-25 16:48:44,412 WARN [org.apache.activemq.artemis.core.server] AMQ222061: Client connection failed, clearing up resources for session 87928e3c-78c1-11e8-bcaa-e82aea578992 UPDATE: I solved some part of the problem. All my passwords were incorrect. Now there are no excepcionts but the message consumer blocks and waits forever for a message that exists (checked that on the web console) but for some reason It cannot receive. Also, I'm still getting the same warnings about client connection failed. More specifically, the application stops here: TextMessage receivedMsg = (TextMessage) consumer.receive();
Red Hat AMQ 7.1 - ActiveMQSecurityException throws during creation of a MessageConsumer I'm configuring an AMQ broker for my Java application. Users and roles are defined in their respective configuration properties files. These users have specific permissions depending on the address they are trying to use. All of this is configured in the broker.xml. The broker uses 3 addresses: genericTopic, news.europe.europeTopic, news.us.usTopic. For the genericTopic address, all users have all the permissions. Nevertheless, I'm getting this exception: An exception occured while executing the Java class. AMQ119213: User: bill does not have permission='CREATE_NON_DURABLE_QUEUE' for queue 576bc5ef-3373-409b-b45d-0b382107f915 on address genericTopic The broker.xml file contains: <?xml version="1.0" encoding="UTF-8" standalone="no"?> <configuration xmlns="urn:activemq" xmlns:xsi="[URL] xsi:schemaLocation="urn:activemq /schema/artemis-server.xsd"> <core xmlns="urn:activemq:core"> <bindings-directory>./data/messaging/bindings</bindings-directory> <journal-directory>./data/messaging/journal</journal-directory> <large-messages-directory>./data/messaging/largemessages</large-messages-directory> <paging-directory>./data/messaging/paging</paging-directory> <!-- Acceptors --> <acceptors> <acceptor name="netty-acceptor">tcp://localhost:61616</acceptor> </acceptors> <!-- Other config --> <security-settings> <!-- any user can have full control of generic topics --> <security-setting match="#"> <permission roles="user" type="createDurableQueue"/> <permission roles="user" type="deleteDurableQueue"/> <permission roles="user" type="createNonDurableQueue"/> <permission roles="user" type="deleteNonDurableQueue"/> <permission roles="user" type="send"/> <permission roles="user" type="consume"/> </security-setting> <security-setting match="news.europe.#"> <permission roles="user" type="createDurableQueue"/> <permission roles="user" type="deleteDurableQueue"/> <permission roles="user" type="createNonDurableQueue"/> <permission roles="user" type="deleteNonDurableQueue"/> <permission roles="europe-user" type="send"/> <permission roles="news-user" type="consume"/> </security-setting> <security-setting match="news.us.#"> <permission roles="user" type="createDurableQueue"/> <permission roles="user" type="deleteDurableQueue"/> <permission roles="user" type="createNonDurableQueue"/> <permission roles="user" type="deleteNonDurableQueue"/> <permission roles="us-user" type="send"/> <permission roles="news-user" type="consume"/> </security-setting> <security-setting match="jms.tempqueue.#"> <permission roles="user" type="createDurableQueue"/> <permission roles="user" type="deleteDurableQueue"/> <permission roles="user" type="createNonDurableQueue"/> <permission roles="user" type="deleteNonDurableQueue"/> <permission roles="user" type="send"/> <permission roles="user" type="consume"/> </security-setting> </security-settings> <addresses> <address name="genericTopic"> <multicast/> </address> <address name="news.europe.europeTopic"> <multicast/> </address> <address name="news.us.usTopic"> <multicast/> </address> </addresses> </core> </configuration> artemis-users.properties bill = ENC(1024:020FEC8DB7EBBCB987FD25F1188EA71FA13FD4E0BF504963891EDC97E1ED1285:3E53D34A96F9995612C7C585CA04BA63CF5F531C92510E882960F848BFC3982AF47FCD40AB888F9AC10648CCEBA1DD52C0F0A312B2C90225D9A46DDC50198B3C) andrew = 
ENC(1024:3E09F4D16A6970F3C40E24784AFE64AFD66349174AB20B2609109646A8F0561F:F22063143058EBCF47A0ACA1C29DBCB82C4AF15E510F5C801B47928AEA1836D1480BFD0DFD0320BA567D1A32C98859C02350AE271DC530F29D7E16E910E251AD) frank = ENC(1024:49292EEC8AA19AB5390A0F0D67AA5A3978DE1AF0F561B641A1CE90B3C9637AAD:22A8F9A4B144B9CC173F3B1D5A2B09FE57642234534C2EB3A805DB7D5F7FEA398B58EB9380B8EA69B916B5CFA23BC7573E09A87A20C0DF1A35A1134270260BE4) sam = ENC(1024:39832F10D9734D7E6EECE16BCEAA5E2917D384B4CE482A2A4B3D3E7A550B0A5C:CCA47914C6DD64AE6B69FE977BB445CBCDEA50D458E7F42AA341FA84A11C302E2EAB072E57B41A636589C89246911A6A49424CBA4B629F4846826183E9AD9DA1) artemis-roles.properties user=bill,andrew,frank,sam europe-user=andrew news-user=frank,sam us-user=frank In Java, the user bill can authenticate with supplied password, I can create producers for genericTopic with user bill , but not a MessageConsumer . This is the line of Java code that causes the exception: MessageConsumer consumer = session.createConsumer(topic); Here are some additonal logs in the AMQ broker: 2018-06-25 16:47:26,264 WARN [org.apache.activemq.artemis.core.server] AMQ222107: Cleared up resources for session 590f0d6e-78c1-11e8-a8e1-e82aea578992 2018-06-25 16:48:44,412 WARN [org.apache.activemq.artemis.core.server] AMQ222061: Client connection failed, clearing up resources for session 87928e3c-78c1-11e8-bcaa-e82aea578992 UPDATE: I solved some part of the problem. All my passwords were incorrect. Now there are no excepcionts but the message consumer blocks and waits forever for a message that exists (checked that on the web console) but for some reason It cannot receive. Also, I'm still getting the same warnings about client connection failed. More specifically, the application stops here: TextMessage receivedMsg = (TextMessage) consumer.receive();
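On the update about consumer.receive() blocking forever: a non-durable topic consumer only sees messages published after it exists, and JMS delivery does not begin until Connection.start() has been called, which is the most common reason receive() never returns. Below is a minimal sketch of the usual flow (placeholder URL, password and timeout — not the original application):

import javax.jms.*;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class TopicReceiveSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = cf.createConnection("bill", "billspassword"); // placeholder password
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Topic topic = session.createTopic("genericTopic");
            MessageConsumer consumer = session.createConsumer(topic);
            connection.start();                                      // without this, receive() blocks forever
            TextMessage msg = (TextMessage) consumer.receive(5000);  // bounded wait instead of receive()
            System.out.println(msg != null ? msg.getText() : "no message within 5s");
        } finally {
            connection.close();
        }
    }
}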
java, redhat, activemq-artemis, amq
2
224
0
https://stackoverflow.com/questions/51029812/red-hat-amq-7-1-activemqsecurityexception-throws-during-creation-of-a-messagec
50,624,012
GLIBC 2.14 installation error: forced unwind support is required - RHEL 7.5
I have upgraded my RHEL OS from 6.7 to 7.5. After upgrading, I found some issues when trying to run yum . Below are the details. # yum repolist There was a problem importing one of the Python modules required to run yum. The error leading to this problem was: /lib64/libc.so.6: version `GLIBC_2.14' not found (required by /lib64/libgcc_s.so.1) Please install a package which provides this module, or verify that the module is installed correctly. It's possible that the above module doesn't match the current version of Python, which is: 2.6.6 (r266:84292, Aug 9 2016, 06:11:56) [GCC 4.4.7 20120313 (Red Hat 4.4.7-17)] If you cannot solve this problem yourself, please go to the yum faq at: [URL] After getting this error, I just installed python2.7 and GLIBC 2.14. But when I am trying to install GLIBC 2.14 (my current GLIBC version is 2.12), it throws an error. Below are the steps that I am using to install GLIBC 2.14: tar xvfz glibc-2.14.tar.gz cd glibc-2.14 mkdir build cd build ../configure --prefix=/opt/glibc-2.14 make sudo make install export LD_LIBRARY_PATH=/opt/glibc-2.14/lib:$LD_LIBRARY_PATH In step 5, I am getting an error. Below are the details: # ../configure --prefix=/opt/glibc-2.14 checking for forced unwind support... no configure: error: forced unwind support is required I am unaware of this error "unwind support is required". Please let me know how to set up/install forced unwind support on Red Hat 7.5.
GLIBC 2.14 installation error: forced unwind support is required - RHEL 7.5 I have upgraded my RHEL OS from 6.7 to 7.5. After upgrading, I found some issues when trying to run yum . Below are the details. # yum repolist There was a problem importing one of the Python modules required to run yum. The error leading to this problem was: /lib64/libc.so.6: version `GLIBC_2.14' not found (required by /lib64/libgcc_s.so.1) Please install a package which provides this module, or verify that the module is installed correctly. It's possible that the above module doesn't match the current version of Python, which is: 2.6.6 (r266:84292, Aug 9 2016, 06:11:56) [GCC 4.4.7 20120313 (Red Hat 4.4.7-17)] If you cannot solve this problem yourself, please go to the yum faq at: [URL] After getting this error, I just installed python2.7 and GLIBC 2.14. But when I am trying to install GLIBC 2.14 (my current GLIBC version is 2.12), it throws an error. Below are the steps that I am using to install GLIBC 2.14: tar xvfz glibc-2.14.tar.gz cd glibc-2.14 mkdir build cd build ../configure --prefix=/opt/glibc-2.14 make sudo make install export LD_LIBRARY_PATH=/opt/glibc-2.14/lib:$LD_LIBRARY_PATH In step 5, I am getting an error. Below are the details: # ../configure --prefix=/opt/glibc-2.14 checking for forced unwind support... no configure: error: forced unwind support is required I am unaware of this error "unwind support is required". Please let me know how to set up/install forced unwind support on Red Hat 7.5.
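Before building glibc from source it may be worth checking what the system actually provides — RHEL 7.5 ships glibc 2.17, which already contains the GLIBC_2.14 symbol version the yum error asks for, so the report of glibc 2.12 suggests leftover RHEL 6 libraries on the path rather than a too-old glibc. A few standard diagnostic commands, shown only as an illustration:

rpm -q glibc                 # glibc version rpm believes is installed
ldd --version                # glibc the dynamic loader actually resolves
ldd /lib64/libgcc_s.so.1     # which libc.so.6 libgcc_s.so.1 is really linked against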
python, linux, amazon-ec2, redhat
2
997
1
https://stackoverflow.com/questions/50624012/glibc-2-14-installation-error-forced-unwind-support-is-required-rhel-7-5
50,548,504
Unable to find libisl.so.15
I've been trying to install bazel on a Linux system - Red Hat Enterprise Linux 7. While installing, I run into this error (please see the link below). As seen, it is unable to load libisl.so.15. But that shared library is indeed present on the system, and it is a symbolic link. How can I make the bazel build step recognize that library file? I searched online forums for solutions but could not find a suitable one. Any help/suggestions would be greatly appreciated. Note: I do not have sudo permissions.
Unable to find libisl.so.15 I've been trying to install bazel on a Linux system - Red Hat Enterprise Linux 7. While installing, I run into this error (please see the link below). As seen, it is unable to load libisl.so.15. But that shared library is indeed present on the system, and it is a symbolic link. How can I make the bazel build step recognize that library file? I searched online forums for solutions but could not find a suitable one. Any help/suggestions would be greatly appreciated. Note: I do not have sudo permissions.
linux, redhat, bazel
2
1,656
0
https://stackoverflow.com/questions/50548504/unable-to-find-libisl-so-15
50,514,737
PHP ARGON2I password_hash function does not work
I have created a PHP file which includes the following code: ... ... $password = password_hash($_GET['password'], PASSWORD_ARGON2I, ['memory_cost' => 2048, 'time_cost' => 4, 'threads' => 3]); ... ... I have tested it on a XAMPP server on my personal machine, and it works fine. When I transferred the file to my AWS EC2 server, the password_hash function does not seem to work. I'm using PHP 7.2.5 on the AWS EC2 instance and in XAMPP as well. Additional information: my machine is Windows 10 and the EC2 instance is Red Hat. I have been trying for a whole day to figure this out but no luck. What could be the problem? And how can I solve this?
PHP ARGON2I password_hash function does not work I have created a PHP file which includes the following code: ... ... $password = password_hash($_GET['password'], PASSWORD_ARGON2I, ['memory_cost' => 2048, 'time_cost' => 4, 'threads' => 3]); ... ... I have tested it on a XAMPP server on my personal machine, and it works fine. When I transferred the file to my AWS EC2 server, the password_hash function does not seem to work. I'm using PHP 7.2.5 on the AWS EC2 instance and in XAMPP as well. Additional information: my machine is Windows 10 and the EC2 instance is Red Hat. I have been trying for a whole day to figure this out but no luck. What could be the problem? And how can I solve this?
php, amazon-web-services, amazon-ec2, redhat
2
728
0
https://stackoverflow.com/questions/50514737/php-argon2i-password-hash-function-does-not-work
50,144,871
No module named yum
I tried the solutions at yum---no module named yum and "No module named yum" with Python 2.7 but they didn't help. It sounds like the yum module is not a stock Python module and I need to build yum against my Python 2.7 install. Can anyone provide guidance on how to do this? Machine details: [usernames@machine]$ cat /etc/*elease LSB_VERSION=base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch Oracle Linux Server release 6.6 Red Hat Enterprise Linux Server release 6.6 (Santiago) Oracle Linux Server release 6.6 Error:- There was a problem importing one of the Python modules required to run yum. The error leading to this problem was: No module named yum Please install a package which provides this module, or verify that the module is installed correctly. It's possible that the above module doesn't match the current version of Python, which is: 2.7.12 (default, Aug 11 2016, 12:02:22) [GCC 4.4.7 20120313 (Red Hat 4.4.7-11)] If you cannot solve this problem yourself, please go to the yum faq at: [URL]
No module named yum I tried the solutions at yum---no module named yum and "No module named yum" with Python 2.7 but they didn't help. It sounds like the yum module is not a stock Python module and I need to build yum against my Python 2.7 install. Can anyone provide guidance on how to do this? Machine details: [usernames@machine]$ cat /etc/*elease LSB_VERSION=base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch Oracle Linux Server release 6.6 Red Hat Enterprise Linux Server release 6.6 (Santiago) Oracle Linux Server release 6.6 Error:- There was a problem importing one of the Python modules required to run yum. The error leading to this problem was: No module named yum Please install a package which provides this module, or verify that the module is installed correctly. It's possible that the above module doesn't match the current version of Python, which is: 2.7.12 (default, Aug 11 2016, 12:02:22) [GCC 4.4.7 20120313 (Red Hat 4.4.7-11)] If you cannot solve this problem yourself, please go to the yum faq at: [URL]
python, linux, redhat, yum, rhel
2
8,811
3
https://stackoverflow.com/questions/50144871/no-module-named-yum
49,921,609
Install rgdal dependencies on local redhat
I have to install rgdal package on R (this is an other question I posted before about rgdal and this is a related question which it doesn't work for redhat), So I must install some dependencies before install rgdal . if you check the CRAN depo here you will notice that GDAL and PROJ.4 are ones of required packages to build rgdal from source. knowing that I'm in linux Os (Redhat 6) and My server is local (not connected to internet only some redhat repositories which don't contain all redhat packages). I downloaded those packages and I used yum install to install them: For example this is what I got when I would install gdal : Resolving Dependencies --> Running transaction check ---> Package gdal.x86_64 0:1.8.1-1.el6 will be installed --> Processing Dependency: libcfitsio.so.0()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libdap.so.11()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libdapclient.so.3()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libdapserver.so.7()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libgeos_c.so.1()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libgeotiff.so.2()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libhdf5.so.6()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libnetcdf.so.6()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libodbc.so.2()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libodbcinst.so.2()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libogdi.so.3()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: librx.so.0()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libspatialite.so.2()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libxerces-c-3.0.so()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Running transaction check ---> Package gdal.x86_64 0:1.8.1-1.el6 will be installed --> Processing Dependency: libcfitsio.so.0()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libdap.so.11()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libdapclient.so.3()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libdapserver.so.7()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libgeos_c.so.1()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libgeotiff.so.2()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libhdf5.so.6()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libnetcdf.so.6()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libogdi.so.3()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: librx.so.0()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libspatialite.so.2()(64bit) for package: gdal-1.8.1-1.el6.x86_64 ---> Package unixODBC.x86_64 0:2.2.14-14.el6 will be installed --> Processing Dependency: libltdl.so.7()(64bit) for package: unixODBC-2.2.14-14.el6.x86_64 ---> Package xerces-c.x86_64 0:3.0.1-20.el6 will be installed --> Running transaction check ---> Package gdal.x86_64 0:1.8.1-1.el6 will be installed --> Processing Dependency: libcfitsio.so.0()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libdap.so.11()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: 
libdapclient.so.3()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libdapserver.so.7()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libgeos_c.so.1()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libgeotiff.so.2()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libhdf5.so.6()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libnetcdf.so.6()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libogdi.so.3()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: librx.so.0()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libspatialite.so.2()(64bit) for package: gdal-1.8.1-1.el6.x86_64 ---> Package libtool-ltdl.x86_64 0:2.2.6-15.5.el6 will be installed --> Finished Dependency Resolution Error: Package: gdal-1.8.1-1.el6.x86_64 (/gdal-1.8.1-1.el6.x86_64) Requires: libgeotiff.so.2()(64bit) Error: Package: gdal-1.8.1-1.el6.x86_64 (/gdal-1.8.1-1.el6.x86_64) Requires: libnetcdf.so.6()(64bit) Error: Package: gdal-1.8.1-1.el6.x86_64 (/gdal-1.8.1-1.el6.x86_64) Requires: libgeos_c.so.1()(64bit) Error: Package: gdal-1.8.1-1.el6.x86_64 (/gdal-1.8.1-1.el6.x86_64) Requires: librx.so.0()(64bit) Error: Package: gdal-1.8.1-1.el6.x86_64 (/gdal-1.8.1-1.el6.x86_64) Requires: libcfitsio.so.0()(64bit) Error: Package: gdal-1.8.1-1.el6.x86_64 (/gdal-1.8.1-1.el6.x86_64) Requires: libhdf5.so.6()(64bit) Error: Package: gdal-1.8.1-1.el6.x86_64 (/gdal-1.8.1-1.el6.x86_64) Requires: libspatialite.so.2()(64bit) Error: Package: gdal-1.8.1-1.el6.x86_64 (/gdal-1.8.1-1.el6.x86_64) Requires: libdap.so.11()(64bit) Error: Package: gdal-1.8.1-1.el6.x86_64 (/gdal-1.8.1-1.el6.x86_64) Requires: libogdi.so.3()(64bit) Error: Package: gdal-1.8.1-1.el6.x86_64 (/gdal-1.8.1-1.el6.x86_64) Requires: libdapserver.so.7()(64bit) Error: Package: gdal-1.8.1-1.el6.x86_64 (/gdal-1.8.1-1.el6.x86_64) Requires: libdapclient.so.3()(64bit) You could try using --skip-broken to work around the problem ** Found 1 pre-existing rpmdb problem(s), 'yum check' output follows: lgtoclnt-8.2.3.7-1.x86_64 has missing requires of libcap.so.1()(64bit) every package of those below dependencies needs a similar number of package in his turn. which means I need to install like a 100 of packages manully. I have been straggling with this problem like 3 days now and I don't know how to fix it
Install rgdal dependencies on local redhat I have to install rgdal package on R (this is an other question I posted before about rgdal and this is a related question which it doesn't work for redhat), So I must install some dependencies before install rgdal . if you check the CRAN depo here you will notice that GDAL and PROJ.4 are ones of required packages to build rgdal from source. knowing that I'm in linux Os (Redhat 6) and My server is local (not connected to internet only some redhat repositories which don't contain all redhat packages). I downloaded those packages and I used yum install to install them: For example this is what I got when I would install gdal : Resolving Dependencies --> Running transaction check ---> Package gdal.x86_64 0:1.8.1-1.el6 will be installed --> Processing Dependency: libcfitsio.so.0()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libdap.so.11()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libdapclient.so.3()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libdapserver.so.7()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libgeos_c.so.1()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libgeotiff.so.2()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libhdf5.so.6()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libnetcdf.so.6()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libodbc.so.2()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libodbcinst.so.2()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libogdi.so.3()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: librx.so.0()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libspatialite.so.2()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libxerces-c-3.0.so()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Running transaction check ---> Package gdal.x86_64 0:1.8.1-1.el6 will be installed --> Processing Dependency: libcfitsio.so.0()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libdap.so.11()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libdapclient.so.3()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libdapserver.so.7()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libgeos_c.so.1()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libgeotiff.so.2()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libhdf5.so.6()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libnetcdf.so.6()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libogdi.so.3()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: librx.so.0()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libspatialite.so.2()(64bit) for package: gdal-1.8.1-1.el6.x86_64 ---> Package unixODBC.x86_64 0:2.2.14-14.el6 will be installed --> Processing Dependency: libltdl.so.7()(64bit) for package: unixODBC-2.2.14-14.el6.x86_64 ---> Package xerces-c.x86_64 0:3.0.1-20.el6 will be installed --> Running transaction check ---> Package gdal.x86_64 0:1.8.1-1.el6 will be installed --> Processing Dependency: libcfitsio.so.0()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libdap.so.11()(64bit) for package: 
gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libdapclient.so.3()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libdapserver.so.7()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libgeos_c.so.1()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libgeotiff.so.2()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libhdf5.so.6()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libnetcdf.so.6()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libogdi.so.3()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: librx.so.0()(64bit) for package: gdal-1.8.1-1.el6.x86_64 --> Processing Dependency: libspatialite.so.2()(64bit) for package: gdal-1.8.1-1.el6.x86_64 ---> Package libtool-ltdl.x86_64 0:2.2.6-15.5.el6 will be installed --> Finished Dependency Resolution Error: Package: gdal-1.8.1-1.el6.x86_64 (/gdal-1.8.1-1.el6.x86_64) Requires: libgeotiff.so.2()(64bit) Error: Package: gdal-1.8.1-1.el6.x86_64 (/gdal-1.8.1-1.el6.x86_64) Requires: libnetcdf.so.6()(64bit) Error: Package: gdal-1.8.1-1.el6.x86_64 (/gdal-1.8.1-1.el6.x86_64) Requires: libgeos_c.so.1()(64bit) Error: Package: gdal-1.8.1-1.el6.x86_64 (/gdal-1.8.1-1.el6.x86_64) Requires: librx.so.0()(64bit) Error: Package: gdal-1.8.1-1.el6.x86_64 (/gdal-1.8.1-1.el6.x86_64) Requires: libcfitsio.so.0()(64bit) Error: Package: gdal-1.8.1-1.el6.x86_64 (/gdal-1.8.1-1.el6.x86_64) Requires: libhdf5.so.6()(64bit) Error: Package: gdal-1.8.1-1.el6.x86_64 (/gdal-1.8.1-1.el6.x86_64) Requires: libspatialite.so.2()(64bit) Error: Package: gdal-1.8.1-1.el6.x86_64 (/gdal-1.8.1-1.el6.x86_64) Requires: libdap.so.11()(64bit) Error: Package: gdal-1.8.1-1.el6.x86_64 (/gdal-1.8.1-1.el6.x86_64) Requires: libogdi.so.3()(64bit) Error: Package: gdal-1.8.1-1.el6.x86_64 (/gdal-1.8.1-1.el6.x86_64) Requires: libdapserver.so.7()(64bit) Error: Package: gdal-1.8.1-1.el6.x86_64 (/gdal-1.8.1-1.el6.x86_64) Requires: libdapclient.so.3()(64bit) You could try using --skip-broken to work around the problem ** Found 1 pre-existing rpmdb problem(s), 'yum check' output follows: lgtoclnt-8.2.3.7-1.x86_64 has missing requires of libcap.so.1()(64bit) every package of those below dependencies needs a similar number of package in his turn. which means I need to install like a 100 of packages manully. I have been straggling with this problem like 3 days now and I don't know how to fix it
r, redhat, yum, gdal, rgdal
2
395
0
https://stackoverflow.com/questions/49921609/install-rgdal-dependencies-on-local-redhat
49,875,962
Compiling gcc 4.5.0 in red hat 7
I'm having trouble compiling gcc 4.5.0 on Red Hat 7. I'm following the instructions from here ("The hard way", without libelf). I use the following versions: # rpm -qa | grep -e libelf -e gmp -e mpfr -e mpc mpfr-3.1.1-4.el7.x86_64 mpfr-devel-3.1.1-4.el7.x86_64 elfutils-libelf-0.170-4.el7.x86_64 elfutils-libelf-devel-0.170-4.el7.x86_64 gmp-6.0.0-15.el7.x86_64 libmpc-1.0.1-3.el7.x86_64 gmp-devel-6.0.0-15.el7.x86_64 While compiling, it doesn't find mpc.h: checking for the correct version of gmp.h... yes checking for the correct version of mpfr.h... yes checking for the correct version of mpc.h... no configure: error: Building GCC requires GMP 4.2+, MPFR 2.3.1+ and MPC 0.8.0+. Try the --with-gmp, --with-mpfr and/or --with-mpc options to specify their locations. Source code for these libraries can be found at their respective hosting sites as well as at ftp://gcc.gnu.org/pub/gcc/infrastructure/. See also [URL] for additional info. If you obtained GMP, MPFR and/or MPC from a vendor distribution package, make sure that you have installed both the libraries and the header files. They may be located in separate packages. So I compiled MPC myself. Here is the working configure: /opt/app/gcc/gcc-4.5.0_SOURCE/configure --prefix=/opt/app/gcc-4.5.0 --enable-languages=c,c++,fortran --enable-bootstrap --enable-shared --enable-threads=posix --enable-checking=release --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --with-mpc=/opt/app/gcc/tmp/ At the end of make I get this: ... ar rc libgcc.a $objects ranlib libgcc.a make[5]: Leaving directory /opt/app/gcc/gcc-4.5.0_BUILD/x86_64-unknown-linux-gnu/32/libgcc' make[4]: *** [multi-do] Error 1 make[4]: Leaving directory /opt/app/gcc/gcc-4.5.0_BUILD/x86_64-unknown-linux-gnu/libgcc' make[3]: *** [all-multi] Error 2 make[3]: Leaving directory /opt/app/gcc/gcc-4.5.0_BUILD/x86_64-unknown-linux-gnu/libgcc' make[2]: *** [all-stage1-target-libgcc] Error 2 make[2]: Leaving directory /opt/app/gcc/gcc-4.5.0_BUILD' make[1]: *** [stage1-bubble] Error 2 make[1]: Leaving directory /opt/app/gcc/gcc-4.5.0_BUILD' make: *** [all] Error 2 After some research I found out that texinfo was missing. Installing texinfo got me to a new failure: ... make[3]: *** [doc/gccint.info] Error 1 rm gcc.pod make[3]: Leaving directory /opt/app/gcc/gcc-4.5.0_BUILD/gcc' make[2]: *** [all-stage1-gcc] Error 2 make[2]: Leaving directory /opt/app/gcc/gcc-4.5.0_BUILD' make[1]: *** [stage1-bubble] Error 2 make[1]: Leaving directory /opt/app/gcc/gcc-4.5.0_BUILD' make: *** [all] Error 2 After some research I found here that texinfo has a bug. Now I'm trying to compile texinfo 4.13a but again, I get into trouble with no clear error message. Has anyone managed to compile gcc 4.5.0 on Red Hat 7? UPDATE: I can compile gcc 4.5.4, but ONLY IF texinfo is NOT installed ... kind regards
Compiling gcc 4.5.0 in red hat 7 I'm having trouble compiling gcc 4.5.0 on Red Hat 7. I'm following the instructions from here ("The hard way", without libelf). I use the following versions: # rpm -qa | grep -e libelf -e gmp -e mpfr -e mpc mpfr-3.1.1-4.el7.x86_64 mpfr-devel-3.1.1-4.el7.x86_64 elfutils-libelf-0.170-4.el7.x86_64 elfutils-libelf-devel-0.170-4.el7.x86_64 gmp-6.0.0-15.el7.x86_64 libmpc-1.0.1-3.el7.x86_64 gmp-devel-6.0.0-15.el7.x86_64 While compiling, it doesn't find mpc.h: checking for the correct version of gmp.h... yes checking for the correct version of mpfr.h... yes checking for the correct version of mpc.h... no configure: error: Building GCC requires GMP 4.2+, MPFR 2.3.1+ and MPC 0.8.0+. Try the --with-gmp, --with-mpfr and/or --with-mpc options to specify their locations. Source code for these libraries can be found at their respective hosting sites as well as at ftp://gcc.gnu.org/pub/gcc/infrastructure/. See also [URL] for additional info. If you obtained GMP, MPFR and/or MPC from a vendor distribution package, make sure that you have installed both the libraries and the header files. They may be located in separate packages. So I compiled MPC myself. Here is the working configure: /opt/app/gcc/gcc-4.5.0_SOURCE/configure --prefix=/opt/app/gcc-4.5.0 --enable-languages=c,c++,fortran --enable-bootstrap --enable-shared --enable-threads=posix --enable-checking=release --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --with-mpc=/opt/app/gcc/tmp/ At the end of make I get this: ... ar rc libgcc.a $objects ranlib libgcc.a make[5]: Leaving directory /opt/app/gcc/gcc-4.5.0_BUILD/x86_64-unknown-linux-gnu/32/libgcc' make[4]: *** [multi-do] Error 1 make[4]: Leaving directory /opt/app/gcc/gcc-4.5.0_BUILD/x86_64-unknown-linux-gnu/libgcc' make[3]: *** [all-multi] Error 2 make[3]: Leaving directory /opt/app/gcc/gcc-4.5.0_BUILD/x86_64-unknown-linux-gnu/libgcc' make[2]: *** [all-stage1-target-libgcc] Error 2 make[2]: Leaving directory /opt/app/gcc/gcc-4.5.0_BUILD' make[1]: *** [stage1-bubble] Error 2 make[1]: Leaving directory /opt/app/gcc/gcc-4.5.0_BUILD' make: *** [all] Error 2 After some research I found out that texinfo was missing. Installing texinfo got me to a new failure: ... make[3]: *** [doc/gccint.info] Error 1 rm gcc.pod make[3]: Leaving directory /opt/app/gcc/gcc-4.5.0_BUILD/gcc' make[2]: *** [all-stage1-gcc] Error 2 make[2]: Leaving directory /opt/app/gcc/gcc-4.5.0_BUILD' make[1]: *** [stage1-bubble] Error 2 make[1]: Leaving directory /opt/app/gcc/gcc-4.5.0_BUILD' make: *** [all] Error 2 After some research I found here that texinfo has a bug. Now I'm trying to compile texinfo 4.13a but again, I get into trouble with no clear error message. Has anyone managed to compile gcc 4.5.0 on Red Hat 7? UPDATE: I can compile gcc 4.5.4, but ONLY IF texinfo is NOT installed ... kind regards
gcc, compilation, redhat
2
753
1
https://stackoverflow.com/questions/49875962/compiling-gcc-4-5-0-in-red-hat-7
49,699,113
Websphere Remote Debugging: Connection failing from Eclipse
I am trying to connect remotely to a WAS 8.5 instance running on RHEL 6 from my Eclipse desktop on Windows. However, I'm having no luck and I can't see any errors on the server side. The only error I see is in Eclipse, which basically says the connection couldn't be made and to check security settings if security has been enabled. The connection failed after trying to use all the available connection types. Verify the port values are correct and the server has been started. If the security of the server is enabled, verify the "Security is enabled on this server" check box is selected, and the user ID and password are provided. You can specify this in the server editor or when creating a new server. For a Technote with details on the most common server connection problem, see [URL]. The last connection attempt failed with the following exception: ADMC0016E: The system cannot create a SOAP connector to connect to host [remote server] at port xxxx. Are there any logs I can turn on? I can see the request coming from my desktop machine on the RHEL host, but it seems either WAS is not getting it or something is failing or not set right in WAS. I am using WAS 8.5.5.1 runtimes on my desktop and WAS 8.5.5.9 on the RHEL host. Thanks
Websphere Remote Debugging: Connection failing from Eclipse I am trying to connect remotely to a WAS 8.5 instance running on RHEL 6 from my Eclipse desktop on Windows. However, I'm having no luck and I can't see any errors on the server side. The only error I see is in Eclipse, which basically says the connection couldn't be made and to check security settings if security has been enabled. The connection failed after trying to use all the available connection types. Verify the port values are correct and the server has been started. If the security of the server is enabled, verify the "Security is enabled on this server" check box is selected, and the user ID and password are provided. You can specify this in the server editor or when creating a new server. For a Technote with details on the most common server connection problem, see [URL]. The last connection attempt failed with the following exception: ADMC0016E: The system cannot create a SOAP connector to connect to host [remote server] at port xxxx. Are there any logs I can turn on? I can see the request coming from my desktop machine on the RHEL host, but it seems either WAS is not getting it or something is failing or not set right in WAS. I am using WAS 8.5.5.1 runtimes on my desktop and WAS 8.5.5.9 on the RHEL host. Thanks
eclipse, websphere, redhat, remote-debugging
2
623
0
https://stackoverflow.com/questions/49699113/websphere-remote-debugging-connection-failing-from-eclipse
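One way to narrow down whether the problem lies in Eclipse or in the SOAP connector itself is to attempt the same connection from a small standalone program built on the WebSphere admin thin client. The sketch below is only an illustration: the host, port, and credentials are placeholders, it assumes the admin thin client jar from the 8.5.5.9 install (for example com.ibm.ws.admin.client_8.5.0.jar) is on the classpath, and when administrative security is enabled the client JVM must also trust the server's SSL certificate (typically via the ssl.client.props and trust store shipped with the thin client).

import java.util.Properties;
import com.ibm.websphere.management.AdminClient;
import com.ibm.websphere.management.AdminClientFactory;

public class SoapConnectorCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty(AdminClient.CONNECTOR_TYPE, AdminClient.CONNECTOR_TYPE_SOAP);
        props.setProperty(AdminClient.CONNECTOR_HOST, "remote.server.example"); // placeholder host
        props.setProperty(AdminClient.CONNECTOR_PORT, "8880"); // the server's SOAP_CONNECTOR_ADDRESS port
        // Only needed when administrative security is enabled on the server:
        props.setProperty(AdminClient.CONNECTOR_SECURITY_ENABLED, "true");
        props.setProperty(AdminClient.USERNAME, "wasadmin"); // placeholder user
        props.setProperty(AdminClient.PASSWORD, "secret");   // placeholder password
        try {
            AdminClient client = AdminClientFactory.createAdminClient(props);
            System.out.println("Connected to server MBean: " + client.getServerMBean());
        } catch (Exception e) {
            // An ADMC0016E here reproduces the Eclipse failure outside the IDE.
            e.printStackTrace();
        }
    }
}

If this connects, the connector and credentials are fine and the Eclipse server definition is the likely culprit (mixing 8.5.5.1 tooling runtimes with an 8.5.5.9 server is worth checking); if it fails the same way, the server's SystemOut.log and any firewall rules on the SOAP port are the places to look.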
48,748,861
Birt error on Redhat when using charts
I get error when trying to render Birt reports containing charts on a redhat server. It works fine when reports don't contain any chart. It works fine when report are generated on windows. Birt version : 4.4.2 Redhat version : 7.2 Tomcat version : 8.0.39 Java version : openjdk version 1.8.0_91 OpenJDK Runtime Environment (build 1.8.0_91-b14) OpenJDK 64-Bit Server VM (build 25.91-b14, mixed mode) I tried all available formats for the chart : SVG, BMP, JPG and PNG. I get a different class in error when I use SVG but same kind of error. The main error is java.lang.NoClassDefFoundError: org.eclipse.birt.chart.device.image.JavaxImageIOWriter Here is the complet error stack : org.eclipse.birt.report.service.api.ReportServiceException: Error happened while running the report. AxisFault faultCode: {[URL] faultSubcode: faultString: org.eclipse.birt.report.service.api.ReportServiceException: Error happened while running the report. faultActor: faultNode: faultDetail: {[URL] Error happened while running the report. at org.eclipse.birt.report.service.ReportEngineService.throwDummyException(ReportEngineService.java:1115) at org.eclipse.birt.report.service.ReportEngineService.runAndRenderReport(ReportEngineService.java:943) at org.eclipse.birt.report.service.BirtViewerReportService.runAndRenderReport(BirtViewerReportService.java:973) at org.eclipse.birt.report.service.actionhandler.BirtRunAndRenderActionHandler.__execute(BirtRunAndRenderActionHandler.java:76) at org.eclipse.birt.report.service.actionhandler.AbstractBaseActionHandler.execute(AbstractBaseActionHandler.java:90) at org.eclipse.birt.report.presentation.aggregation.layout.RunFragment.doService(RunFragment.java:120) at org.eclipse.birt.report.presentation.aggregation.layout.FramesetFragment.service(FramesetFragment.java:86) at org.eclipse.birt.report.servlet.ViewerServlet.__doGet(ViewerServlet.java:181) at org.eclipse.birt.report.servlet.BirtSoapMessageDispatcherServlet.doGet(BirtSoapMessageDispatcherServlet.java:160) at javax.servlet.http.HttpServlet.service(HttpServlet.java:622) at org.apache.axis.transport.http.AxisServletBase.service(AxisServletBase.java:327) at javax.servlet.http.HttpServlet.service(HttpServlet.java:729) at org.eclipse.birt.report.servlet.BirtSoapMessageDispatcherServlet.service(BirtSoapMessageDispatcherServlet.java:122) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:292) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207) at org.eclipse.birt.report.filter.ViewerFilter.doFilter(ViewerFilter.java:68) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207) at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:212) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:106) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:502) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:141) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:79) at 
org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:616) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:88) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:509) at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1104) at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:684) at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1520) at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1476) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) at java.lang.Thread.run(Thread.java:745) Caused by: org.eclipse.birt.report.engine.api.EngineException: Error happened while running the report. at org.eclipse.birt.report.engine.api.impl.EngineTask.handleFatalExceptions(EngineTask.java:2396) at org.eclipse.birt.report.engine.api.impl.RunAndRenderTask.doRun(RunAndRenderTask.java:191) at org.eclipse.birt.report.engine.api.impl.RunAndRenderTask.run(RunAndRenderTask.java:77) at org.eclipse.birt.report.service.ReportEngineService.runAndRenderReport(ReportEngineService.java:937) ... 35 more Caused by: java.lang.NoClassDefFoundError: org.eclipse.birt.chart.device.image.JavaxImageIOWriter at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.eclipse.birt.core.framework.jar.ConfigurationElement.createExecutableExtension(ConfigurationElement.java:46) at org.eclipse.birt.core.framework.eclipse.EclipseConfigurationElement.createExecutableExtension(EclipseConfigurationElement.java:35) at org.eclipse.birt.chart.util.PluginSettings.getPluginXmlObject(PluginSettings.java:1258) at org.eclipse.birt.chart.util.PluginSettings.getDevice(PluginSettings.java:638) at org.eclipse.birt.chart.api.ChartEngine.getRenderer(ChartEngine.java:119) at org.eclipse.birt.chart.reportitem.ChartReportItemPresentationBase.prepareDeviceRenderer(ChartReportItemPresentationBase.java:1223) at org.eclipse.birt.chart.reportitem.ChartReportItemPresentationBase.generateRenderObject(ChartReportItemPresentationBase.java:972) at org.eclipse.birt.chart.reportitem.ChartReportItemPresentationBase.onRowSets(ChartReportItemPresentationBase.java:905) at org.eclipse.birt.chart.reportitem.ChartReportItemPresentationProxy.onRowSets(ChartReportItemPresentationProxy.java:108) at org.eclipse.birt.report.engine.presentation.LocalizedContentVisitor.processExtendedContent(LocalizedContentVisitor.java:1100) at org.eclipse.birt.report.engine.presentation.LocalizedContentVisitor.localizeForeign(LocalizedContentVisitor.java:602) at org.eclipse.birt.report.engine.presentation.LocalizedContentVisitor.localize(LocalizedContentVisitor.java:176) at org.eclipse.birt.report.engine.internal.executor.l18n.LocalizedReportItemExecutor.execute(LocalizedReportItemExecutor.java:37) at org.eclipse.birt.report.engine.layout.html.HTMLAbstractLM.traverse(HTMLAbstractLM.java:441) at org.eclipse.birt.report.engine.layout.html.HTMLAbstractLM.traverse(HTMLAbstractLM.java:442) at org.eclipse.birt.report.engine.layout.html.HTMLAbstractLM.handleVisibility(HTMLAbstractLM.java:374) at org.eclipse.birt.report.engine.layout.html.HTMLRowLM.handleVisibility(HTMLRowLM.java:33) at 
org.eclipse.birt.report.engine.layout.html.HTMLAbstractLM.layout(HTMLAbstractLM.java:123) at org.eclipse.birt.report.engine.layout.html.HTMLBlockStackingLM.layoutNodes(HTMLBlockStackingLM.java:71) at org.eclipse.birt.report.engine.layout.html.HTMLStackingLM.layoutChildren(HTMLStackingLM.java:26) at org.eclipse.birt.report.engine.layout.html.HTMLRepeatHeaderLM.layoutChildren(HTMLRepeatHeaderLM.java:46) at org.eclipse.birt.report.engine.layout.html.HTMLAbstractLM.layout(HTMLAbstractLM.java:140) at org.eclipse.birt.report.engine.layout.html.HTMLBlockStackingLM.layoutNodes(HTMLBlockStackingLM.java:71) at org.eclipse.birt.report.engine.layout.html.HTMLPageLM.layout(HTMLPageLM.java:92) at org.eclipse.birt.report.engine.layout.html.HTMLReportLayoutEngine.layout(HTMLReportLayoutEngine.java:100) at org.eclipse.birt.report.engine.api.impl.RunAndRenderTask.doRun(RunAndRenderTask.java:181) ... 37 more {[URL] {}:org.eclipse.birt.report.service.api.ReportServiceException: Error happened while running the report. at org.eclipse.birt.report.service.ReportEngineService.throwDummyException(ReportEngineService.java:1115) at org.eclipse.birt.report.service.ReportEngineService.runAndRenderReport(ReportEngineService.java:943) at org.eclipse.birt.report.service.BirtViewerReportService.runAndRenderReport(BirtViewerReportService.java:973) at org.eclipse.birt.report.service.actionhandler.BirtRunAndRenderActionHandler.__execute(BirtRunAndRenderActionHandler.java:76) at org.eclipse.birt.report.service.actionhandler.AbstractBaseActionHandler.execute(AbstractBaseActionHandler.java:90) at org.eclipse.birt.report.presentation.aggregation.layout.RunFragment.doService(RunFragment.java:120) at org.eclipse.birt.report.presentation.aggregation.layout.FramesetFragment.service(FramesetFragment.java:86) at org.eclipse.birt.report.servlet.ViewerServlet.__doGet(ViewerServlet.java:181) at org.eclipse.birt.report.servlet.BirtSoapMessageDispatcherServlet.doGet(BirtSoapMessageDispatcherServlet.java:160) at javax.servlet.http.HttpServlet.service(HttpServlet.java:622) at org.apache.axis.transport.http.AxisServletBase.service(AxisServletBase.java:327) at javax.servlet.http.HttpServlet.service(HttpServlet.java:729) at org.eclipse.birt.report.servlet.BirtSoapMessageDispatcherServlet.service(BirtSoapMessageDispatcherServlet.java:122) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:292) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207) at org.eclipse.birt.report.filter.ViewerFilter.doFilter(ViewerFilter.java:68) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207) at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:212) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:106) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:502) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:141) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:79) at 
org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:616) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:88) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:509) at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1104) at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:684) at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1520) at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1476) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) at java.lang.Thread.run(Thread.java:745) Caused by: org.eclipse.birt.report.engine.api.EngineException: Error happened while running the report. at org.eclipse.birt.report.engine.api.impl.EngineTask.handleFatalExceptions(EngineTask.java:2396) at org.eclipse.birt.report.engine.api.impl.RunAndRenderTask.doRun(RunAndRenderTask.java:191) at org.eclipse.birt.report.engine.api.impl.RunAndRenderTask.run(RunAndRenderTask.java:77) at org.eclipse.birt.report.service.ReportEngineService.runAndRenderReport(ReportEngineService.java:937) ... 35 more Caused by: java.lang.NoClassDefFoundError: org.eclipse.birt.chart.device.image.JavaxImageIOWriter at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.eclipse.birt.core.framework.jar.ConfigurationElement.createExecutableExtension(ConfigurationElement.java:46) at org.eclipse.birt.core.framework.eclipse.EclipseConfigurationElement.createExecutableExtension(EclipseConfigurationElement.java:35) at org.eclipse.birt.chart.util.PluginSettings.getPluginXmlObject(PluginSettings.java:1258) at org.eclipse.birt.chart.util.PluginSettings.getDevice(PluginSettings.java:638) at org.eclipse.birt.chart.api.ChartEngine.getRenderer(ChartEngine.java:119) at org.eclipse.birt.chart.reportitem.ChartReportItemPresentationBase.prepareDeviceRenderer(ChartReportItemPresentationBase.java:1223) at org.eclipse.birt.chart.reportitem.ChartReportItemPresentationBase.generateRenderObject(ChartReportItemPresentationBase.java:972) at org.eclipse.birt.chart.reportitem.ChartReportItemPresentationBase.onRowSets(ChartReportItemPresentationBase.java:905) at org.eclipse.birt.chart.reportitem.ChartReportItemPresentationProxy.onRowSets(ChartReportItemPresentationProxy.java:108) at org.eclipse.birt.report.engine.presentation.LocalizedContentVisitor.processExtendedContent(LocalizedContentVisitor.java:1100) at org.eclipse.birt.report.engine.presentation.LocalizedContentVisitor.localizeForeign(LocalizedContentVisitor.java:602) at org.eclipse.birt.report.engine.presentation.LocalizedContentVisitor.localize(LocalizedContentVisitor.java:176) at org.eclipse.birt.report.engine.internal.executor.l18n.LocalizedReportItemExecutor.execute(LocalizedReportItemExecutor.java:37) at org.eclipse.birt.report.engine.layout.html.HTMLAbstractLM.traverse(HTMLAbstractLM.java:441) at org.eclipse.birt.report.engine.layout.html.HTMLAbstractLM.traverse(HTMLAbstractLM.java:442) at org.eclipse.birt.report.engine.layout.html.HTMLAbstractLM.handleVisibility(HTMLAbstractLM.java:374) at org.eclipse.birt.report.engine.layout.html.HTMLRowLM.handleVisibility(HTMLRowLM.java:33) at 
org.eclipse.birt.report.engine.layout.html.HTMLAbstractLM.layout(HTMLAbstractLM.java:123) at org.eclipse.birt.report.engine.layout.html.HTMLBlockStackingLM.layoutNodes(HTMLBlockStackingLM.java:71) at org.eclipse.birt.report.engine.layout.html.HTMLStackingLM.layoutChildren(HTMLStackingLM.java:26) at org.eclipse.birt.report.engine.layout.html.HTMLRepeatHeaderLM.layoutChildren(HTMLRepeatHeaderLM.java:46) at org.eclipse.birt.report.engine.layout.html.HTMLAbstractLM.layout(HTMLAbstractLM.java:140) at org.eclipse.birt.report.engine.layout.html.HTMLBlockStackingLM.layoutNodes(HTMLBlockStackingLM.java:71) at org.eclipse.birt.report.engine.layout.html.HTMLPageLM.layout(HTMLPageLM.java:92) at org.eclipse.birt.report.engine.layout.html.HTMLReportLayoutEngine.layout(HTMLReportLayoutEngine.java:100) at org.eclipse.birt.report.engine.api.impl.RunAndRenderTask.doRun(RunAndRenderTask.java:181) ... 37 more org.eclipse.birt.report.service.api.ReportServiceException: Error happened while running the report. at org.apache.axis.AxisFault.makeFault(AxisFault.java:101) at org.eclipse.birt.report.utility.BirtUtility.makeAxisFault(BirtUtility.java:777) at org.eclipse.birt.report.service.actionhandler.AbstractBaseActionHandler.execute(AbstractBaseActionHandler.java:94) at org.eclipse.birt.report.presentation.aggregation.layout.RunFragment.doService(RunFragment.java:120) at org.eclipse.birt.report.presentation.aggregation.layout.FramesetFragment.service(FramesetFragment.java:86) at org.eclipse.birt.report.servlet.ViewerServlet.__doGet(ViewerServlet.java:181) at org.eclipse.birt.report.servlet.BirtSoapMessageDispatcherServlet.doGet(BirtSoapMessageDispatcherServlet.java:160) at javax.servlet.http.HttpServlet.service(HttpServlet.java:622) at org.apache.axis.transport.http.AxisServletBase.service(AxisServletBase.java:327) at javax.servlet.http.HttpServlet.service(HttpServlet.java:729) at org.eclipse.birt.report.servlet.BirtSoapMessageDispatcherServlet.service(BirtSoapMessageDispatcherServlet.java:122) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:292) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207) at org.eclipse.birt.report.filter.ViewerFilter.doFilter(ViewerFilter.java:68) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207) at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:212) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:106) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:502) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:141) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:79) at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:616) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:88) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:509) at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1104) at 
org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:684) at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1520) at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1476) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) at java.lang.Thread.run(Thread.java:745) Caused by: org.eclipse.birt.report.service.api.ReportServiceException: Error happened while running the report. at org.eclipse.birt.report.service.ReportEngineService.throwDummyException(ReportEngineService.java:1115) at org.eclipse.birt.report.service.ReportEngineService.runAndRenderReport(ReportEngineService.java:943) at org.eclipse.birt.report.service.BirtViewerReportService.runAndRenderReport(BirtViewerReportService.java:973) at org.eclipse.birt.report.service.actionhandler.BirtRunAndRenderActionHandler.__execute(BirtRunAndRenderActionHandler.java:76) at org.eclipse.birt.report.service.actionhandler.AbstractBaseActionHandler.execute(AbstractBaseActionHandler.java:90) ... 32 more Caused by: org.eclipse.birt.report.engine.api.EngineException: Error happened while running the report. at org.eclipse.birt.report.engine.api.impl.EngineTask.handleFatalExceptions(EngineTask.java:2396) at org.eclipse.birt.report.engine.api.impl.RunAndRenderTask.doRun(RunAndRenderTask.java:191) at org.eclipse.birt.report.engine.api.impl.RunAndRenderTask.run(RunAndRenderTask.java:77) at org.eclipse.birt.report.service.ReportEngineService.runAndRenderReport(ReportEngineService.java:937) ... 35 more Caused by: java.lang.NoClassDefFoundError: org.eclipse.birt.chart.device.image.JavaxImageIOWriter at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.eclipse.birt.core.framework.jar.ConfigurationElement.createExecutableExtension(ConfigurationElement.java:46) at org.eclipse.birt.core.framework.eclipse.EclipseConfigurationElement.createExecutableExtension(EclipseConfigurationElement.java:35) at org.eclipse.birt.chart.util.PluginSettings.getPluginXmlObject(PluginSettings.java:1258) at org.eclipse.birt.chart.util.PluginSettings.getDevice(PluginSettings.java:638) at org.eclipse.birt.chart.api.ChartEngine.getRenderer(ChartEngine.java:119) at org.eclipse.birt.chart.reportitem.ChartReportItemPresentationBase.prepareDeviceRenderer(ChartReportItemPresentationBase.java:1223) at org.eclipse.birt.chart.reportitem.ChartReportItemPresentationBase.generateRenderObject(ChartReportItemPresentationBase.java:972) at org.eclipse.birt.chart.reportitem.ChartReportItemPresentationBase.onRowSets(ChartReportItemPresentationBase.java:905) at org.eclipse.birt.chart.reportitem.ChartReportItemPresentationProxy.onRowSets(ChartReportItemPresentationProxy.java:108) at org.eclipse.birt.report.engine.presentation.LocalizedContentVisitor.processExtendedContent(LocalizedContentVisitor.java:1100) at org.eclipse.birt.report.engine.presentation.LocalizedContentVisitor.localizeForeign(LocalizedContentVisitor.java:602) at org.eclipse.birt.report.engine.presentation.LocalizedContentVisitor.localize(LocalizedContentVisitor.java:176) at org.eclipse.birt.report.engine.internal.executor.l18n.LocalizedReportItemExecutor.execute(LocalizedReportItemExecutor.java:37) at org.eclipse.birt.report.engine.layout.html.HTMLAbstractLM.traverse(HTMLAbstractLM.java:441) at 
org.eclipse.birt.report.engine.layout.html.HTMLAbstractLM.traverse(HTMLAbstractLM.java:442) at org.eclipse.birt.report.engine.layout.html.HTMLAbstractLM.handleVisibility(HTMLAbstractLM.java:374) at org.eclipse.birt.report.engine.layout.html.HTMLRowLM.handleVisibility(HTMLRowLM.java:33) at org.eclipse.birt.report.engine.layout.html.HTMLAbstractLM.layout(HTMLAbstractLM.java:123) at org.eclipse.birt.report.engine.layout.html.HTMLBlockStackingLM.layoutNodes(HTMLBlockStackingLM.java:71) at org.eclipse.birt.report.engine.layout.html.HTMLStackingLM.layoutChildren(HTMLStackingLM.java:26) at org.eclipse.birt.report.engine.layout.html.HTMLRepeatHeaderLM.layoutChildren(HTMLRepeatHeaderLM.java:46) at org.eclipse.birt.report.engine.layout.html.HTMLAbstractLM.layout(HTMLAbstractLM.java:140) at org.eclipse.birt.report.engine.layout.html.HTMLBlockStackingLM.layoutNodes(HTMLBlockStackingLM.java:71) at org.eclipse.birt.report.engine.layout.html.HTMLPageLM.layout(HTMLPageLM.java:92) at org.eclipse.birt.report.engine.layout.html.HTMLReportLayoutEngine.layout(HTMLReportLayoutEngine.java:100) at org.eclipse.birt.report.engine.api.impl.RunAndRenderTask.doRun(RunAndRenderTask.java:181) ... 37 more
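On Linux servers this particular NoClassDefFoundError is often either a chart device extension jar that is not visible to the web app's class loader on that machine or a java.awt initialization problem (no X display and headless mode not enabled), either of which would explain why the same report renders on Windows. The small diagnostic below is only a sketch, not part of the report: the class name comes from the stack trace above, and the assumption is that it is run on the server with the BIRT runtime's ReportEngine/lib jars on the classpath. It shows whether the class loads at all and whether headless mode is in effect; if isHeadless() is false, adding -Djava.awt.headless=true to CATALINA_OPTS is a commonly suggested fix.

import java.awt.GraphicsEnvironment;

public class ChartRenderCheck {
    public static void main(String[] args) {
        System.out.println("java.awt.headless = " + System.getProperty("java.awt.headless"));
        System.out.println("GraphicsEnvironment.isHeadless() = " + GraphicsEnvironment.isHeadless());
        try {
            // Class reported in the stack trace; it ships in the BIRT chart device extension jar.
            Class.forName("org.eclipse.birt.chart.device.image.JavaxImageIOWriter");
            System.out.println("JavaxImageIOWriter loaded OK");
        } catch (Throwable t) {
            // A NoClassDefFoundError or ExceptionInInitializerError here points at AWT/headless
            // or the classpath, not at the report design itself.
            t.printStackTrace();
        }
    }
}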
Birt error on Redhat when using charts I get error when trying to render Birt reports containing charts on a redhat server. It works fine when reports don't contain any chart. It works fine when report are generated on windows. Birt version : 4.4.2 Redhat version : 7.2 Tomcat version : 8.0.39 Java version : openjdk version 1.8.0_91 OpenJDK Runtime Environment (build 1.8.0_91-b14) OpenJDK 64-Bit Server VM (build 25.91-b14, mixed mode) I tried all available formats for the chart : SVG, BMP, JPG and PNG. I get a different class in error when I use SVG but same kind of error. The main error is java.lang.NoClassDefFoundError: org.eclipse.birt.chart.device.image.JavaxImageIOWriter Here is the complet error stack : org.eclipse.birt.report.service.api.ReportServiceException: Error happened while running the report. AxisFault faultCode: {[URL] faultSubcode: faultString: org.eclipse.birt.report.service.api.ReportServiceException: Error happened while running the report. faultActor: faultNode: faultDetail: {[URL] Error happened while running the report. at org.eclipse.birt.report.service.ReportEngineService.throwDummyException(ReportEngineService.java:1115) at org.eclipse.birt.report.service.ReportEngineService.runAndRenderReport(ReportEngineService.java:943) at org.eclipse.birt.report.service.BirtViewerReportService.runAndRenderReport(BirtViewerReportService.java:973) at org.eclipse.birt.report.service.actionhandler.BirtRunAndRenderActionHandler.__execute(BirtRunAndRenderActionHandler.java:76) at org.eclipse.birt.report.service.actionhandler.AbstractBaseActionHandler.execute(AbstractBaseActionHandler.java:90) at org.eclipse.birt.report.presentation.aggregation.layout.RunFragment.doService(RunFragment.java:120) at org.eclipse.birt.report.presentation.aggregation.layout.FramesetFragment.service(FramesetFragment.java:86) at org.eclipse.birt.report.servlet.ViewerServlet.__doGet(ViewerServlet.java:181) at org.eclipse.birt.report.servlet.BirtSoapMessageDispatcherServlet.doGet(BirtSoapMessageDispatcherServlet.java:160) at javax.servlet.http.HttpServlet.service(HttpServlet.java:622) at org.apache.axis.transport.http.AxisServletBase.service(AxisServletBase.java:327) at javax.servlet.http.HttpServlet.service(HttpServlet.java:729) at org.eclipse.birt.report.servlet.BirtSoapMessageDispatcherServlet.service(BirtSoapMessageDispatcherServlet.java:122) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:292) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207) at org.eclipse.birt.report.filter.ViewerFilter.doFilter(ViewerFilter.java:68) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207) at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:212) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:106) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:502) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:141) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:79) at 
org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:616) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:88) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:509) at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1104) at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:684) at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1520) at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1476) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) at java.lang.Thread.run(Thread.java:745) Caused by: org.eclipse.birt.report.engine.api.EngineException: Error happened while running the report. at org.eclipse.birt.report.engine.api.impl.EngineTask.handleFatalExceptions(EngineTask.java:2396) at org.eclipse.birt.report.engine.api.impl.RunAndRenderTask.doRun(RunAndRenderTask.java:191) at org.eclipse.birt.report.engine.api.impl.RunAndRenderTask.run(RunAndRenderTask.java:77) at org.eclipse.birt.report.service.ReportEngineService.runAndRenderReport(ReportEngineService.java:937) ... 35 more Caused by: java.lang.NoClassDefFoundError: org.eclipse.birt.chart.device.image.JavaxImageIOWriter at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.eclipse.birt.core.framework.jar.ConfigurationElement.createExecutableExtension(ConfigurationElement.java:46) at org.eclipse.birt.core.framework.eclipse.EclipseConfigurationElement.createExecutableExtension(EclipseConfigurationElement.java:35) at org.eclipse.birt.chart.util.PluginSettings.getPluginXmlObject(PluginSettings.java:1258) at org.eclipse.birt.chart.util.PluginSettings.getDevice(PluginSettings.java:638) at org.eclipse.birt.chart.api.ChartEngine.getRenderer(ChartEngine.java:119) at org.eclipse.birt.chart.reportitem.ChartReportItemPresentationBase.prepareDeviceRenderer(ChartReportItemPresentationBase.java:1223) at org.eclipse.birt.chart.reportitem.ChartReportItemPresentationBase.generateRenderObject(ChartReportItemPresentationBase.java:972) at org.eclipse.birt.chart.reportitem.ChartReportItemPresentationBase.onRowSets(ChartReportItemPresentationBase.java:905) at org.eclipse.birt.chart.reportitem.ChartReportItemPresentationProxy.onRowSets(ChartReportItemPresentationProxy.java:108) at org.eclipse.birt.report.engine.presentation.LocalizedContentVisitor.processExtendedContent(LocalizedContentVisitor.java:1100) at org.eclipse.birt.report.engine.presentation.LocalizedContentVisitor.localizeForeign(LocalizedContentVisitor.java:602) at org.eclipse.birt.report.engine.presentation.LocalizedContentVisitor.localize(LocalizedContentVisitor.java:176) at org.eclipse.birt.report.engine.internal.executor.l18n.LocalizedReportItemExecutor.execute(LocalizedReportItemExecutor.java:37) at org.eclipse.birt.report.engine.layout.html.HTMLAbstractLM.traverse(HTMLAbstractLM.java:441) at org.eclipse.birt.report.engine.layout.html.HTMLAbstractLM.traverse(HTMLAbstractLM.java:442) at org.eclipse.birt.report.engine.layout.html.HTMLAbstractLM.handleVisibility(HTMLAbstractLM.java:374) at org.eclipse.birt.report.engine.layout.html.HTMLRowLM.handleVisibility(HTMLRowLM.java:33) at 
org.eclipse.birt.report.engine.layout.html.HTMLAbstractLM.layout(HTMLAbstractLM.java:123) at org.eclipse.birt.report.engine.layout.html.HTMLBlockStackingLM.layoutNodes(HTMLBlockStackingLM.java:71) at org.eclipse.birt.report.engine.layout.html.HTMLStackingLM.layoutChildren(HTMLStackingLM.java:26) at org.eclipse.birt.report.engine.layout.html.HTMLRepeatHeaderLM.layoutChildren(HTMLRepeatHeaderLM.java:46) at org.eclipse.birt.report.engine.layout.html.HTMLAbstractLM.layout(HTMLAbstractLM.java:140) at org.eclipse.birt.report.engine.layout.html.HTMLBlockStackingLM.layoutNodes(HTMLBlockStackingLM.java:71) at org.eclipse.birt.report.engine.layout.html.HTMLPageLM.layout(HTMLPageLM.java:92) at org.eclipse.birt.report.engine.layout.html.HTMLReportLayoutEngine.layout(HTMLReportLayoutEngine.java:100) at org.eclipse.birt.report.engine.api.impl.RunAndRenderTask.doRun(RunAndRenderTask.java:181) ... 37 more {[URL] {}:org.eclipse.birt.report.service.api.ReportServiceException: Error happened while running the report. at org.eclipse.birt.report.service.ReportEngineService.throwDummyException(ReportEngineService.java:1115) at org.eclipse.birt.report.service.ReportEngineService.runAndRenderReport(ReportEngineService.java:943) at org.eclipse.birt.report.service.BirtViewerReportService.runAndRenderReport(BirtViewerReportService.java:973) at org.eclipse.birt.report.service.actionhandler.BirtRunAndRenderActionHandler.__execute(BirtRunAndRenderActionHandler.java:76) at org.eclipse.birt.report.service.actionhandler.AbstractBaseActionHandler.execute(AbstractBaseActionHandler.java:90) at org.eclipse.birt.report.presentation.aggregation.layout.RunFragment.doService(RunFragment.java:120) at org.eclipse.birt.report.presentation.aggregation.layout.FramesetFragment.service(FramesetFragment.java:86) at org.eclipse.birt.report.servlet.ViewerServlet.__doGet(ViewerServlet.java:181) at org.eclipse.birt.report.servlet.BirtSoapMessageDispatcherServlet.doGet(BirtSoapMessageDispatcherServlet.java:160) at javax.servlet.http.HttpServlet.service(HttpServlet.java:622) at org.apache.axis.transport.http.AxisServletBase.service(AxisServletBase.java:327) at javax.servlet.http.HttpServlet.service(HttpServlet.java:729) at org.eclipse.birt.report.servlet.BirtSoapMessageDispatcherServlet.service(BirtSoapMessageDispatcherServlet.java:122) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:292) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207) at org.eclipse.birt.report.filter.ViewerFilter.doFilter(ViewerFilter.java:68) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207) at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:212) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:106) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:502) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:141) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:79) at 
org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:616) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:88) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:509) at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1104) at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:684) at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1520) at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1476) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) at java.lang.Thread.run(Thread.java:745) Caused by: org.eclipse.birt.report.engine.api.EngineException: Error happened while running the report. at org.eclipse.birt.report.engine.api.impl.EngineTask.handleFatalExceptions(EngineTask.java:2396) at org.eclipse.birt.report.engine.api.impl.RunAndRenderTask.doRun(RunAndRenderTask.java:191) at org.eclipse.birt.report.engine.api.impl.RunAndRenderTask.run(RunAndRenderTask.java:77) at org.eclipse.birt.report.service.ReportEngineService.runAndRenderReport(ReportEngineService.java:937) ... 35 more Caused by: java.lang.NoClassDefFoundError: org.eclipse.birt.chart.device.image.JavaxImageIOWriter at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.eclipse.birt.core.framework.jar.ConfigurationElement.createExecutableExtension(ConfigurationElement.java:46) at org.eclipse.birt.core.framework.eclipse.EclipseConfigurationElement.createExecutableExtension(EclipseConfigurationElement.java:35) at org.eclipse.birt.chart.util.PluginSettings.getPluginXmlObject(PluginSettings.java:1258) at org.eclipse.birt.chart.util.PluginSettings.getDevice(PluginSettings.java:638) at org.eclipse.birt.chart.api.ChartEngine.getRenderer(ChartEngine.java:119) at org.eclipse.birt.chart.reportitem.ChartReportItemPresentationBase.prepareDeviceRenderer(ChartReportItemPresentationBase.java:1223) at org.eclipse.birt.chart.reportitem.ChartReportItemPresentationBase.generateRenderObject(ChartReportItemPresentationBase.java:972) at org.eclipse.birt.chart.reportitem.ChartReportItemPresentationBase.onRowSets(ChartReportItemPresentationBase.java:905) at org.eclipse.birt.chart.reportitem.ChartReportItemPresentationProxy.onRowSets(ChartReportItemPresentationProxy.java:108) at org.eclipse.birt.report.engine.presentation.LocalizedContentVisitor.processExtendedContent(LocalizedContentVisitor.java:1100) at org.eclipse.birt.report.engine.presentation.LocalizedContentVisitor.localizeForeign(LocalizedContentVisitor.java:602) at org.eclipse.birt.report.engine.presentation.LocalizedContentVisitor.localize(LocalizedContentVisitor.java:176) at org.eclipse.birt.report.engine.internal.executor.l18n.LocalizedReportItemExecutor.execute(LocalizedReportItemExecutor.java:37) at org.eclipse.birt.report.engine.layout.html.HTMLAbstractLM.traverse(HTMLAbstractLM.java:441) at org.eclipse.birt.report.engine.layout.html.HTMLAbstractLM.traverse(HTMLAbstractLM.java:442) at org.eclipse.birt.report.engine.layout.html.HTMLAbstractLM.handleVisibility(HTMLAbstractLM.java:374) at org.eclipse.birt.report.engine.layout.html.HTMLRowLM.handleVisibility(HTMLRowLM.java:33) at 
org.eclipse.birt.report.engine.layout.html.HTMLAbstractLM.layout(HTMLAbstractLM.java:123) at org.eclipse.birt.report.engine.layout.html.HTMLBlockStackingLM.layoutNodes(HTMLBlockStackingLM.java:71) at org.eclipse.birt.report.engine.layout.html.HTMLStackingLM.layoutChildren(HTMLStackingLM.java:26) at org.eclipse.birt.report.engine.layout.html.HTMLRepeatHeaderLM.layoutChildren(HTMLRepeatHeaderLM.java:46) at org.eclipse.birt.report.engine.layout.html.HTMLAbstractLM.layout(HTMLAbstractLM.java:140) at org.eclipse.birt.report.engine.layout.html.HTMLBlockStackingLM.layoutNodes(HTMLBlockStackingLM.java:71) at org.eclipse.birt.report.engine.layout.html.HTMLPageLM.layout(HTMLPageLM.java:92) at org.eclipse.birt.report.engine.layout.html.HTMLReportLayoutEngine.layout(HTMLReportLayoutEngine.java:100) at org.eclipse.birt.report.engine.api.impl.RunAndRenderTask.doRun(RunAndRenderTask.java:181) ... 37 more org.eclipse.birt.report.service.api.ReportServiceException: Error happened while running the report. at org.apache.axis.AxisFault.makeFault(AxisFault.java:101) at org.eclipse.birt.report.utility.BirtUtility.makeAxisFault(BirtUtility.java:777) at org.eclipse.birt.report.service.actionhandler.AbstractBaseActionHandler.execute(AbstractBaseActionHandler.java:94) at org.eclipse.birt.report.presentation.aggregation.layout.RunFragment.doService(RunFragment.java:120) at org.eclipse.birt.report.presentation.aggregation.layout.FramesetFragment.service(FramesetFragment.java:86) at org.eclipse.birt.report.servlet.ViewerServlet.__doGet(ViewerServlet.java:181) at org.eclipse.birt.report.servlet.BirtSoapMessageDispatcherServlet.doGet(BirtSoapMessageDispatcherServlet.java:160) at javax.servlet.http.HttpServlet.service(HttpServlet.java:622) at org.apache.axis.transport.http.AxisServletBase.service(AxisServletBase.java:327) at javax.servlet.http.HttpServlet.service(HttpServlet.java:729) at org.eclipse.birt.report.servlet.BirtSoapMessageDispatcherServlet.service(BirtSoapMessageDispatcherServlet.java:122) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:292) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207) at org.eclipse.birt.report.filter.ViewerFilter.doFilter(ViewerFilter.java:68) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207) at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:212) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:106) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:502) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:141) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:79) at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:616) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:88) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:509) at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1104) at 
org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:684) at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1520) at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1476) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) at java.lang.Thread.run(Thread.java:745) Caused by: org.eclipse.birt.report.service.api.ReportServiceException: Error happened while running the report. at org.eclipse.birt.report.service.ReportEngineService.throwDummyException(ReportEngineService.java:1115) at org.eclipse.birt.report.service.ReportEngineService.runAndRenderReport(ReportEngineService.java:943) at org.eclipse.birt.report.service.BirtViewerReportService.runAndRenderReport(BirtViewerReportService.java:973) at org.eclipse.birt.report.service.actionhandler.BirtRunAndRenderActionHandler.__execute(BirtRunAndRenderActionHandler.java:76) at org.eclipse.birt.report.service.actionhandler.AbstractBaseActionHandler.execute(AbstractBaseActionHandler.java:90) ... 32 more Caused by: org.eclipse.birt.report.engine.api.EngineException: Error happened while running the report. at org.eclipse.birt.report.engine.api.impl.EngineTask.handleFatalExceptions(EngineTask.java:2396) at org.eclipse.birt.report.engine.api.impl.RunAndRenderTask.doRun(RunAndRenderTask.java:191) at org.eclipse.birt.report.engine.api.impl.RunAndRenderTask.run(RunAndRenderTask.java:77) at org.eclipse.birt.report.service.ReportEngineService.runAndRenderReport(ReportEngineService.java:937) ... 35 more Caused by: java.lang.NoClassDefFoundError: org.eclipse.birt.chart.device.image.JavaxImageIOWriter at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.eclipse.birt.core.framework.jar.ConfigurationElement.createExecutableExtension(ConfigurationElement.java:46) at org.eclipse.birt.core.framework.eclipse.EclipseConfigurationElement.createExecutableExtension(EclipseConfigurationElement.java:35) at org.eclipse.birt.chart.util.PluginSettings.getPluginXmlObject(PluginSettings.java:1258) at org.eclipse.birt.chart.util.PluginSettings.getDevice(PluginSettings.java:638) at org.eclipse.birt.chart.api.ChartEngine.getRenderer(ChartEngine.java:119) at org.eclipse.birt.chart.reportitem.ChartReportItemPresentationBase.prepareDeviceRenderer(ChartReportItemPresentationBase.java:1223) at org.eclipse.birt.chart.reportitem.ChartReportItemPresentationBase.generateRenderObject(ChartReportItemPresentationBase.java:972) at org.eclipse.birt.chart.reportitem.ChartReportItemPresentationBase.onRowSets(ChartReportItemPresentationBase.java:905) at org.eclipse.birt.chart.reportitem.ChartReportItemPresentationProxy.onRowSets(ChartReportItemPresentationProxy.java:108) at org.eclipse.birt.report.engine.presentation.LocalizedContentVisitor.processExtendedContent(LocalizedContentVisitor.java:1100) at org.eclipse.birt.report.engine.presentation.LocalizedContentVisitor.localizeForeign(LocalizedContentVisitor.java:602) at org.eclipse.birt.report.engine.presentation.LocalizedContentVisitor.localize(LocalizedContentVisitor.java:176) at org.eclipse.birt.report.engine.internal.executor.l18n.LocalizedReportItemExecutor.execute(LocalizedReportItemExecutor.java:37) at org.eclipse.birt.report.engine.layout.html.HTMLAbstractLM.traverse(HTMLAbstractLM.java:441) at 
org.eclipse.birt.report.engine.layout.html.HTMLAbstractLM.traverse(HTMLAbstractLM.java:442) at org.eclipse.birt.report.engine.layout.html.HTMLAbstractLM.handleVisibility(HTMLAbstractLM.java:374) at org.eclipse.birt.report.engine.layout.html.HTMLRowLM.handleVisibility(HTMLRowLM.java:33) at org.eclipse.birt.report.engine.layout.html.HTMLAbstractLM.layout(HTMLAbstractLM.java:123) at org.eclipse.birt.report.engine.layout.html.HTMLBlockStackingLM.layoutNodes(HTMLBlockStackingLM.java:71) at org.eclipse.birt.report.engine.layout.html.HTMLStackingLM.layoutChildren(HTMLStackingLM.java:26) at org.eclipse.birt.report.engine.layout.html.HTMLRepeatHeaderLM.layoutChildren(HTMLRepeatHeaderLM.java:46) at org.eclipse.birt.report.engine.layout.html.HTMLAbstractLM.layout(HTMLAbstractLM.java:140) at org.eclipse.birt.report.engine.layout.html.HTMLBlockStackingLM.layoutNodes(HTMLBlockStackingLM.java:71) at org.eclipse.birt.report.engine.layout.html.HTMLPageLM.layout(HTMLPageLM.java:92) at org.eclipse.birt.report.engine.layout.html.HTMLReportLayoutEngine.layout(HTMLReportLayoutEngine.java:100) at org.eclipse.birt.report.engine.api.impl.RunAndRenderTask.doRun(RunAndRenderTask.java:181) ... 37 more
linux, redhat, birt
2
215
0
https://stackoverflow.com/questions/48748861/birt-error-on-redhat-when-using-charts
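For the BIRT chart failure above, the NoClassDefFoundError on org.eclipse.birt.chart.device.image.JavaxImageIOWriter usually points at either chart device plugins missing from the deployed viewer or AWT/ImageIO failing to initialise on a server without an X display. A minimal diagnostic sketch in shell, assuming a birt.war deployed under a Tomcat at /opt/tomcat (both paths are assumptions):

# Are the chart engine / chart device jars actually present in the webapp?
ls /opt/tomcat/webapps/birt/WEB-INF/lib | grep -i -e chart -e device

# Run the JVM headless so java.awt/ImageIO can initialise without an X display
echo 'export CATALINA_OPTS="$CATALINA_OPTS -Djava.awt.headless=true"' >> /opt/tomcat/bin/setenv.sh

If the grep turns up nothing, recopying the full set of jars from the BIRT runtime download into WEB-INF/lib is the usual next step.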
48,416,295
ODBC Issue in Red Hat linux release 6.9:
ODBC Issue The database reported an error: [unixODBC][Driver Manager]Data Source name not found, and no default driver specified As shown as below, QT application trying to connect a database file of type DBF ( .dbf ), when we try to execute the application we got the above error. QSqlDatabase db; db = QSqlDatabase::addDatabase("QODBC"); QString str("DRIVER={Microsoft dBase Driver (*.dbf)}; DBQ=/path/to/dbf/files"); db.setDatabaseName(str); if(db.open()) { ... } else { ...// Failure } Referred the link ( [URL] ) and followed the below steps: The configuration files odbc.ini and odbcinst.ini are present with the appropriate content. Exported the variables ODBCSYSINI , ODBCINSTINI and ODBCINI with /etc/odbc.ini , /etc/odbcinst.ini and /home/user/.odbc.ini From the same shell where we have exported the variables, tried to execute the application however encountered with the error "The database reported an error: [unixODBC][Driver Manager]Data Source name not found, and no default driver specified Please find the content of the odbc.ini and odbcinst.ini odbc.ini file: [ODBC Data Sources] TestODBC=MyODBCDriver [TestODBC] Driver=path/to/driver file DataDirectory=path/to/where my dbf files resides [Default] Driver=path/to/driverfile DataDirectory=path/to/where my dbf files resides odbcinst.ini file: [ODBC Drivers] MyODBCDRIVER=Installed [MyODBCDriver] Description=ODBC Driver Driver=/path/to Driver file [ODBC] Trace = Yes Please provide any suggestion or solution to resolve the issue
ODBC Issue in Red Hat linux release 6.9: ODBC Issue The database reported an error: [unixODBC][Driver Manager]Data Source name not found, and no default driver specified As shown as below, QT application trying to connect a database file of type DBF ( .dbf ), when we try to execute the application we got the above error. QSqlDatabase db; db = QSqlDatabase::addDatabase("QODBC"); QString str("DRIVER={Microsoft dBase Driver (*.dbf)}; DBQ=/path/to/dbf/files"); db.setDatabaseName(str); if(db.open()) { ... } else { ...// Failure } Referred the link ( [URL] ) and followed the below steps: The configuration files odbc.ini and odbcinst.ini are present with the appropriate content. Exported the variables ODBCSYSINI , ODBCINSTINI and ODBCINI with /etc/odbc.ini , /etc/odbcinst.ini and /home/user/.odbc.ini From the same shell where we have exported the variables, tried to execute the application however encountered with the error "The database reported an error: [unixODBC][Driver Manager]Data Source name not found, and no default driver specified Please find the content of the odbc.ini and odbcinst.ini odbc.ini file: [ODBC Data Sources] TestODBC=MyODBCDriver [TestODBC] Driver=path/to/driver file DataDirectory=path/to/where my dbf files resides [Default] Driver=path/to/driverfile DataDirectory=path/to/where my dbf files resides odbcinst.ini file: [ODBC Drivers] MyODBCDRIVER=Installed [MyODBCDriver] Description=ODBC Driver Driver=/path/to Driver file [ODBC] Trace = Yes Please provide any suggestion or solution to resolve the issue
qt, redhat, unixodbc
2
273
0
https://stackoverflow.com/questions/48416295/odbc-issue-in-red-hat-linux-release-6-9
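Two things stand out in the ODBC question above: ODBCSYSINI must point to the directory holding the system ini files (e.g. /etc), not to /etc/odbc.ini itself, and the DRIVER={Microsoft dBase Driver (*.dbf)} string names a Windows driver that unixODBC on Red Hat will not have, so a Linux-capable DBF driver has to be registered in odbcinst.ini and referenced by DSN. A hedged sketch using the TestODBC DSN already defined in the question:

# ODBCSYSINI is a directory, ODBCINSTINI a file name inside it, ODBCINI a path
export ODBCSYSINI=/etc
export ODBCINSTINI=odbcinst.ini
export ODBCINI=/etc/odbc.ini

odbcinst -q -d     # drivers unixODBC can see (from odbcinst.ini)
odbcinst -q -s     # DSNs unixODBC can see (from odbc.ini)
isql -v TestODBC   # connect through the DSN name rather than a DRIVER= string

On the Qt side the same idea would be db.setDatabaseName("TestODBC"), but that is a sketch of the approach, not the asker's code.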
47,914,141
Spring Boot and Keycloak
I would like some help making my Spring Boot applications more secure. I have a RESTful API that currently has no security implemented. This API is accessed by another Spring Boot application through HTTP requests (GET, POST, PUT ...). Recently, I walked through a Red Hat tutorial which demonstrated how to secure a Spring Boot application using Keycloak. I want to learn how I can use this security combination (Spring Security + Keycloak) for a Spring Boot application that has a desktop application (also in Java) as its client. Any advice would come in handy. Thank you, Celso
Spring Boot and Keycloak I would like some help making my Spring Boot applications more secure. I have a RESTful API that currently has no security implemented. This API is accessed by another Spring Boot application through HTTP requests (GET, POST, PUT ...). Recently, I walked through a Red Hat tutorial which demonstrated how to secure a Spring Boot application using Keycloak. I want to learn how I can use this security combination (Spring Security + Keycloak) for a Spring Boot application that has a desktop application (also in Java) as its client. Any advice would come in handy. Thank you, Celso
java, spring-boot, spring-security, redhat, keycloak
2
784
1
https://stackoverflow.com/questions/47914141/spring-boot-and-keycloack
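For the Spring Boot/Keycloak question above, the usual pattern at the time was the Keycloak Spring Boot adapter: the REST API is configured as a bearer-only client and the desktop client obtains a token from Keycloak and sends it in the Authorization header. A minimal sketch, where the realm, client id and server URL are placeholders:

# add org.keycloak:keycloak-spring-boot-starter to the API's build, then:
cat >> src/main/resources/application.properties <<'EOF'
keycloak.auth-server-url=https://keycloak.example.com/auth
keycloak.realm=my-realm
keycloak.resource=my-rest-api
keycloak.bearer-only=true
EOF

The Java desktop client would then authenticate against the same realm (for example with the resource-owner password grant) and attach the resulting access token as an Authorization: Bearer header on each request.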
46,968,578
Multiple Aliases for different commands
I am attempting to create a bash script that will ssh into remote network devices, run commands based on the model, and then save the output. At this time I have my expect file that contains the following: #!/user/bin/expect set pw xxxxxxx set timeout 5 spawn ssh [lindex $argv 0] expect "TACACS Password:" send "$pw\r" interact I have my .sh file that contains variables which allows me to login to separate "host" files based on Model type. It contains: shopt -s expand_aliases fpath="path where scripts are located" opath="MAC_Results.log" for i in $( cat $fpath/3560hosts ) do expect script.exp $i >> "$opath" done When I run my .sh, everything operates as expected. My issue lies in I do not know how to call my aliases. I have edited the .bashrc and have sourced it. The .bashrc contains the following: # .bashrc # Source global definitions if [ -f /etc/bashrc ]; then . /etc/bashrc fi # User specific aliases and functions alias loc3560="term length 0; show mac address-table | ex Gi|CPU|Po; exit" alias locx="term length 0; show mac address-table | ex Gi[1|2]/1|CPU|Vl99|Po1|dynamic|system; exit" I have also added the aliases within my .sh aliases. but cant seem to get the right syntax. I have tried the following variations but with no success... for i in $( cat $fpath/3560hosts ) do expect script.exp $i $loc3560 >> "$opath" done and for i in $( cat $fpath/3560hosts ) do expect script.exp $i >> "$opath"; $loc3560 done Would appreciate any suggestions on where to put these to call to them.
Multiple Aliases for different commands I am attempting to create a bash script that will ssh into remote network devices, run commands based on the model, and then save the output. At this time I have my expect file that contains the following: #!/user/bin/expect set pw xxxxxxx set timeout 5 spawn ssh [lindex $argv 0] expect "TACACS Password:" send "$pw\r" interact I have my .sh file that contains variables which allows me to login to separate "host" files based on Model type. It contains: shopt -s expand_aliases fpath="path where scripts are located" opath="MAC_Results.log" for i in $( cat $fpath/3560hosts ) do expect script.exp $i >> "$opath" done When I run my .sh, everything operates as expected. My issue lies in I do not know how to call my aliases. I have edited the .bashrc and have sourced it. The .bashrc contains the following: # .bashrc # Source global definitions if [ -f /etc/bashrc ]; then . /etc/bashrc fi # User specific aliases and functions alias loc3560="term length 0; show mac address-table | ex Gi|CPU|Po; exit" alias locx="term length 0; show mac address-table | ex Gi[1|2]/1|CPU|Vl99|Po1|dynamic|system; exit" I have also added the aliases within my .sh aliases. but cant seem to get the right syntax. I have tried the following variations but with no success... for i in $( cat $fpath/3560hosts ) do expect script.exp $i $loc3560 >> "$opath" done and for i in $( cat $fpath/3560hosts ) do expect script.exp $i >> "$opath"; $loc3560 done Would appreciate any suggestions on where to put these to call to them.
bash, unix, redhat
2
118
1
https://stackoverflow.com/questions/46968578/multiple-aliases-for-different-commands
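A likely snag in the alias question above is that the aliases hold remote IOS command strings, and bash does not expand aliases inside scripts anyway; passing the command text to the expect script as an ordinary argument avoids both problems. A sketch, assuming script.exp is extended to send its extra argument (for example send "[lindex $argv 1]\r" before interact):

# plain variables instead of aliases; the strings are remote commands, not local ones
loc3560='term length 0 ; show mac address-table | ex Gi|CPU|Po'

while read -r host; do
    expect script.exp "$host" "$loc3560" >> "$opath"
done < "$fpath/3560hosts"

The same wrapper with a second variable (locx) and a different host file would cover the other switch model.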
46,555,559
PHP move_uploaded_file to NFS mount that is full
PHP 5.6 / Apache 2.4 / Redhat 6.7 Using PHP's move_uploaded_file to try and save an uploaded file to an NFS mount that is 100% full, I would expect it to return false and throw an error. However, no error is thrown and the file is moved to the NFS mount, but the filesize is 0 bytes after being moved. Has anyone else encountered this before? I'm curious as to why an error is not thrown and PHP thinks it has successfully moved the uploaded file, but in reality the files contents are not copied and it is 0 bytes in size. Obviously, I can manually check the filesize of the tmp file before moving it, and then compare it against the filesize of the destination file after it is moved, but I would like to understand why this happens.
PHP move_uploaded_file to NFS mount that is full PHP 5.6 / Apache 2.4 / Redhat 6.7 Using PHP's move_uploaded_file to try and save an uploaded file to an NFS mount that is 100% full, I would expect it to return false and throw an error. However, no error is thrown and the file is moved to the NFS mount, but the filesize is 0 bytes after being moved. Has anyone else encountered this before? I'm curious as to why an error is not thrown and PHP thinks it has successfully moved the uploaded file, but in reality the files contents are not copied and it is 0 bytes in size. Obviously, I can manually check the filesize of the tmp file before moving it, and then compare it against the filesize of the destination file after it is moved, but I would like to understand why this happens.
php, redhat, nfs
2
275
0
https://stackoverflow.com/questions/46555559/php-move-uploaded-file-to-nfs-mount-that-is-full
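For the full-NFS-mount question above, one plausible explanation is that the NFS client buffers the write and only reports ENOSPC when the data is flushed, after PHP has already treated the move as successful, so checking the result on disk is the dependable option (much as the asker already does by comparing file sizes). A small shell-side sketch, where /mnt/uploads and file.bin are placeholders:

df -P /mnt/uploads | awk 'NR==2 {print $4 " KB available"}'   # refuse uploads when this is near zero
stat -c '%s bytes' /mnt/uploads/file.bin                      # 0 here means the contents never reached the share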
46,430,265
Unable to generate system core dump after setting "ulimit -c unlimited"
I'm running on: Red Hat Enterprise Linux Server release 6.3 (Santiago) Error: Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again After setting the core file size to unlimited, and confirming the settings with: $ ulimit -a core file size (blocks, -c) unlimited data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 773690 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 65536 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 10240 cpu time (seconds, -t) unlimited max user processes (-u) 65536 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited I am still unable to have the JVM crash create the system core dump... Any ideas on what may be preventing this? One thing that should be noted, if I write a little C++ program which intentionally causes a segmentation fault, then the coredump file is generated immediately: #include <signal.h> int main() { raise (SIGSEGV); } $ ./crash Segmentation fault (core dumped) Produces: core.43969
Unable to generate system core dump after setting &quot;ulimit -c unlimited&quot; I'm running on: Red Hat Enterprise Linux Server release 6.3 (Santiago) Error: Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again After setting the core file size to unlimited, and confirming the settings with: $ ulimit -a core file size (blocks, -c) unlimited data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 773690 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 65536 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 10240 cpu time (seconds, -t) unlimited max user processes (-u) 65536 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited I am still unable to have the JVM crash create the system core dump... Any ideas on what may be preventing this? One thing that should be noted, if I write a little C++ program which intentionally causes a segmentation fault, then the coredump file is generated immediately: #include <signal.h> int main() { raise (SIGSEGV); } $ ./crash Segmentation fault (core dumped) Produces: core.43969
java, jvm, redhat, coredump
2
3,208
0
https://stackoverflow.com/questions/46430265/unable-to-generate-system-core-dump-after-setting-ulimit-c-unlimited
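For the missing-core-dump question above, two RHEL 6 specifics are worth checking beyond ulimit: the kernel may be piping cores to abrt (which by default skips or caps dumps for unpackaged binaries), and an interactive ulimit -c unlimited does not reach a JVM that was started by an init script or other daemon, so the limit has to be checked on the running process itself. A diagnostic sketch:

cat /proc/sys/kernel/core_pattern                      # a leading | means abrt owns core handling
pgrep -f java | head -1 | xargs -I{} grep -i core /proc/{}/limits
echo 'core.%p' > /proc/sys/kernel/core_pattern         # as root, for a quick plain-file test; revert afterwards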
46,347,750
I have the following settings in php.ini but they aren't showing up in phpinfo
I have apcu enabled - version 4.0.11 Apache version: Apache/2.4.6 (Red Hat Enterprise Linux) OpenSSL/1.0.2k-fips PHP/5.6.31 This is at the end of php.ini apc.cache_by_default=On apc.file_update_protection=2 apc.filters= apc.max_file_size=1M apc.num_files_hint=5024 apc.stat=1 apc.write_lock=On I have also tried to put it in /etc/php.d/40-apcu.ini, but it had no effect. I restarted apache after changing the files. The reason I am trying to enable these variables is because we are upgrading to another server and these were the values on the old server. If they are no longer needed or supported that is fine, but I could not find any documentation saying that. EDIT: Relevant PHP info
I have the following settings in php.ini but they aren&#39;t showing up in phpinfo I have apcu enabled - version 4.0.11 Apache version: Apache/2.4.6 (Red Hat Enterprise Linux) OpenSSL/1.0.2k-fips PHP/5.6.31 This is at the end of php.ini apc.cache_by_default=On apc.file_update_protection=2 apc.filters= apc.max_file_size=1M apc.num_files_hint=5024 apc.stat=1 apc.write_lock=On I have also tried to put it in /etc/php.d/40-apcu.ini, but it had no effect. I restarted apache after changing the files. The reason I am trying to enable these variables is because we are upgrading to another server and these were the values on the old server. If they are no longer needed or supported that is fine, but I could not find any documentation saying that. EDIT: Relevant PHP info
php, apache, redhat, ini, apc
2
1,849
2
https://stackoverflow.com/questions/46347750/i-have-the-following-settings-in-php-ini-but-they-arent-showing-up-in-phpinfo
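A probable reason the apc.* values above never show up: APCu only provides the user cache, and the opcode-cache directives carried over from the old APC extension (apc.cache_by_default, apc.stat, apc.max_file_size, apc.filters, apc.file_update_protection, apc.num_files_hint, apc.write_lock) are not registered by APCu, so PHP silently ignores them and phpinfo never lists them. A quick way to confirm which directives the loaded extension actually knows about, and which ini files each SAPI reads:

php --ini                 # CLI view; compare with the "Additional .ini files parsed" row in the Apache phpinfo()
php --ri apcu             # the directives printed here are the only apc.* settings APCu recognises
php -i | grep '^apc\.'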
45,821,749
RHEL 6 SCL script not sourcing for normal user
I'm using devtoolset-2 on a rhel 6.9 installation so I can use the gcc 4.8 version that devtoolset-2 offers. On a previous rhel 6.2 installation (on a VM) I was able to enable devtoolset-2's gcc by adding a script in /etc/profile.d/ to source devtoolset-2's enable script: $ cat /etc/profile.d/devtoolset2.sh #!/bin/bash source scl_source enable devtoolset-2 That worked great, giving me access to gcc 4.8 for any terminal window I opened. Now on this new 6.9 install (on real hardware) I've tried the same script in the same location, but it never sources. New terminal windows always default to the system's gcc 4.4. I can, however, manually source the enable script and it does work: $ gcc --version gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-18) $ source scl_source enable devtoolset-2 $ gcc --version gcc (GCC) 4.8.2 20140120 (Red Hat 4.8.2-15) After googling I tried sourcing the script with several different commands as well: . /opt/rh/devtoolset-2/enable source /opt/rh/devtoolset-2/enable ... etc. I want this setting to apply to all user's terminals, but just to be complete I tried sourcing it from both my .bashrc and .bash_profile scripts and neither worked for my user. One last thing I noticed is that if I logged in as root, instead of a normal user, the script in /etc/profile.d/ did source devtoolset-2 just fine. Any ideas why it would source automatically for root, but not any other users?
RHEL 6 SCL script not sourcing for normal user I'm using devtoolset-2 on a rhel 6.9 installation so I can use the gcc 4.8 version that devtoolset-2 offers. On a previous rhel 6.2 installation (on a VM) I was able to enable devtoolset-2's gcc by adding a script in /etc/profile.d/ to source devtoolset-2's enable script: $ cat /etc/profile.d/devtoolset2.sh #!/bin/bash source scl_source enable devtoolset-2 That worked great, giving me access to gcc 4.8 for any terminal window I opened. Now on this new 6.9 install (on real hardware) I've tried the same script in the same location, but it never sources. New terminal windows always default to the system's gcc 4.4. I can, however, manually source the enable script and it does work: $ gcc --version gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-18) $ source scl_source enable devtoolset-2 $ gcc --version gcc (GCC) 4.8.2 20140120 (Red Hat 4.8.2-15) After googling I tried sourcing the script with several different commands as well: . /opt/rh/devtoolset-2/enable source /opt/rh/devtoolset-2/enable ... etc. I want this setting to apply to all user's terminals, but just to be complete I tried sourcing it from both my .bashrc and .bash_profile scripts and neither worked for my user. One last thing I noticed is that if I logged in as root, instead of a normal user, the script in /etc/profile.d/ did source devtoolset-2 just fine. Any ideas why it would source automatically for root, but not any other users?
redhat, rhel, software-collections, rhel-scl
2
534
0
https://stackoverflow.com/questions/45821749/rhel-6-scl-script-not-sourcing-for-normal-user
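For the devtoolset sourcing question above, a few things are worth ruling out, since /etc/profile.d is only read by login shells and the scripts there are sourced rather than executed: the script and /opt/rh/devtoolset-2/enable must be readable by ordinary users, the desktop session may need a fresh login after the script was added, and tracing a login shell shows whether the file is reached at all. A debug sketch run as the normal user:

ls -l /etc/profile.d/devtoolset2.sh /opt/rh/devtoolset-2/enable
bash -lx -c 'true' 2>&1 | grep -n devtoolset     # does a login shell ever source the script?
source scl_source enable devtoolset-2 && gcc --version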
45,761,813
No rule to make target all.R, needed by compiler.rdb R 3.4.1 from source, Scientific Linux
I am trying to build R from source on Scientific Linux release 6.9 (Carbon), Linux version 2.6.32-696.3.2.el6.x86_64 (Red Hat 4.4.7-18). I load the needed modules and run: ./configure --prefix $install_dir --with-blas --with-lapack --enable-R-shlib 2>&1 | tee config-R-$version.log The "configure" command seems to run ok: R is now configured for x86_64-pc-linux-gnu Source directory: . Installation directory: /fastdata/mbp15ja/R-3.4.1 C compiler: gcc -I/usr/local/packages6/compilers/gcc/5.4.0/include Fortran 77 compiler: gfortran -g -O2 Default C++ compiler: g++ -g -O2 C++98 compiler: g++ -g -O2 C++11 compiler: g++ -std=gnu++11 -g -O2 C++14 compiler: g++ -std=gnu++14 -g -O2 C++17 compiler: Fortran 90/95 compiler: gfortran -g -O2 Obj-C compiler: Interfaces supported: X11, tcltk External libraries: readline, curl Additional capabilities: PNG, JPEG, NLS, cairo Options enabled: shared R library, shared BLAS, R profiling Capabilities skipped: TIFF, ICU Options not enabled: memory profiling Recommended packages: yes I encountered an error running "make -n" with the libtre library not generating libtre.a: rm -rf libnmath.a ar -cr libnmath.a mlutils.o d1mach.o i1mach.o fmax2.o fmin2.o fprec.o fround.o ftrunc.o sign.o fsign.o imax2.o imin2.o chebyshev.o log1p.o expm1.o lgammacor.o gammalims.o stirlerr.o bd0.o gamma.o lgamma.o gamma_cody.o beta.o lbeta.o polygamma.o cospi.o bessel_i.o bessel_j.o bessel_k.o bessel_y.o choose.o snorm.o sexp.o dgamma.o pgamma.o qgamma.o rgamma.o dbeta.o pbeta.o qbeta.o rbeta.o dunif.o punif.o qunif.o runif.o dnorm.o pnorm.o qnorm.o rnorm.o dlnorm.o plnorm.o qlnorm.o rlnorm.o df.o pf.o qf.o rf.o dnf.o dt.o pt.o qt.o rt.o dnt.o dchisq.o pchisq.o qchisq.o rchisq.o rnchisq.o dbinom.o pbinom.o qbinom.o rbinom.o rmultinom.o dcauchy.o pcauchy.o qcauchy.o rcauchy.o dexp.o pexp.o qexp.o rexp.o dgeom.o pgeom.o qgeom.o rgeom.o dhyper.o phyper.o qhyper.o rhyper.o dnbinom.o pnbinom.o qnbinom.o rnbinom.o dpois.o ppois.o qpois.o rpois.o dweibull.o pweibull.o qweibull.o rweibull.o dlogis.o plogis.o qlogis.o rlogis.o dnchisq.o pnchisq.o qnchisq.o dnbeta.o pnbeta.o qnbeta.o pnf.o pnt.o qnf.o qnt.o ptukey.o qtukey.o toms708.o wilcox.o signrank.o ranlib libnmath.a make[4]: Leaving directory /data/mbp15ja/R-3.4.1/R-3.4.1/src/nmath' make[3]: *** No rule to make target ../extra/tre/libtre.a', needed by libR.so'. Stop. 
make[3]: Leaving directory /data/mbp15ja/R-3.4.1/R-3.4.1/src/main' make[2]: *** [R] Error 2 make[2]: Leaving directory /data/mbp15ja/R-3.4.1/R-3.4.1/src/main' make[1]: *** [R] Error 1 make[1]: Leaving directory /data/mbp15ja/R-3.4.1/R-3.4.1/src' make: *** [R] Error 1 I read in here [URL] that the library isn't supposed to make libtre.a and that they assume the file they are looking for is libtre.so I therefore built the libtre and linked libtre.so from the library to src/extra/tre/libtre.a and now the make "make -n" now fails with: if test -f ./NAMESPACE; then \ /usr/bin/install -c -m 644 ./NAMESPACE ../../../library/compiler; \ fi rm -f ../../../library/compiler/Meta/nsInfo.rds if test -f DESCRIPTION; then \ if test "" != ""; then \ echo "tools:::.install_package_description('.', '../../../library/compiler', '')" | \ R_DEFAULT_PACKAGES=NULL R_ENABLE_JIT=0 ../../../bin/R --vanilla --slave > /dev/null ; \ else \ echo "tools:::.install_package_description('.', '../../../library/compiler')" | \ R_DEFAULT_PACKAGES=NULL R_ENABLE_JIT=0 ../../../bin/R --vanilla --slave > /dev/null ; \ fi; \ fi make[4]: Leaving directory /data/mbp15ja/R-3.4.1/R-3.4.1/src/library/compiler' make mklazycomp make[4]: Entering directory /data/mbp15ja/R-3.4.1/R-3.4.1/src/library/compiler' make[4]: *** No rule to make target all.R', needed by ../../../library/compiler/R/compiler.rdb'. Stop. make[4]: Leaving directory /data/mbp15ja/R-3.4.1/R-3.4.1/src/library/compiler' make[3]: *** [all] Error 2 make[3]: Leaving directory /data/mbp15ja/R-3.4.1/R-3.4.1/src/library/compiler' make[2]: *** [R] Error 1 make[2]: Leaving directory /data/mbp15ja/R-3.4.1/R-3.4.1/src/library' make[1]: *** [R] Error 1 make[1]: Leaving directory /data/mbp15ja/R-3.4.1/R-3.4.1/src' make: *** [R] Error 1 I haven't seen anything regarding all.R and ../../../library/compiler/R/compiler.rdb Thanks in advance!
No rule to make target all.R, needed by compiler.rdb R 3.4.1 from source, Scientific Linux I am trying to build R from source on Scientific Linux release 6.9 (Carbon), Linux version 2.6.32-696.3.2.el6.x86_64 (Red Hat 4.4.7-18). I load the needed modules and run: ./configure --prefix $install_dir --with-blas --with-lapack --enable-R-shlib 2>&1 | tee config-R-$version.log The "configure" command seems to run ok: R is now configured for x86_64-pc-linux-gnu Source directory: . Installation directory: /fastdata/mbp15ja/R-3.4.1 C compiler: gcc -I/usr/local/packages6/compilers/gcc/5.4.0/include Fortran 77 compiler: gfortran -g -O2 Default C++ compiler: g++ -g -O2 C++98 compiler: g++ -g -O2 C++11 compiler: g++ -std=gnu++11 -g -O2 C++14 compiler: g++ -std=gnu++14 -g -O2 C++17 compiler: Fortran 90/95 compiler: gfortran -g -O2 Obj-C compiler: Interfaces supported: X11, tcltk External libraries: readline, curl Additional capabilities: PNG, JPEG, NLS, cairo Options enabled: shared R library, shared BLAS, R profiling Capabilities skipped: TIFF, ICU Options not enabled: memory profiling Recommended packages: yes I encountered an error running "make -n" with the libtre library not generating libtre.a: rm -rf libnmath.a ar -cr libnmath.a mlutils.o d1mach.o i1mach.o fmax2.o fmin2.o fprec.o fround.o ftrunc.o sign.o fsign.o imax2.o imin2.o chebyshev.o log1p.o expm1.o lgammacor.o gammalims.o stirlerr.o bd0.o gamma.o lgamma.o gamma_cody.o beta.o lbeta.o polygamma.o cospi.o bessel_i.o bessel_j.o bessel_k.o bessel_y.o choose.o snorm.o sexp.o dgamma.o pgamma.o qgamma.o rgamma.o dbeta.o pbeta.o qbeta.o rbeta.o dunif.o punif.o qunif.o runif.o dnorm.o pnorm.o qnorm.o rnorm.o dlnorm.o plnorm.o qlnorm.o rlnorm.o df.o pf.o qf.o rf.o dnf.o dt.o pt.o qt.o rt.o dnt.o dchisq.o pchisq.o qchisq.o rchisq.o rnchisq.o dbinom.o pbinom.o qbinom.o rbinom.o rmultinom.o dcauchy.o pcauchy.o qcauchy.o rcauchy.o dexp.o pexp.o qexp.o rexp.o dgeom.o pgeom.o qgeom.o rgeom.o dhyper.o phyper.o qhyper.o rhyper.o dnbinom.o pnbinom.o qnbinom.o rnbinom.o dpois.o ppois.o qpois.o rpois.o dweibull.o pweibull.o qweibull.o rweibull.o dlogis.o plogis.o qlogis.o rlogis.o dnchisq.o pnchisq.o qnchisq.o dnbeta.o pnbeta.o qnbeta.o pnf.o pnt.o qnf.o qnt.o ptukey.o qtukey.o toms708.o wilcox.o signrank.o ranlib libnmath.a make[4]: Leaving directory /data/mbp15ja/R-3.4.1/R-3.4.1/src/nmath' make[3]: *** No rule to make target ../extra/tre/libtre.a', needed by libR.so'. Stop. 
make[3]: Leaving directory /data/mbp15ja/R-3.4.1/R-3.4.1/src/main' make[2]: *** [R] Error 2 make[2]: Leaving directory /data/mbp15ja/R-3.4.1/R-3.4.1/src/main' make[1]: *** [R] Error 1 make[1]: Leaving directory /data/mbp15ja/R-3.4.1/R-3.4.1/src' make: *** [R] Error 1 I read in here [URL] that the library isn't supposed to make libtre.a and that they assume the file they are looking for is libtre.so I therefore built the libtre and linked libtre.so from the library to src/extra/tre/libtre.a and now the make "make -n" now fails with: if test -f ./NAMESPACE; then \ /usr/bin/install -c -m 644 ./NAMESPACE ../../../library/compiler; \ fi rm -f ../../../library/compiler/Meta/nsInfo.rds if test -f DESCRIPTION; then \ if test "" != ""; then \ echo "tools:::.install_package_description('.', '../../../library/compiler', '')" | \ R_DEFAULT_PACKAGES=NULL R_ENABLE_JIT=0 ../../../bin/R --vanilla --slave > /dev/null ; \ else \ echo "tools:::.install_package_description('.', '../../../library/compiler')" | \ R_DEFAULT_PACKAGES=NULL R_ENABLE_JIT=0 ../../../bin/R --vanilla --slave > /dev/null ; \ fi; \ fi make[4]: Leaving directory /data/mbp15ja/R-3.4.1/R-3.4.1/src/library/compiler' make mklazycomp make[4]: Entering directory /data/mbp15ja/R-3.4.1/R-3.4.1/src/library/compiler' make[4]: *** No rule to make target all.R', needed by ../../../library/compiler/R/compiler.rdb'. Stop. make[4]: Leaving directory /data/mbp15ja/R-3.4.1/R-3.4.1/src/library/compiler' make[3]: *** [all] Error 2 make[3]: Leaving directory /data/mbp15ja/R-3.4.1/R-3.4.1/src/library/compiler' make[2]: *** [R] Error 1 make[2]: Leaving directory /data/mbp15ja/R-3.4.1/R-3.4.1/src/library' make[1]: *** [R] Error 1 make[1]: Leaving directory /data/mbp15ja/R-3.4.1/R-3.4.1/src' make: *** [R] Error 1 I haven't seen anything regarding all.R and ../../../library/compiler/R/compiler.rdb Thanks in advance!
r, gcc, redhat
2
127
0
https://stackoverflow.com/questions/45761813/no-rule-to-make-target-all-r-needed-by-compiler-rdb-r-3-4-1-from-source-scient
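Given the nested R-3.4.1/R-3.4.1 paths and the use of make -n (a dry run, which prints commands without building anything, so prerequisites such as libtre.a are never actually produced) in the question above, a clean build from a freshly unpacked tarball in a separate build directory is worth trying before chasing individual targets. A sketch reusing the configure flags from the question:

tar xzf R-3.4.1.tar.gz
mkdir R-build && cd R-build
../R-3.4.1/configure --prefix="$install_dir" --with-blas --with-lapack --enable-R-shlib
make          # plain make, no -n and no -j for the first pass
make check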
44,840,139
"org.apache.http.ConnectionClosedException:" Error occurred while running jmeter
I have recorded reports in my application and running scripts for JMeter 3.0 in RedHat server. But, a ConnectionClosedException has error occurred as shown below Error: org.apache.http.ConnectionClosedException: Premature end of chunk coded message body: closing chunk expected at org.apache.http.impl.io.ChunkedInputStream.getChunkSize(ChunkedInputStream.java:268) at org.apache.http.impl.io.ChunkedInputStream.nextChunk(ChunkedInputStream.java:227) at org.apache.http.impl.io.ChunkedInputStream.read(ChunkedInputStream.java:186) at java.util.zip.InflaterInputStream.fill(Unknown Source) at java.util.zip.InflaterInputStream.read(Unknown Source) at java.util.zip.GZIPInputStream.read(Unknown Source) at org.apache.http.client.entity.LazyDecompressingInputStream.read(LazyDecompressingInputStream.java:73) at org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:137) at org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:150) at org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase.readResponse(HTTPSamplerBase.java:1779) at org.apache.jmeter.protocol.http.sampler.HTTPAbstractImpl.readResponse(HTTPAbstractImpl.java:412) at org.apache.jmeter.protocol.http.sampler.HTTPHC4Impl.sample(HTTPHC4Impl.java:400) at org.apache.jmeter.protocol.http.sampler.HTTPSamplerProxy.sample(HTTPSamplerProxy.java:74) at org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase.sample(HTTPSamplerBase.java:1146) at org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase.sample(HTTPSamplerBase.java:1135) at org.apache.jmeter.threads.JMeterThread.executeSamplePackage(JMeterThread.java:465) at org.apache.jmeter.threads.JMeterThread.processSampler(JMeterThread.java:410) at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:241) at java.lang.Thread.run(Unknown Source) Error: Number format exception for input string: "" error showing in Assertion result window. Does anyone know a solution for this exception?
&quot;org.apache.http.ConnectionClosedException:&quot; Error occurred while running jmeter I have recorded reports in my application and running scripts for JMeter 3.0 in RedHat server. But, a ConnectionClosedException has error occurred as shown below Error: org.apache.http.ConnectionClosedException: Premature end of chunk coded message body: closing chunk expected at org.apache.http.impl.io.ChunkedInputStream.getChunkSize(ChunkedInputStream.java:268) at org.apache.http.impl.io.ChunkedInputStream.nextChunk(ChunkedInputStream.java:227) at org.apache.http.impl.io.ChunkedInputStream.read(ChunkedInputStream.java:186) at java.util.zip.InflaterInputStream.fill(Unknown Source) at java.util.zip.InflaterInputStream.read(Unknown Source) at java.util.zip.GZIPInputStream.read(Unknown Source) at org.apache.http.client.entity.LazyDecompressingInputStream.read(LazyDecompressingInputStream.java:73) at org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:137) at org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:150) at org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase.readResponse(HTTPSamplerBase.java:1779) at org.apache.jmeter.protocol.http.sampler.HTTPAbstractImpl.readResponse(HTTPAbstractImpl.java:412) at org.apache.jmeter.protocol.http.sampler.HTTPHC4Impl.sample(HTTPHC4Impl.java:400) at org.apache.jmeter.protocol.http.sampler.HTTPSamplerProxy.sample(HTTPSamplerProxy.java:74) at org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase.sample(HTTPSamplerBase.java:1146) at org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase.sample(HTTPSamplerBase.java:1135) at org.apache.jmeter.threads.JMeterThread.executeSamplePackage(JMeterThread.java:465) at org.apache.jmeter.threads.JMeterThread.processSampler(JMeterThread.java:410) at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:241) at java.lang.Thread.run(Unknown Source) Error: Number format exception for input string: "" error showing in Assertion result window. Does anyone know a solution for this exception?
jmeter, redhat
2
2,589
1
https://stackoverflow.com/questions/44840139/org-apache-http-connectionclosedexception-error-occurred-while-running-jmeter
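For the JMeter question above, "Premature end of chunk coded message body" means the server (or something in between) closed the connection before the chunked response finished, and the follow-on Number format exception for an empty input string is just the assertion receiving an empty body. Apart from investigating the server side under load, one knob sometimes used is the HttpClient4 retry count; the property below comes from JMeter's HTTP sampler settings, but check it against your version's documentation:

cat >> user.properties <<'EOF'
httpclient4.retrycount=1
EOF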
44,735,590
How to keep alive spring-boot application even after session timeout
I have a Linux machine hosted in a remote environment. The details of that machine are as follows: LSB Version: :core-4.1-amd64:core-4.1-noarch Distributor ID: RedHatEnterpriseServer Description: Red Hat Enterprise Linux Server release 7.2 (Maipo) Release: 7.2 Codename: Maipo I am running a Spring Boot service on that machine using mvn spring-boot:run I am using PuTTY to connect and execute the commands on that machine. My problem is keeping the service running continuously. While my Windows system is connected to the internet and my PuTTY session to the remote machine is open, the service runs fine, but as soon as my session times out the remote service stops executing. Is there any way I can keep that service alive full-time?
How to keep alive spring-boot application even after session timeout I have a Linux machine hosted in a remote environment. The details of that machine are as follows: LSB Version: :core-4.1-amd64:core-4.1-noarch Distributor ID: RedHatEnterpriseServer Description: Red Hat Enterprise Linux Server release 7.2 (Maipo) Release: 7.2 Codename: Maipo I am running a Spring Boot service on that machine using mvn spring-boot:run I am using PuTTY to connect and execute the commands on that machine. My problem is keeping the service running continuously. While my Windows system is connected to the internet and my PuTTY session to the remote machine is open, the service runs fine, but as soon as my session times out the remote service stops executing. Is there any way I can keep that service alive full-time?
linux, maven, spring-boot, redhat
2
1,604
1
https://stackoverflow.com/questions/44735590/how-to-keep-alive-spring-boot-application-even-after-session-timeout
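For the session-timeout question above, the service dies because it is attached to the PuTTY login session; detaching it (nohup) works for a quick test, and on RHEL 7.2 a systemd unit is the durable answer. A sketch, with paths, user name and jar location as placeholders:

nohup mvn spring-boot:run > app.log 2>&1 &     # quick fix: survives the SSH session closing

cat > /etc/systemd/system/myapp.service <<'EOF'
[Unit]
Description=Spring Boot service
After=network.target

[Service]
User=appuser
ExecStart=/usr/bin/java -jar /opt/myapp/myapp.jar
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload && systemctl enable myapp && systemctl start myapp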
44,704,010
Expect spawn id not open
In a short note, I work on a tool which kicks of backup scripts written in the shell which in turn uses EXPECT to connect to remote servers and execute the shell script. I get below error for either a long-running expect-jobs or when multiple jobs are being kicked off with different values of arguments. expect: spawn id exp6 not open while executing "expect "*>" { send "exit\r" }" (file "/oraadmin/ettool/upgradescripts/expct_orig_scripts/expdp_expct.sh" line 25) Though the script is coming out from the machine I execute, it still runs on the remote server. Here is the code: set SoDb [lrange $argv 4 4] set SoCl [lrange $argv 5 5] set x [lrange $argv 6 6] set THEDATE [lrange $argv 7 7 ] set timeout -1 #echo $scn_source spawn ssh -q [lindex $argv 1]@[lindex $argv 0] log_user 0 expect "yes/no" { send "yes\r" expect -re "(.*)assword:" { sleep 5; send "[lindex $argv 2]\r" } } -re "(.*)assword:" { sleep 5; send "[lindex $argv 2]\r" } expect "*>" { send "sudo su - [lindex $argv 3]\r" } sleep 5 expect -re "(.*)assword:" { sleep 5;send "[lindex $argv 2]\r" expect "*]" { send " /orashare/ettool/expdp.sh ${SoDb} ${SoCl} $x $THEDATE\r" } } "*]" { send " /orashare/ettool/expdp.sh ${SoDb} ${SoCl} $x $THEDATE\r" } log_user 1 expect "*]" { send "exit\r" } expect "*>" { send "exit\r" } expect eof I couldn't find an exact situation in any of the stack overflow threads, please do assist.
Expect spawn id not open In a short note, I work on a tool which kicks of backup scripts written in the shell which in turn uses EXPECT to connect to remote servers and execute the shell script. I get below error for either a long-running expect-jobs or when multiple jobs are being kicked off with different values of arguments. expect: spawn id exp6 not open while executing "expect "*>" { send "exit\r" }" (file "/oraadmin/ettool/upgradescripts/expct_orig_scripts/expdp_expct.sh" line 25) Though the script is coming out from the machine I execute, it still runs on the remote server. Here is the code: set SoDb [lrange $argv 4 4] set SoCl [lrange $argv 5 5] set x [lrange $argv 6 6] set THEDATE [lrange $argv 7 7 ] set timeout -1 #echo $scn_source spawn ssh -q [lindex $argv 1]@[lindex $argv 0] log_user 0 expect "yes/no" { send "yes\r" expect -re "(.*)assword:" { sleep 5; send "[lindex $argv 2]\r" } } -re "(.*)assword:" { sleep 5; send "[lindex $argv 2]\r" } expect "*>" { send "sudo su - [lindex $argv 3]\r" } sleep 5 expect -re "(.*)assword:" { sleep 5;send "[lindex $argv 2]\r" expect "*]" { send " /orashare/ettool/expdp.sh ${SoDb} ${SoCl} $x $THEDATE\r" } } "*]" { send " /orashare/ettool/expdp.sh ${SoDb} ${SoCl} $x $THEDATE\r" } log_user 1 expect "*]" { send "exit\r" } expect "*>" { send "exit\r" } expect eof I couldn't find an exact situation in any of the stack overflow threads, please do assist.
linux, shell, unix, redhat, expect
2
1,736
0
https://stackoverflow.com/questions/44704010/expect-spawn-id-not-open
44,643,636
How to install netbeans in redhat 7
I have installed RedHat 7 on VMware. I can install and access NetBeans 8.1 in it using the Xen software, but when I log in via remote desktop I can't access it: the full window is shown as a black page. Here is the screenshot. I have logged in to the RedHat server via remote desktop and I can't get NetBeans working there; when I try, the screen shown above appears. I have installed the Eclipse software in it and Eclipse runs fine there. The problem is solved if I use the XenCenter software, but I have to use the RedHat PC via remote desktop. How can I solve this problem? Please help me.
How to install netbeans in redhat 7 I have installed RedHat 7 on VMware. I can install and access NetBeans 8.1 in it using the Xen software, but when I log in via remote desktop I can't access it: the full window is shown as a black page. Here is the screenshot. I have logged in to the RedHat server via remote desktop and I can't get NetBeans working there; when I try, the screen shown above appears. I have installed the Eclipse software in it and Eclipse runs fine there. The problem is solved if I use the XenCenter software, but I have to use the RedHat PC via remote desktop. How can I solve this problem? Please help me.
java, netbeans, redhat, netbeans-8
2
161
0
https://stackoverflow.com/questions/44643636/how-to-install-netbeans-in-redhat-7
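Black or blank Swing windows over xrdp/VNC-style remote desktops, as in the NetBeans question above, are often a Java2D pipeline issue rather than an install problem; launching NetBeans with the XRender and OpenGL pipelines disabled is a low-risk thing to try (the install path below is a placeholder):

/usr/local/netbeans-8.1/bin/netbeans -J-Dsun.java2d.xrender=false -J-Dsun.java2d.opengl=false

The same -J options can be made permanent via netbeans_default_options in etc/netbeans.conf inside the NetBeans install directory.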
44,548,661
Pip install significantly slower on Python 3.5 versus 2.7 (RHEL)
I'm currently running Red Hat 7.3 and installed Python 3.5 from the SCL (www.softwarecollections.org/en/scls/rhscl/rh-python35/). When I attempt to pip install C intensive packages such as numpy and pandas, the install process on Python 3.5 is taking significantly longer than when I attempt to install the same packages in the native Python 2.7 installation (6 minutes per package versus ~10 seconds). I have some automated processes that are building and rebuilding virtual environments on a frequent basis, so this is having a huge impact on the overall performance. Does anyone know why these installations are taking significantly longer in Python 3.5? Any help would be greatly appreciated. Here's a snippet of the 'pip install numpy -v' on both versions. The obvious thing that jumps out at me is the GCC building that occurs in 3.5 and not in 2.7 but I'm not sure why... Native Python 2.7: Looking up "[URL] in the cache Current age based on date: 5291 Freshness lifetime from max-age: 31557600 The response is "fresh", returning cached response 31557600 > 5291 Using cached numpy-1.13.0-cp27-cp27mu-manylinux1_x86_64.whl Downloading from URL [URL] (from [URL] Installing collected packages: numpy Successfully installed numpy-1.13.0 Cleaning up... SCL Python 3.5: ... LOOP_BLOCKED(@type@, 64) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_binary_scalar2_equal_DOUBLE’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:727:5: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 64) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_binary_not_equal_DOUBLE’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:675:5: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 64) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_binary_scalar1_not_equal_DOUBLE’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:703:5: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 64) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_binary_scalar2_not_equal_DOUBLE’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:727:5: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 64) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_binary_less_DOUBLE’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:675:5: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 64) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_binary_scalar1_less_DOUBLE’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:703:5: note: in expansion of macro 
‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 64) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_binary_scalar2_less_DOUBLE’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:727:5: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 64) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_binary_less_equal_DOUBLE’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:675:5: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 64) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_binary_scalar1_less_equal_DOUBLE’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:703:5: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 64) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_binary_scalar2_less_equal_DOUBLE’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:727:5: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 64) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_binary_greater_DOUBLE’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:675:5: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 64) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_binary_scalar1_greater_DOUBLE’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:703:5: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 64) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_binary_scalar2_greater_DOUBLE’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:727:5: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 64) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_binary_greater_equal_DOUBLE’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:675:5: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 64) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_binary_scalar1_greater_equal_DOUBLE’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:703:5: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 64) { ^ numpy/core/src/umath/simd.inc.src: In function 
‘sse2_binary_scalar2_greater_equal_DOUBLE’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:727:5: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 64) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_sqrt_DOUBLE’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:753:9: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 16) { ^ numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:759:9: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 16) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_absolute_DOUBLE’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:804:9: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 16) { ^ numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:810:9: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 16) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_negative_DOUBLE’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:804:9: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 16) { ^ numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:810:9: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 16) { ^ In file included from numpy/core/src/umath/loops.c.src:39:0: numpy/core/src/umath/simd.inc.src: In function ‘sse2_maximum_DOUBLE’: numpy/core/src/umath/simd.inc.src:836:24: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] if (i + 3 * stride <= n) { ^ In file included from numpy/core/src/umath/loops.c.src:39:0: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:844:9: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 32) { ^ In file included from numpy/core/src/umath/loops.c.src:39:0: numpy/core/src/umath/simd.inc.src: In function ‘sse2_minimum_DOUBLE’: numpy/core/src/umath/simd.inc.src:836:24: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] if (i + 3 * stride <= n) { ^ In file included from numpy/core/src/umath/loops.c.src:39:0: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ 
numpy/core/src/umath/simd.inc.src:844:9: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 32) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_binary_logical_or_BOOL’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:910:5: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 16) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_reduce_logical_or_BOOL’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:942:5: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(npy_bool, 32) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_binary_logical_and_BOOL’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:910:5: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 16) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_reduce_logical_and_BOOL’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:942:5: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(npy_bool, 32) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_absolute_BOOL’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:984:5: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 16) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_logical_not_BOOL’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:984:5: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 16) { ^ numpy/core/src/umath/loops.c.src: In function ‘pairwise_sum_FLOAT’: numpy/core/src/umath/loops.c.src:1635:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (i = 0; i < n; i++) { ^ numpy/core/src/umath/loops.c.src:1658:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (i = 8; i < n - (n % 8); i += 8) { ^ numpy/core/src/umath/loops.c.src:1676:18: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (; i < n; i++) { ^ numpy/core/src/umath/loops.c.src: In function ‘pairwise_sum_DOUBLE’: numpy/core/src/umath/loops.c.src:1635:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (i = 0; i < n; i++) { ^ numpy/core/src/umath/loops.c.src:1658:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (i = 8; i < n - (n % 8); i += 8) { ^ numpy/core/src/umath/loops.c.src:1676:18: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (; i < n; i++) { ^ numpy/core/src/umath/loops.c.src: In function ‘pairwise_sum_LONGDOUBLE’: 
numpy/core/src/umath/loops.c.src:1635:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (i = 0; i < n; i++) { ^ numpy/core/src/umath/loops.c.src:1658:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (i = 8; i < n - (n % 8); i += 8) { ^ numpy/core/src/umath/loops.c.src:1676:18: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (; i < n; i++) { ^ numpy/core/src/umath/loops.c.src: In function ‘pairwise_sum_HALF’: numpy/core/src/umath/loops.c.src:1635:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (i = 0; i < n; i++) { ^ numpy/core/src/umath/loops.c.src:1658:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (i = 8; i < n - (n % 8); i += 8) { ^ numpy/core/src/umath/loops.c.src:1676:18: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (; i < n; i++) { ^ numpy/core/src/umath/loops.c.src: In function ‘pairwise_sum_CFLOAT’: numpy/core/src/umath/loops.c.src:2410:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (i = 0; i < n; i += 2) { ^ numpy/core/src/umath/loops.c.src:2434:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (i = 8; i < n - (n % 8); i += 8) { ^ numpy/core/src/umath/loops.c.src:2452:18: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (; i < n; i+=2) { ^ numpy/core/src/umath/loops.c.src: In function ‘pairwise_sum_CDOUBLE’: numpy/core/src/umath/loops.c.src:2410:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (i = 0; i < n; i += 2) { ^ numpy/core/src/umath/loops.c.src:2434:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (i = 8; i < n - (n % 8); i += 8) { ^ numpy/core/src/umath/loops.c.src:2452:18: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (; i < n; i+=2) { ^ numpy/core/src/umath/loops.c.src: In function ‘pairwise_sum_CLONGDOUBLE’: numpy/core/src/umath/loops.c.src:2410:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (i = 0; i < n; i += 2) { ^ numpy/core/src/umath/loops.c.src:2434:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (i = 8; i < n - (n % 8); i += 8) { ^ numpy/core/src/umath/loops.c.src:2452:18: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (; i < n; i+=2) { ^ gcc: numpy/core/src/umath/umathmodule.c gcc: numpy/core/src/umath/reduction.c gcc: numpy/core/src/private/mem_overlap.c numpy/core/src/private/mem_overlap.c: In function ‘diophantine_dfs’: numpy/core/src/private/mem_overlap.c:420:31: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (j = 0; j < n; ++j) { ^ numpy/core/src/private/mem_overlap.c: In function ‘strides_to_terms’: numpy/core/src/private/mem_overlap.c:715:19: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (i = 0; i < PyArray_NDIM(arr); ++i) { ^ numpy/core/src/private/mem_overlap.c: In function ‘solve_may_share_memory’: numpy/core/src/private/mem_overlap.c:801:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] if (rhs != (npy_uintp)rhs) { ^ numpy/core/src/private/mem_overlap.c: In function 
‘solve_may_have_internal_overlap’: numpy/core/src/private/mem_overlap.c:890:19: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (j = 0; j < nterms; ++j) { ^ numpy/core/src/private/mem_overlap.c:908:19: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (j = 0; j < nterms; ++j) { ^ gcc: numpy/core/src/umath/ufunc_type_resolution.c gcc: numpy/core/src/umath/ufunc_object.c numpy/core/src/umath/ufunc_object.c: In function ‘PyUFunc_GenericReduction’: numpy/core/src/umath/ufunc_object.c:3897:15: warning: unused variable ‘out_obj’ [-Wunused-variable] PyObject *out_obj = NULL; ^ gcc: numpy/core/src/private/ufunc_override.c gcc -pthread -shared -L/opt/rh/rh-python35/root/usr/lib64-Wl,-z,relro build/temp.linux-x86_64-3.5/numpy/core/src/umath/umathmodule.o build/temp.linux-x86_64-3.5/numpy/core/src/umath/reduction.o build/temp.linux-x86_64-3.5/build/src.linux-x86_64-3.5/numpy/core/src/umath/loops.o build/temp.linux-x86_64-3.5/numpy/core/src/umath/ufunc_object.o build/temp.linux-x86_64-3.5/build/src.linux-x86_64-3.5/numpy/core/src/umath/scalarmath.o build/temp.linux-x86_64-3.5/numpy/core/src/umath/ufunc_type_resolution.o build/temp.linux-x86_64-3.5/numpy/core/src/umath/override.o build/temp.linux-x86_64-3.5/numpy/core/src/private/mem_overlap.o build/temp.linux-x86_64-3.5/numpy/core/src/private/ufunc_override.o -L/opt/rh/rh-python35/root/usr/lib64 -Lbuild/temp.linux-x86_64-3.5 -lnpymath -lm -lpython3.5m -o build/lib.linux-x86_64-3.5/numpy/core/umath.cpython-35m-x86_64-linux-gnu.so building 'numpy.core.umath_tests' extension compiling C sources C compiler: gcc -pthread -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -I/opt/rh/rh-python35/root/usr/include -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC compile options: '-DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy/core/include -Ibuild/src.linux-x86_64-3.5/numpy/core/include/numpy -Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/opt/rh/rh-python35/root/usr/include/python3.5m -Ibuild/src.linux-x86_64-3.5/numpy/core/src/private -Ibuild/src.linux-x86_64-3.5/numpy/core/src/npymath -Ibuild/src.linux-x86_64-3.5/numpy/core/src/private -Ibuild/src.linux-x86_64-3.5/numpy/core/src/npymath -Ibuild/src.linux-x86_64-3.5/numpy/core/src/private -Ibuild/src.linux-x86_64-3.5/numpy/core/src/npymath -c' gcc: build/src.linux-x86_64-3.5/numpy/core/src/umath/umath_tests.c gcc -pthread -shared -L/opt/rh/rh-python35/root/usr/lib64-Wl,-z,relro build/temp.linux-x86_64-3.5/build/src.linux-x86_64-3.5/numpy/core/src/umath/umath_tests.o -L/opt/rh/rh-python35/root/usr/lib64 -Lbuild/temp.linux-x86_64-3.5 -lpython3.5m -o build/lib.linux-x86_64-3.5/numpy/core/umath_tests.cpython-35m-x86_64-linux-gnu.so building 'numpy.core.test_rational' extension compiling C sources C compiler: gcc -pthread -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic 
-D_GNU_SOURCE -fPIC -fwrapv -I/opt/rh/rh-python35/root/usr/include -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC compile options: '-DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy/core/include -Ibuild/src.linux-x86_64-3.5/numpy/core/include/numpy -Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/opt/rh/rh-python35/root/usr/include/python3.5m -Ibuild/src.linux-x86_64-3.5/numpy/core/src/private -Ibuild/src.linux-x86_64-3.5/numpy/core/src/npymath -Ibuild/src.linux-x86_64-3.5/numpy/core/src/private -Ibuild/src.linux-x86_64-3.5/numpy/core/src/npymath -Ibuild/src.linux-x86_64-3.5/numpy/core/src/private -Ibuild/src.linux-x86_64-3.5/numpy/core/src/npymath -c' gcc: build/src.linux-x86_64-3.5/numpy/core/src/umath/test_rational.c gcc -pthread -shared -L/opt/rh/rh-python35/root/usr/lib64-Wl,-z,relro build/temp.linux-x86_64-3.5/build/src.linux-x86_64-3.5/numpy/core/src/umath/test_rational.o -L/opt/rh/rh-python35/root/usr/lib64 -Lbuild/temp.linux-x86_64-3.5 -lpython3.5m -o build/lib.linux-x86_64-3.5/numpy/core/test_rational.cpython-35m-x86_64-linux-gnu.so building 'numpy.core.struct_ufunc_test' extension compiling C sources C compiler: gcc -pthread -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -I/opt/rh/rh-python35/root/usr/include -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC ... Removing source in /tmp/pip-build-12s9oqxb/numpy Successfully installed numpy-1.13.0 Cleaning up...
Pip install significantly slower on Python 3.5 versus 2.7 (RHEL) I'm currently running Red Hat 7.3 and installed Python 3.5 from the SCL (www.softwarecollections.org/en/scls/rhscl/rh-python35/). When I attempt to pip install C intensive packages such as numpy and pandas, the install process on Python 3.5 is taking significantly longer than when I attempt to install the same packages in the native Python 2.7 installation (6 minutes per package versus ~10 seconds). I have some automated processes that are building and rebuilding virtual environments on a frequent basis, so this is having a huge impact on the overall performance. Does anyone know why these installations are taking significantly longer in Python 3.5? Any help would be greatly appreciated. Here's a snippet of the 'pip install numpy -v' on both versions. The obvious thing that jumps out at me is the GCC building that occurs in 3.5 and not in 2.7 but I'm not sure why... Native Python 2.7: Looking up "[URL] in the cache Current age based on date: 5291 Freshness lifetime from max-age: 31557600 The response is "fresh", returning cached response 31557600 > 5291 Using cached numpy-1.13.0-cp27-cp27mu-manylinux1_x86_64.whl Downloading from URL [URL] (from [URL] Installing collected packages: numpy Successfully installed numpy-1.13.0 Cleaning up... SCL Python 3.5: ... LOOP_BLOCKED(@type@, 64) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_binary_scalar2_equal_DOUBLE’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:727:5: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 64) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_binary_not_equal_DOUBLE’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:675:5: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 64) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_binary_scalar1_not_equal_DOUBLE’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:703:5: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 64) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_binary_scalar2_not_equal_DOUBLE’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:727:5: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 64) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_binary_less_DOUBLE’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:675:5: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 64) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_binary_scalar1_less_DOUBLE’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ 
numpy/core/src/umath/simd.inc.src:703:5: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 64) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_binary_scalar2_less_DOUBLE’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:727:5: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 64) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_binary_less_equal_DOUBLE’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:675:5: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 64) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_binary_scalar1_less_equal_DOUBLE’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:703:5: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 64) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_binary_scalar2_less_equal_DOUBLE’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:727:5: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 64) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_binary_greater_DOUBLE’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:675:5: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 64) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_binary_scalar1_greater_DOUBLE’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:703:5: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 64) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_binary_scalar2_greater_DOUBLE’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:727:5: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 64) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_binary_greater_equal_DOUBLE’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:675:5: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 64) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_binary_scalar1_greater_equal_DOUBLE’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:703:5: note: in expansion of macro ‘LOOP_BLOCKED’ 
LOOP_BLOCKED(@type@, 64) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_binary_scalar2_greater_equal_DOUBLE’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:727:5: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 64) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_sqrt_DOUBLE’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:753:9: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 16) { ^ numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:759:9: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 16) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_absolute_DOUBLE’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:804:9: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 16) { ^ numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:810:9: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 16) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_negative_DOUBLE’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:804:9: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 16) { ^ numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:810:9: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 16) { ^ In file included from numpy/core/src/umath/loops.c.src:39:0: numpy/core/src/umath/simd.inc.src: In function ‘sse2_maximum_DOUBLE’: numpy/core/src/umath/simd.inc.src:836:24: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] if (i + 3 * stride <= n) { ^ In file included from numpy/core/src/umath/loops.c.src:39:0: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:844:9: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 32) { ^ In file included from numpy/core/src/umath/loops.c.src:39:0: numpy/core/src/umath/simd.inc.src: In function ‘sse2_minimum_DOUBLE’: numpy/core/src/umath/simd.inc.src:836:24: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] if (i + 3 * stride <= n) { ^ In file included from numpy/core/src/umath/loops.c.src:39:0: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] 
for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:844:9: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 32) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_binary_logical_or_BOOL’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:910:5: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 16) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_reduce_logical_or_BOOL’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:942:5: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(npy_bool, 32) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_binary_logical_and_BOOL’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:910:5: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 16) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_reduce_logical_and_BOOL’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:942:5: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(npy_bool, 32) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_absolute_BOOL’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:984:5: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 16) { ^ numpy/core/src/umath/simd.inc.src: In function ‘sse2_logical_not_BOOL’: numpy/core/src/umath/simd.inc.src:107:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for(; i < npy_blocked_end(peel, sizeof(type), vsize, n);\ ^ numpy/core/src/umath/simd.inc.src:984:5: note: in expansion of macro ‘LOOP_BLOCKED’ LOOP_BLOCKED(@type@, 16) { ^ numpy/core/src/umath/loops.c.src: In function ‘pairwise_sum_FLOAT’: numpy/core/src/umath/loops.c.src:1635:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (i = 0; i < n; i++) { ^ numpy/core/src/umath/loops.c.src:1658:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (i = 8; i < n - (n % 8); i += 8) { ^ numpy/core/src/umath/loops.c.src:1676:18: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (; i < n; i++) { ^ numpy/core/src/umath/loops.c.src: In function ‘pairwise_sum_DOUBLE’: numpy/core/src/umath/loops.c.src:1635:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (i = 0; i < n; i++) { ^ numpy/core/src/umath/loops.c.src:1658:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (i = 8; i < n - (n % 8); i += 8) { ^ numpy/core/src/umath/loops.c.src:1676:18: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (; i < n; i++) { ^ numpy/core/src/umath/loops.c.src: In 
function ‘pairwise_sum_LONGDOUBLE’: numpy/core/src/umath/loops.c.src:1635:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (i = 0; i < n; i++) { ^ numpy/core/src/umath/loops.c.src:1658:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (i = 8; i < n - (n % 8); i += 8) { ^ numpy/core/src/umath/loops.c.src:1676:18: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (; i < n; i++) { ^ numpy/core/src/umath/loops.c.src: In function ‘pairwise_sum_HALF’: numpy/core/src/umath/loops.c.src:1635:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (i = 0; i < n; i++) { ^ numpy/core/src/umath/loops.c.src:1658:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (i = 8; i < n - (n % 8); i += 8) { ^ numpy/core/src/umath/loops.c.src:1676:18: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (; i < n; i++) { ^ numpy/core/src/umath/loops.c.src: In function ‘pairwise_sum_CFLOAT’: numpy/core/src/umath/loops.c.src:2410:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (i = 0; i < n; i += 2) { ^ numpy/core/src/umath/loops.c.src:2434:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (i = 8; i < n - (n % 8); i += 8) { ^ numpy/core/src/umath/loops.c.src:2452:18: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (; i < n; i+=2) { ^ numpy/core/src/umath/loops.c.src: In function ‘pairwise_sum_CDOUBLE’: numpy/core/src/umath/loops.c.src:2410:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (i = 0; i < n; i += 2) { ^ numpy/core/src/umath/loops.c.src:2434:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (i = 8; i < n - (n % 8); i += 8) { ^ numpy/core/src/umath/loops.c.src:2452:18: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (; i < n; i+=2) { ^ numpy/core/src/umath/loops.c.src: In function ‘pairwise_sum_CLONGDOUBLE’: numpy/core/src/umath/loops.c.src:2410:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (i = 0; i < n; i += 2) { ^ numpy/core/src/umath/loops.c.src:2434:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (i = 8; i < n - (n % 8); i += 8) { ^ numpy/core/src/umath/loops.c.src:2452:18: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (; i < n; i+=2) { ^ gcc: numpy/core/src/umath/umathmodule.c gcc: numpy/core/src/umath/reduction.c gcc: numpy/core/src/private/mem_overlap.c numpy/core/src/private/mem_overlap.c: In function ‘diophantine_dfs’: numpy/core/src/private/mem_overlap.c:420:31: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (j = 0; j < n; ++j) { ^ numpy/core/src/private/mem_overlap.c: In function ‘strides_to_terms’: numpy/core/src/private/mem_overlap.c:715:19: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (i = 0; i < PyArray_NDIM(arr); ++i) { ^ numpy/core/src/private/mem_overlap.c: In function ‘solve_may_share_memory’: numpy/core/src/private/mem_overlap.c:801:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] if (rhs != (npy_uintp)rhs) { ^ 
numpy/core/src/private/mem_overlap.c: In function ‘solve_may_have_internal_overlap’: numpy/core/src/private/mem_overlap.c:890:19: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (j = 0; j < nterms; ++j) { ^ numpy/core/src/private/mem_overlap.c:908:19: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (j = 0; j < nterms; ++j) { ^ gcc: numpy/core/src/umath/ufunc_type_resolution.c gcc: numpy/core/src/umath/ufunc_object.c numpy/core/src/umath/ufunc_object.c: In function ‘PyUFunc_GenericReduction’: numpy/core/src/umath/ufunc_object.c:3897:15: warning: unused variable ‘out_obj’ [-Wunused-variable] PyObject *out_obj = NULL; ^ gcc: numpy/core/src/private/ufunc_override.c gcc -pthread -shared -L/opt/rh/rh-python35/root/usr/lib64-Wl,-z,relro build/temp.linux-x86_64-3.5/numpy/core/src/umath/umathmodule.o build/temp.linux-x86_64-3.5/numpy/core/src/umath/reduction.o build/temp.linux-x86_64-3.5/build/src.linux-x86_64-3.5/numpy/core/src/umath/loops.o build/temp.linux-x86_64-3.5/numpy/core/src/umath/ufunc_object.o build/temp.linux-x86_64-3.5/build/src.linux-x86_64-3.5/numpy/core/src/umath/scalarmath.o build/temp.linux-x86_64-3.5/numpy/core/src/umath/ufunc_type_resolution.o build/temp.linux-x86_64-3.5/numpy/core/src/umath/override.o build/temp.linux-x86_64-3.5/numpy/core/src/private/mem_overlap.o build/temp.linux-x86_64-3.5/numpy/core/src/private/ufunc_override.o -L/opt/rh/rh-python35/root/usr/lib64 -Lbuild/temp.linux-x86_64-3.5 -lnpymath -lm -lpython3.5m -o build/lib.linux-x86_64-3.5/numpy/core/umath.cpython-35m-x86_64-linux-gnu.so building 'numpy.core.umath_tests' extension compiling C sources C compiler: gcc -pthread -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -I/opt/rh/rh-python35/root/usr/include -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC compile options: '-DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy/core/include -Ibuild/src.linux-x86_64-3.5/numpy/core/include/numpy -Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/opt/rh/rh-python35/root/usr/include/python3.5m -Ibuild/src.linux-x86_64-3.5/numpy/core/src/private -Ibuild/src.linux-x86_64-3.5/numpy/core/src/npymath -Ibuild/src.linux-x86_64-3.5/numpy/core/src/private -Ibuild/src.linux-x86_64-3.5/numpy/core/src/npymath -Ibuild/src.linux-x86_64-3.5/numpy/core/src/private -Ibuild/src.linux-x86_64-3.5/numpy/core/src/npymath -c' gcc: build/src.linux-x86_64-3.5/numpy/core/src/umath/umath_tests.c gcc -pthread -shared -L/opt/rh/rh-python35/root/usr/lib64-Wl,-z,relro build/temp.linux-x86_64-3.5/build/src.linux-x86_64-3.5/numpy/core/src/umath/umath_tests.o -L/opt/rh/rh-python35/root/usr/lib64 -Lbuild/temp.linux-x86_64-3.5 -lpython3.5m -o build/lib.linux-x86_64-3.5/numpy/core/umath_tests.cpython-35m-x86_64-linux-gnu.so building 'numpy.core.test_rational' extension compiling C sources C compiler: gcc -pthread -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 
-grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -I/opt/rh/rh-python35/root/usr/include -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC compile options: '-DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy/core/include -Ibuild/src.linux-x86_64-3.5/numpy/core/include/numpy -Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/opt/rh/rh-python35/root/usr/include/python3.5m -Ibuild/src.linux-x86_64-3.5/numpy/core/src/private -Ibuild/src.linux-x86_64-3.5/numpy/core/src/npymath -Ibuild/src.linux-x86_64-3.5/numpy/core/src/private -Ibuild/src.linux-x86_64-3.5/numpy/core/src/npymath -Ibuild/src.linux-x86_64-3.5/numpy/core/src/private -Ibuild/src.linux-x86_64-3.5/numpy/core/src/npymath -c' gcc: build/src.linux-x86_64-3.5/numpy/core/src/umath/test_rational.c gcc -pthread -shared -L/opt/rh/rh-python35/root/usr/lib64-Wl,-z,relro build/temp.linux-x86_64-3.5/build/src.linux-x86_64-3.5/numpy/core/src/umath/test_rational.o -L/opt/rh/rh-python35/root/usr/lib64 -Lbuild/temp.linux-x86_64-3.5 -lpython3.5m -o build/lib.linux-x86_64-3.5/numpy/core/test_rational.cpython-35m-x86_64-linux-gnu.so building 'numpy.core.struct_ufunc_test' extension compiling C sources C compiler: gcc -pthread -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -I/opt/rh/rh-python35/root/usr/include -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC ... Removing source in /tmp/pip-build-12s9oqxb/numpy Successfully installed numpy-1.13.0 Cleaning up...
python, python-2.7, python-3.x, numpy, redhat
2
315
1
https://stackoverflow.com/questions/44548661/pip-install-significantly-slower-on-python-3-5-versus-2-7-rhel
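A plausible explanation for the difference in the question above: for the native Python 2.7, pip downloads a prebuilt manylinux wheel (visible in the 2.7 log as numpy-1.13.0-cp27-cp27mu-manylinux1_x86_64.whl), while the pip shipped with the SCL Python 3.5 ends up compiling numpy from source, which is all the gcc output shown. One common workaround is to pay the compile cost once and reuse the resulting wheels in every rebuilt virtualenv. The sketch below is only an illustration: the wheelhouse path and package list are made up, and it assumes `pip` on PATH belongs to the SCL Python 3.5 interpreter.

```python
"""Build heavy wheels once, then install them offline in each new virtualenv.

Sketch only: /opt/wheelhouse and the package list are hypothetical, and
`pip` on PATH is assumed to be the SCL Python 3.5 one.
"""
import subprocess

WHEELHOUSE = "/opt/wheelhouse"      # hypothetical local wheel cache
PACKAGES = ["numpy", "pandas"]      # the slow, C-intensive packages


def build_wheels():
    # Pays the compile cost exactly once; the .whl files land in WHEELHOUSE.
    subprocess.check_call(["pip", "wheel", "--wheel-dir", WHEELHOUSE] + PACKAGES)


def install_from_wheelhouse():
    # Run inside each freshly created virtualenv: no network, no compiler.
    subprocess.check_call(
        ["pip", "install", "--no-index", "--find-links", WHEELHOUSE] + PACKAGES)


if __name__ == "__main__":
    build_wheels()
    install_from_wheelhouse()
```

Upgrading pip inside the Python 3.5 environments may also help on its own, since manylinux1 wheel support only arrived in pip 8.1.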
44,429,454
Drools: how to get the facts and globals in the knowledge base after a REST call to fireAllRules?
I'm working on a project with Red Hat Drools deployed on a WAS 8.5.9 on windows and I'm trying to figure out how to get the facts and the globals in the knowledge base after I fire the rules in my session with a REST call. The REST API I'm using is: [POST] [URL] Where "TargaRuleContainer" is the id of my container. Here is the code of my application: kmodule.xml <?xml version="1.0" encoding="UTF-8"?> <kmodule xmlns="[URL] <kbase name="targa" packages="com.sample"> <ksession name="targaSession" type="stateless"> </ksession> </kbase> </kmodule> Rule file: package com.sample import com.sample.targaRule.model.Targa; import static com.sample.targaRule.util.Utils.targaPari; global java.lang.String city rule "Targa default" salience -10 activation-group "targaActivation" ruleflow-group "targa" when then city = "No city"; System.out.println("No City!"); end rule "Targa OK Milano" salience 10 activation-group "targaActivation" ruleflow-group "targa" when t : Targa( targaPari(targa) ) then t.setCity("Milano"); city = "Milano"; System.out.println("Milano!!"); update(t) end rule "Targa NO Milano" salience 10 activation-group "targaActivation" ruleflow-group "targa" when t : Targa( !targaPari(targa) ) then t.setCity("Roma"); city = "Roma"; System.out.println("Roma!!"); update(t) end POJO class: package com.sample.targaRule.model; import java.io.Serializable; public class Targa implements Serializable{ private static final long serialVersionUID = 1L; private String nome, cognome, targa, city; public Targa() { super(); } public Targa(String nome, String cognome, String targa) { this.nome = nome; this.cognome = cognome; this.targa = targa; this.city = ""; } public Targa(String nome, String cognome, String targa, String city) { this.nome = nome; this.cognome = cognome; this.targa = targa; this.city = city; } public String getCity() { return city; } public void setCity(String city) { this.city = city; } public String getNome() { return nome; } public void setNome(String nome) { this.nome = nome; } public String getCognome() { return cognome; } public void setCognome(String cognome) { this.cognome = cognome; } public String getTarga() { return targa; } public void setTarga(String targa) { this.targa = targa; } } Utils class: package com.sample.targaRule.util; public class Utils { public static boolean targaPari(String t) { int lastNum = Integer.parseInt(t.substring(4,5)); if (lastNum%2==0) { System.out.println("Checking plate "+t+": last number is "+lastNum+", so even"); return true; } System.out.println("Checking plate "+t+": last number is "+lastNum+", so odd"); return false; } } Here is the body of my request: { "lookup": "targaSession", "commands": [{ "insert": { "object": { "com.sample.targaRule.model.Targa": { "nome": "Donald", "cognome": "Trump", "targa": "ED222ED", "city": "" } }, "out-identifier": "Input" } }, { "fire-all-rules": { "out-identifier": "FireAllRules" } }, { "get-objects": { "out-identifier": "Output" } }, { "get-global": { "out-identifier": "Global", "identifier": "city" } } ] } Here is the response from the server: { "type" : "SUCCESS", "msg" : "Container TargaRuleContainer successfully called.", "result" : { "execution-results" : { "results" : [ { "key" : "Input", "value" : {"com.sample.targaRule.model.Targa":{ "nome" : "Donald", "cognome" : "Trump", "targa" : "ED222ED", "city" : "" }} }, { "key" : "Output", "value" : [{"com.sample.targaRule.model.Targa":{ "nome" : "Donald", "cognome" : "Trump", "targa" : "ED222ED", "city" : "" }}] }, { "key" : "FireAllRules", "value" : 0 }, { "key" 
: "Global" } ], "facts" : [ { "key" : "Input", "value" : {"org.drools.core.common.DefaultFactHandle":{ "external-form" : "0:1:-1324533475:-1324533475:1:DEFAULT:NON_TRAIT:com.sample.targaRule.model.Targa" }} } ] } } } The rule just does a check on the "targa" string field of the pojo class (checks if the third-to-last char is an odd or even number), and sets the other string field "city" accordingly. It also sets a string global variable named "city", and does a System.out.println(). As you can see in the response, the call is successful, but I cannot see the edits in the object I insert, neither I can see the global variable. Moreover, the System.out.println() that is in the first 2 rules consequences is not shown in the server logs, I can just see the print that is done in the Utls class static method I call in the rules conditions. I'm quite sure I am doing something wrong in the request syntax, but I'm having a hard time finding some example online, as the Red Hat documentation is not that verbose. EDIT1: I figured out the problem with the output of the POJO field "city". The problem was the "ruleflow-group" parameter set in the rules body (I need that cause I'm using the same rules within a jbpm process too). I solved by inserting the "auto-focus" parameter in the rules. For what concerns the global variable, doing some reverse engineering I found out that (probably) globals are admitted only in stateful sessions, so I changed my kmodule.xml accordingly, but still I cannot get the global variable "city" in output.
Drools: how to get the facts and globals in the knowledge base after a REST call to fireAllRules? I'm working on a project with Red Hat Drools deployed on a WAS 8.5.9 on windows and I'm trying to figure out how to get the facts and the globals in the knowledge base after I fire the rules in my session with a REST call. The REST API I'm using is: [POST] [URL] Where "TargaRuleContainer" is the id of my container. Here is the code of my application: kmodule.xml <?xml version="1.0" encoding="UTF-8"?> <kmodule xmlns="[URL] <kbase name="targa" packages="com.sample"> <ksession name="targaSession" type="stateless"> </ksession> </kbase> </kmodule> Rule file: package com.sample import com.sample.targaRule.model.Targa; import static com.sample.targaRule.util.Utils.targaPari; global java.lang.String city rule "Targa default" salience -10 activation-group "targaActivation" ruleflow-group "targa" when then city = "No city"; System.out.println("No City!"); end rule "Targa OK Milano" salience 10 activation-group "targaActivation" ruleflow-group "targa" when t : Targa( targaPari(targa) ) then t.setCity("Milano"); city = "Milano"; System.out.println("Milano!!"); update(t) end rule "Targa NO Milano" salience 10 activation-group "targaActivation" ruleflow-group "targa" when t : Targa( !targaPari(targa) ) then t.setCity("Roma"); city = "Roma"; System.out.println("Roma!!"); update(t) end POJO class: package com.sample.targaRule.model; import java.io.Serializable; public class Targa implements Serializable{ private static final long serialVersionUID = 1L; private String nome, cognome, targa, city; public Targa() { super(); } public Targa(String nome, String cognome, String targa) { this.nome = nome; this.cognome = cognome; this.targa = targa; this.city = ""; } public Targa(String nome, String cognome, String targa, String city) { this.nome = nome; this.cognome = cognome; this.targa = targa; this.city = city; } public String getCity() { return city; } public void setCity(String city) { this.city = city; } public String getNome() { return nome; } public void setNome(String nome) { this.nome = nome; } public String getCognome() { return cognome; } public void setCognome(String cognome) { this.cognome = cognome; } public String getTarga() { return targa; } public void setTarga(String targa) { this.targa = targa; } } Utils class: package com.sample.targaRule.util; public class Utils { public static boolean targaPari(String t) { int lastNum = Integer.parseInt(t.substring(4,5)); if (lastNum%2==0) { System.out.println("Checking plate "+t+": last number is "+lastNum+", so even"); return true; } System.out.println("Checking plate "+t+": last number is "+lastNum+", so odd"); return false; } } Here is the body of my request: { "lookup": "targaSession", "commands": [{ "insert": { "object": { "com.sample.targaRule.model.Targa": { "nome": "Donald", "cognome": "Trump", "targa": "ED222ED", "city": "" } }, "out-identifier": "Input" } }, { "fire-all-rules": { "out-identifier": "FireAllRules" } }, { "get-objects": { "out-identifier": "Output" } }, { "get-global": { "out-identifier": "Global", "identifier": "city" } } ] } Here is the response from the server: { "type" : "SUCCESS", "msg" : "Container TargaRuleContainer successfully called.", "result" : { "execution-results" : { "results" : [ { "key" : "Input", "value" : {"com.sample.targaRule.model.Targa":{ "nome" : "Donald", "cognome" : "Trump", "targa" : "ED222ED", "city" : "" }} }, { "key" : "Output", "value" : [{"com.sample.targaRule.model.Targa":{ "nome" : "Donald", "cognome" : 
"Trump", "targa" : "ED222ED", "city" : "" }}] }, { "key" : "FireAllRules", "value" : 0 }, { "key" : "Global" } ], "facts" : [ { "key" : "Input", "value" : {"org.drools.core.common.DefaultFactHandle":{ "external-form" : "0:1:-1324533475:-1324533475:1:DEFAULT:NON_TRAIT:com.sample.targaRule.model.Targa" }} } ] } } } The rule just does a check on the "targa" string field of the pojo class (checks if the third-to-last char is an odd or even number), and sets the other string field "city" accordingly. It also sets a string global variable named "city", and does a System.out.println(). As you can see in the response, the call is successful, but I cannot see the edits in the object I insert, neither I can see the global variable. Moreover, the System.out.println() that is in the first 2 rules consequences is not shown in the server logs, I can just see the print that is done in the Utls class static method I call in the rules conditions. I'm quite sure I am doing something wrong in the request syntax, but I'm having a hard time finding some example online, as the Red Hat documentation is not that verbose. EDIT1: I figured out the problem with the output of the POJO field "city". The problem was the "ruleflow-group" parameter set in the rules body (I need that cause I'm using the same rules within a jbpm process too). I solved by inserting the "auto-focus" parameter in the rules. For what concerns the global variable, doing some reverse engineering I found out that (probably) globals are admitted only in stateful sessions, so I changed my kmodule.xml accordingly, but still I cannot get the global variable "city" in output.
java, windows, websphere, drools, redhat
2
984
0
https://stackoverflow.com/questions/44429454/drools-how-to-get-the-facts-and-globals-in-the-knowledge-base-after-a-rest-call
44,324,898
How to check if process is running on Red Hat Linux?
I've been using a modified class I found to check if another instance of the same process is already running; the problem is that the method of checking for the process adds another instance of the same process. When my application starts, a new process ID is created and is visible with: ps -A | grep "AppName" With this I get a single entry returned. I then check for another instance of the application using: QString strCMD = "ps -A | grep \"" + mcstrAppName + "\""; QProcess objProc; objProc.start("bash", QStringList() << "-c" << strCMD); if ( objProc.waitForStarted() != true || objProc.waitForFinished() != true ) { mcpobjApp->exit(cleanExit(-1, "Unable to determine if another instance is running!")); return; } As soon as the 'start' method is called another instance of the same application appears in the process table, again verified with: ps -A | grep "AppName" Two entries now appear, each with a different PID. I've also tried: QString strOptions = "-A | grep \"" + mcstrAppName + "\""; QProcess objProc; objProc.start("ps", QStringList() << strOptions); The result is the same: two entries in the process table. Is there a way to check the process table for another instance without adding an additional instance?
How to check if process is running on Red Hat Linux? I've been using a modified class I found to check if another instance of the same process is already running; the problem is that the method of checking for the process adds another instance of the same process. When my application starts, a new process ID is created and is visible with: ps -A | grep "AppName" With this I get a single entry returned. I then check for another instance of the application using: QString strCMD = "ps -A | grep \"" + mcstrAppName + "\""; QProcess objProc; objProc.start("bash", QStringList() << "-c" << strCMD); if ( objProc.waitForStarted() != true || objProc.waitForFinished() != true ) { mcpobjApp->exit(cleanExit(-1, "Unable to determine if another instance is running!")); return; } As soon as the 'start' method is called another instance of the same application appears in the process table, again verified with: ps -A | grep "AppName" Two entries now appear, each with a different PID. I've also tried: QString strOptions = "-A | grep \"" + mcstrAppName + "\""; QProcess objProc; objProc.start("ps", QStringList() << strOptions); The result is the same: two entries in the process table. Is there a way to check the process table for another instance without adding an additional instance?
c++, linux, qt, redhat
2
1,071
2
https://stackoverflow.com/questions/44324898/how-to-check-if-process-is-running-on-red-hat-linux
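For the question above, one way to sidestep the problem entirely is to not spawn any helper process (bash/ps/grep) and instead read the process table straight from /proc. The sketch below shows the idea in Python purely as an illustration — in the Qt application the same walk can be done with QDir/QFile, or replaced altogether by a QLockFile/QSharedMemory single-instance guard; the application name is a placeholder.

```python
"""Count running instances of an application by reading /proc directly.

No helper process (bash/ps/grep) is spawned, so the check itself cannot add
entries to the process table. APP_NAME is a placeholder; note that
/proc/<pid>/comm holds at most 15 characters of the executable name.
"""
import os

APP_NAME = "AppName"  # hypothetical application name


def count_other_instances(name):
    count = 0
    for entry in os.listdir("/proc"):
        if not entry.isdigit() or int(entry) == os.getpid():
            continue  # skip non-process entries and this process itself
        try:
            with open("/proc/%s/comm" % entry) as f:
                if f.read().strip() == name:
                    count += 1
        except (IOError, OSError):
            pass  # the process exited between listdir() and open()
    return count


print("other instances running:", count_other_instances(APP_NAME))
```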
43,911,467
Getent group with a long name and multiple spaces - SSSD
So I'm trying to return a group, but I think the string is either too long or it's just not compatible with SSSD. For background: I've already tested this domain for a user and also a group, e.g. getent passwd user1@domain2, and I get a return. I also get the group name I'm looking for when I do groups user1@domain2 Now I need to do a getent group with this group name, and it looks like this: group group1 group2 basic administrator Yes, it's the name of just one group, and yes, it has all these spaces. So I've tried: getent groups 'group group1 group2 basic administrator@domain2' getent groups "group group1 group2 basic administrator@domain2" Is there any other way I can do this? Am I missing something?
Getent group with a long name and multiple spaces - SSSD So I'm trying to return a group, but I think the string is either too long or it's just not compatible with SSSD. For background: I've already tested this domain for a user and also a group, e.g. getent passwd user1@domain2, and I get a return. I also get the group name I'm looking for when I do groups user1@domain2 Now I need to do a getent group with this group name, and it looks like this: group group1 group2 basic administrator Yes, it's the name of just one group, and yes, it has all these spaces. So I've tried: getent groups 'group group1 group2 basic administrator@domain2' getent groups "group group1 group2 basic administrator@domain2" Is there any other way I can do this? Am I missing something?
linux, redhat, sssd
2
1,335
1
https://stackoverflow.com/questions/43911467/getent-group-with-a-long-name-and-multiple-spaces-sssd
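Two notes on the question above: the NSS database name is group (singular) — getent groups ... will fail regardless of quoting — and the whole name must be passed as a single quoted argument, which the asker already did. To rule out shell quoting completely, the lookup can also be driven through NSS programmatically; the Python sketch below is only an illustration, with the group name taken from the question as an example.

```python
"""Resolve a group whose name contains spaces directly through NSS/SSSD.

Using the grp module removes any question of shell quoting; the group name
below is just the example from the question.
"""
import grp

NAME = "group group1 group2 basic administrator@domain2"  # example only

try:
    g = grp.getgrnam(NAME)
    print("gid:", g.gr_gid, "members:", g.gr_mem)
except KeyError:
    print("NSS (and therefore SSSD) could not resolve this group name")
```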
43,860,825
Nano text editor not warning when opening file already open
Context : I recently started using the nano text editor on a new machine (Red Hat based). There's no GUI. I'm doing everything through the terminal. I'm using tmux to run multiple commands at once. Steps to reproduce : Open up an existing file in nano Open up that same file in nano in a separate instance (without terminating the first instance) e.g. in another tmux pane Expected output : Nano gives a warning in step 2 along the lines of this file is currently being edited by nano with PID x. Do you wish to continue? This is what I have experienced on every other machine I've used nano on Observed output : Nano executes step 2 without warning. Only when I try to save the file does it alert me that someone else has the file open. Issue : This is an issue because sometimes I: open file x in instance A modify but don't save do something else forget that I haven't saved my modifications to file x, and forget that it's still open open file x in instance B make more modifications in instance B try to save be warned that 2 instances are opened realise that I've made changes to 2 different unsaved instances of the one file
Nano text editor not warning when opening file already open Context : I recently started using the nano text editor on a new machine (Red Hat based). There's no GUI. I'm doing everything through the terminal. I'm using tmux to run multiple commands at once. Steps to reproduce : Open up an existing file in nano Open up that same file in nano in a separate instance (without terminating the first instance) e.g. in another tmux pane Expected output : Nano gives a warning in step 2 along the lines of this file is currently being edited by nano with PID x. Do you wish to continue? This is what I have experienced on every other machine I've used nano on Observed output : Nano executes step 2 without warning. Only when I try to save the file does it alert me that someone else has the file open. Issue : This is an issue because sometimes I: open file x in instance A modify but don't save do something else forget that I haven't saved my modifications to file x, and forget that it's still open open file x in instance B make more modifications in instance B try to save be warned that 2 instances are opened realise that I've made changes to 2 different unsaved instances of the one file
warnings, edit, redhat, nano
2
1,137
0
https://stackoverflow.com/questions/43860825/nano-text-editor-not-warning-when-opening-file-already-open
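The warning described under "Expected output" above is nano's vim-style file locking, which is off by default and only present in sufficiently recent nano builds (roughly 2.3.2 and later) — that version detail is an assumption worth verifying on the Red Hat machine in question. If the installed nano supports it, it can be enabled per user; the snippet below shows the usual per-user config file purely as an illustration.

```
# ~/.nanorc -- enable vim-style lock files so a second nano instance warns you
set locking
```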
43,062,703
"This is not a login form" is being stored when updating a password in keycloak theme
At our project we have a problem with the Keycloak theme. The Keycloak theme developers placed 2 HTML input boxes that are hidden in the Keycloak theme: <input type="text" readonly value="this is not a login form" style="display: none;"> <input type="password" readonly value="this is not a login form" style="display: none;"> They did this to prevent the browser from filling in the current password. But when you update your password, the browser's password manager tries to store "this is not a login form" as the username and password. Does anyone have an idea how to prevent the password manager from storing "this is not a login form" when updating a user's password, or is this intentional? screenshots
"This is not a login form" is being stored when updating a password in keycloak theme At our project we have a problem with the Keycloak theme. The Keycloak theme developers placed 2 HTML input boxes that are hidden in the Keycloak theme: <input type="text" readonly value="this is not a login form" style="display: none;"> <input type="password" readonly value="this is not a login form" style="display: none;"> They did this to prevent the browser from filling in the current password. But when you update your password, the browser's password manager tries to store "this is not a login form" as the username and password. Does anyone have an idea how to prevent the password manager from storing "this is not a login form" when updating a user's password, or is this intentional? screenshots
redhat, keycloak
2
905
1
https://stackoverflow.com/questions/43062703/this-is-not-a-login-form-is-being-stored-when-updating-a-password-in-keycloak
42,923,763
Sorting in PostgreSQL with UTF-8
I have a funny sorting problem with a UTF-8 database in PostgreSQL 9.4. My DB settings are: Collation en_US.utf8 Character type en_US.utf8 Encoding UTF8 I create a simple table with one text column: CREATE TABLE testtable ( testfield character varying(333) COLLATE pg_catalog."en_US.utf8" ); I insert data: insert into testtable values('bla,') insert into testtable values('bla.') insert into testtable values('bla f') insert into testtable values('bla, f') insert into testtable values('blaf.') I select the data ordered: select * from testtable order by testfield asc I get this (to me, wrong) order: 'bla,' <-- 'bla.' 'bla f' 'bla, f' <-- 'blaf.' When I use: select * from testtable order by convert_to(testfield, 'UTF-8') asc it is right: 'bla f' 'bla,' <-- 'bla, f' <-- 'bla.' 'blaf.' Does anybody know why? Thanks.
Sorting in PostgreSQL with UTF-8 I have a funny sorting problem with a UTF-8 database in PostgreSQL 9.4 My DB settings are: Collation en_US.utf8 Character type en_US.utf8 Encoding UTF8 I create a simple table with one text column: CREATE TABLE testtable ( testfield character varying(333) COLLATE pg_catalog."en_US.utf8" ); I insert data: insert into testtable values('bla,') insert into testtable values('bla.') insert into testtable values('bla f') insert into testtable values('bla, f') insert into testtable values('blaf.') I select the data ordered: select * from testtable order by testfield asc I get this wrong order: 'bla,' <-- 'bla.' 'bla f' 'bla, f' <-- 'blaf.' When I use: select * from testtable order by convert_to(testfield, 'UTF-8') asc It is right: 'bla f' 'bla,' <-- 'bla, f' <-- 'bla.' 'blaf.' Does anybody know why? Thanks.
utf-8, redhat, postgresql-9.4
2
1,146
0
https://stackoverflow.com/questions/42923763/sorting-in-postgresql-with-utf-8
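A note on the PostgreSQL sorting question above: under the en_US.utf8 collation, glibc largely ignores punctuation and spaces in its first comparison pass, whereas ordering by convert_to(testfield, 'UTF-8') compares raw bytes, which is why the two ORDER BY clauses disagree. A minimal Python sketch (assuming a glibc system with the en_US.UTF-8 locale installed; the sample strings are taken from the question) reproduces both orderings outside the database:

import locale

data = ['bla,', 'bla.', 'bla f', 'bla, f', 'blaf.']

# Plain codepoint/byte ordering, comparable to ORDER BY convert_to(testfield, 'UTF-8')
print(sorted(data))

# Locale-aware ordering via glibc's strxfrm, comparable to ORDER BY testfield
# under en_US.utf8, where punctuation and spaces carry almost no weight.
locale.setlocale(locale.LC_COLLATE, 'en_US.UTF-8')
print(sorted(data, key=locale.strxfrm))

On such a system the second ordering typically matches the "wrong-looking" result in the question, i.e. the database is following the collation rather than misbehaving; ORDER BY testfield COLLATE "C" gives the bytewise order directly.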
42,848,087
libgflags invalid, checking gflags viability failed
I am trying to install folly on RedHat 7.2, and I have downloaded the gflags source code from GitHub, compiled and installed it. However, the Folly installation still fails. Does anyone have some idea? Your help is appreciated. ERROR MESSAGE shown below: checking for main in -lgflags... yes checking for gflags viability... no configure: error: "libgflags invalid, see config.log for details"
libgflags invalid, checking gflags viability failed I am trying to install folly on RedHat 7.2, and I have downloaded the gflags source code from GitHub, compiled and installed it. However, the Folly installation still fails. Does anyone have some idea? Your help is appreciated. ERROR MESSAGE shown below: checking for main in -lgflags... yes checking for gflags viability... no configure: error: "libgflags invalid, see config.log for details"
redhat, gflags
2
248
0
https://stackoverflow.com/questions/42848087/libgflags-invalid-checking-gflags-viability-failed
42,350,926
Difference between AWS Community AMI RHEL and Marketplace RHEL?
We have an AWS community AMI for RHEL provided by Red Hat, so why must one go to the AWS Marketplace to get a RHEL subscription? What is the difference between the two: the RHEL Community AMI (provided by Red Hat) vs. AWS Marketplace RHEL? I believe community AMIs are free of charge. When AWS shows EC2 pricing for different OSes like Linux (free), RHEL (chargeable), SUSE (chargeable) etc., does the RHEL OS pricing include the RHEL community AMI (provided by Red Hat), or is it only applicable to an AWS Marketplace RHEL subscription? EC2 On Demand pricing (by OS): [URL]
Difference between AWS Community AMI RHEL and Marketplace RHEL? We have an AWS community AMI for RHEL provided by Red Hat, so why must one go to the AWS Marketplace to get a RHEL subscription? What is the difference between the two: the RHEL Community AMI (provided by Red Hat) vs. AWS Marketplace RHEL? I believe community AMIs are free of charge. When AWS shows EC2 pricing for different OSes like Linux (free), RHEL (chargeable), SUSE (chargeable) etc., does the RHEL OS pricing include the RHEL community AMI (provided by Red Hat), or is it only applicable to an AWS Marketplace RHEL subscription? EC2 On Demand pricing (by OS): [URL]
amazon-web-services, amazon-ec2, redhat, rhel, amazon-ami
2
1,943
2
https://stackoverflow.com/questions/42350926/difference-between-aws-community-ami-rhel-and-marketplace-rhel
42,241,405
PHP/Redhat7: where do I find php-mcrypt package?
I need to use the module. I tried searching for it and it's not available! I've read that mcrypt_encrypt has been deprecated in PHP 7+. But I've also read that mcrypt_encrypt and openssl_encrypt will not give the same output, and the 3rd-party app (abandoned by the vendor) that I'm passing this value to expects me to use the "MCRYPT_3DES" cipher. This is the code I'm trying to run: $text = mcrypt_encrypt(MCRYPT_3DES, $key, $tempstring, MCRYPT_MODE_CBC, ''); Any ideas? Thanks [user@my-here.com]$ sudo yum list vailable "*mcrypt*" Loaded plugins: langpacks, product-id, search-disabled-repos, subscription- : manager This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register. Error: No matching Packages to list [user@my-here.com]$ sudo yum list vailable "*mbstring*" Loaded plugins: langpacks, product-id, search-disabled-repos, subscription- : manager This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register. Installed Packages php-mbstring.x86_64 5.4.16-42.el7 @development-RHEL-7-x86_64-Optional Available Packages php54-php-mbstring.x86_64 5.4.40-4.el7 development-RHEL-7-x86_64-RHSCL php55-php-mbstring.x86_64 5.5.21-5.el7 development-RHEL-7-x86_64-RHSCL rh-php56-php-mbstring.x86_64 5.6.25-1.el7 development-RHEL-7-x86_64-RHSCL rh-php70-php-mbstring.x86_64 7.0.10-2.el7 development-RHEL-7-x86_64-RHSCL
PHP/Redhat7: where do I find php-mcrypt package? I need to use the module. I tried searching for it and it's not available! I've read that mcrypt_encrypt has been deprecated in PHP 7+. But I've also read that mcrypt_encrypt and openssl_encrypt will not give the same output, and the 3rd-party app (abandoned by the vendor) that I'm passing this value to expects me to use the "MCRYPT_3DES" cipher. This is the code I'm trying to run: $text = mcrypt_encrypt(MCRYPT_3DES, $key, $tempstring, MCRYPT_MODE_CBC, ''); Any ideas? Thanks [user@my-here.com]$ sudo yum list vailable "*mcrypt*" Loaded plugins: langpacks, product-id, search-disabled-repos, subscription- : manager This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register. Error: No matching Packages to list [user@my-here.com]$ sudo yum list vailable "*mbstring*" Loaded plugins: langpacks, product-id, search-disabled-repos, subscription- : manager This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register. Installed Packages php-mbstring.x86_64 5.4.16-42.el7 @development-RHEL-7-x86_64-Optional Available Packages php54-php-mbstring.x86_64 5.4.40-4.el7 development-RHEL-7-x86_64-RHSCL php55-php-mbstring.x86_64 5.5.21-5.el7 development-RHEL-7-x86_64-RHSCL rh-php56-php-mbstring.x86_64 5.6.25-1.el7 development-RHEL-7-x86_64-RHSCL rh-php70-php-mbstring.x86_64 7.0.10-2.el7 development-RHEL-7-x86_64-RHSCL
php, redhat
2
454
0
https://stackoverflow.com/questions/42241405/php-redhat7-where-do-i-find-php-mcrypt-package
41,899,858
Same code and same data run on Tensorflow on Ubuntu (16.04-16.10) or Redhat (7.3) gives very different results
I'm running a Convolutional Neural Network script using tensorflow on the same python version using the anaconda distribution (Python 2.7.12 :: Anaconda 4.2.0 (64-bit)) on the exact same dataset, on 2 different machines: LSB Version: :core-4.1-amd64:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1- noarch:desktop-4.1-amd64:desktop-4.1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.1-amd64:printing-4.1-noarch Distributor ID: RedHatEnterpriseWorkstation Description: Red Hat Enterprise Linux Workstation release 7.3 (Maipo) Release: 7.3 and Distributor ID: Ubuntu Description: Ubuntu 16.10 Release: 16.10 On both of the machines I have installed tensorflow using pip install tensorflow (v0.12) and before doing that I ran a conda update --all to make sure I had all the packages at the same versions. Here the fun part starts. Only on the RedHat machine does the ADAM optimizer converge (I predict the 3d location of neurons and it goes below 10um in around 500 epochs and the final accuracy is around 5um on the validation set). Exactly the same code and exactly the same data (cloned as they are from git) on the Ubuntu machine give much worse results: after 500 iterations the error is still around 48 um (the starting 'random' accuracy is 69 um) and the final accuracy is around 56um on the validation set. Now, I checked that the data are exactly the same, they are shuffled with the same random seed and the training and validation sets are the same. It only seems that the ADAM optimizer (or others as well) does not converge on the Ubuntu system, while it easily converges on the Redhat one. These are the first iterations: RedHat step 0 , training accuracy 81.1025 step 50 , training accuracy 30.0194 step 100 , training accuracy 25.263 step 150 , training accuracy 19.4822 step 200 , training accuracy 12.0292 step 250 , training accuracy 8.85796 step 300 , training accuracy 7.88442 step 350 , training accuracy 7.20183 step 400 , training accuracy 7.10236 step 450 , training accuracy 6.14335 step 500 , training accuracy 6.20344 Ubuntu step 0 , training accuracy 69.9108 step 50 , training accuracy 57.8822 step 100 , training accuracy 56.905 step 150 , training accuracy 54.9463 step 200 , training accuracy 53.7637 step 250 , training accuracy 53.3795 step 300 , training accuracy 50.9828 step 350 , training accuracy 50.4627 step 400 , training accuracy 48.7606 step 450 , training accuracy 47.8309 step 500 , training accuracy 47.8226 On the Redhat machine the CNN also converges using other optimizers. I tried the same code and the same data on another machine with the same Redhat version installed and on Ubuntu 16.04 and I got the same weird results: working properly on Redhat, and not converging on Ubuntu. I have no idea why the results I get are so different, since I checked that all the installed package versions are the same.
Same code and same data run on Tensorflow on Ubuntu (16.04-16.10) or Redhat (7.3) gives very different results I'm running a Convolutional Neural Network script using tensorflow on the same python version using the anaconda distribution (Python 2.7.12 :: Anaconda 4.2.0 (64-bit)) on the exact same dataset, on 2 different machines: LSB Version: :core-4.1-amd64:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1- noarch:desktop-4.1-amd64:desktop-4.1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.1-amd64:printing-4.1-noarch Distributor ID: RedHatEnterpriseWorkstation Description: Red Hat Enterprise Linux Workstation release 7.3 (Maipo) Release: 7.3 and Distributor ID: Ubuntu Description: Ubuntu 16.10 Release: 16.10 On both of the machines I have installed tensorflow using pip install tensorflow (v0.12) and before doing that I ran a conda update --all to make sure I had all the packages at the same versions. Here the fun part starts. Only on the RedHat machine does the ADAM optimizer converge (I predict the 3d location of neurons and it goes below 10um in around 500 epochs and the final accuracy is around 5um on the validation set). Exactly the same code and exactly the same data (cloned as they are from git) on the Ubuntu machine give much worse results: after 500 iterations the error is still around 48 um (the starting 'random' accuracy is 69 um) and the final accuracy is around 56um on the validation set. Now, I checked that the data are exactly the same, they are shuffled with the same random seed and the training and validation sets are the same. It only seems that the ADAM optimizer (or others as well) does not converge on the Ubuntu system, while it easily converges on the Redhat one. These are the first iterations: RedHat step 0 , training accuracy 81.1025 step 50 , training accuracy 30.0194 step 100 , training accuracy 25.263 step 150 , training accuracy 19.4822 step 200 , training accuracy 12.0292 step 250 , training accuracy 8.85796 step 300 , training accuracy 7.88442 step 350 , training accuracy 7.20183 step 400 , training accuracy 7.10236 step 450 , training accuracy 6.14335 step 500 , training accuracy 6.20344 Ubuntu step 0 , training accuracy 69.9108 step 50 , training accuracy 57.8822 step 100 , training accuracy 56.905 step 150 , training accuracy 54.9463 step 200 , training accuracy 53.7637 step 250 , training accuracy 53.3795 step 300 , training accuracy 50.9828 step 350 , training accuracy 50.4627 step 400 , training accuracy 48.7606 step 450 , training accuracy 47.8309 step 500 , training accuracy 47.8226 On the Redhat machine the CNN also converges using other optimizers. I tried the same code and the same data on another machine with the same Redhat version installed and on Ubuntu 16.04 and I got the same weird results: working properly on Redhat, and not converging on Ubuntu. I have no idea why the results I get are so different, since I checked that all the installed package versions are the same.
python, python-2.7, tensorflow, redhat, ubuntu-16.04
2
229
0
https://stackoverflow.com/questions/41899858/same-code-and-same-data-run-on-tensorflow-on-ubuntu-16-04-16-10-or-redhat-7-3
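A note on the TensorFlow question above: one common way to narrow down divergence like this is to pin every random seed before building the graph, so that weight initialization and data shuffling are identical on both machines. A minimal sketch (assuming the TensorFlow 0.12-era API from the question; the seed value 42 is purely illustrative):

import random
import numpy as np
import tensorflow as tf

SEED = 42                  # illustrative value, not taken from the question
random.seed(SEED)          # Python-level RNG (plain-Python shuffling, sampling, etc.)
np.random.seed(SEED)       # NumPy RNG (data shuffling / augmentation)
tf.set_random_seed(SEED)   # graph-level seed; individual ops may still set op-level seeds

If the two machines still diverge with identical seeds and identical data, comparing np.show_config() and the CPU/BLAS features each TensorFlow wheel was built against is a reasonable next step, since numerical differences in the underlying math libraries are a plausible (though unconfirmed) cause here.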
41,549,032
Performance degradation with RHEL 6.5 and Apache Camel 2.15.0
I am using Apache Camel 2.15.0 version in my web application and using the Rest Component for defining the REST endpoints using Java DSL. The application runs properly with Windows platform and Ubuntu 16.04, however I had seen the very poor performance while testing with the RHEL 6.5 and 7.3. After enabling the logs for RHEL, I had noticed that for each HTTP request it takes long time from Servlet Filter to the RouteBuilder. What can be the possible causes behind this ? I am attaching the logs to get better idea: 2017-01-10 12:28:37,005 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:28:37,005 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:28:37,005 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:28:38,728 [DEBUG] - Returning cached instance of singleton bean 'dataSource' 2017-01-10 12:28:38,729 [DEBUG] - Executing prepared SQL query 2017-01-10 12:28:38,729 [DEBUG] - Executing prepared SQL statement [] 2017-01-10 12:28:38,729 [DEBUG] - Fetching JDBC Connection from DataSource 2017-01-10 12:28:38,730 [DEBUG] - Returning JDBC Connection to DataSource 2017-01-10 12:28:38,731 [DEBUG] - Returning cached instance of singleton bean 'dataSource' 2017-01-10 12:28:38,731 [DEBUG] - Executing prepared SQL query 2017-01-10 12:28:38,731 [DEBUG] - Executing prepared SQL statement [] 2017-01-10 12:28:38,731 [DEBUG] - Fetching JDBC Connection from DataSource 2017-01-10 12:28:38,734 [DEBUG] - Returning JDBC Connection to DataSource 2017-01-10 12:28:38,790 [DEBUG] - >>>> Endpoint[bean://m?method=getDynamicMessagesForSection%28%24%7Bheader.owner%7D%2C%24%7Bheader.owner_roles%7D%2C%24%7Bheader.section%7D%2C%24%7Bheader.timezone%7D%29] Exchange[HttpMessage@0x6f043df0] 2017-01-10 12:28:38,790 [DEBUG] - >>>> Endpoint[bean://m?method=getDynamicMessagesForSection%28%24%7Bheader.owner%7D%2C%24%7Bheader.owner_roles%7D%2C%24%7Bheader.section%7D%2C%24%7Bheader.timezone%7D%29] Exchange[HttpMessage@0x6f043df0] 2017-01-10 12:28:38,790 [DEBUG] - Returning cached instance of singleton bean 'm' 2017-01-10 12:28:38,790 [DEBUG] - Returning cached instance of singleton bean 'dataSource' 2017-01-10 12:28:38,791 [DEBUG] - Executing prepared SQL query 2017-01-10 12:28:38,791 [DEBUG] - Executing prepared SQL statement 2017-01-10 12:28:38,791 [DEBUG] - Fetching JDBC Connection from DataSource 2017-01-10 12:28:38,793 [DEBUG] - Returning JDBC Connection to DataSource 2017-01-10 12:28:38,793 [DEBUG] - Setting bean invocation result on the OUT message: [] 2017-01-10 12:28:38,793 [DEBUG] - Setting bean invocation result on the OUT message: [] 2017-01-10 12:28:38,793 [DEBUG] - Processing onAfterRoute: Exchange[Message: [ ]] 2017-01-10 12:28:38,793 [DEBUG] - Processing onAfterRoute: Exchange[Message: [ ]] 2017-01-10 12:28:38,794 [DEBUG] - Returning cached instance of singleton bean 'dataSource' 2017-01-10 12:28:38,794 [DEBUG] - Executing prepared SQL query 2017-01-10 12:28:38,794 [DEBUG] - Executing prepared SQL statement 2017-01-10 12:28:38,794 [DEBUG] - Fetching JDBC Connection from DataSource 2017-01-10 12:28:38,796 [DEBUG] - Returning JDBC Connection to DataSource 2017-01-10 12:28:38,796 [DEBUG] - Setting bean invocation result on the OUT message: 2 2017-01-10 12:28:38,796 [DEBUG] - Setting bean invocation result on the OUT message: 2 2017-01-10 12:28:38,797 [DEBUG] - Streaming response in chunked mode with buffer size 8192 2017-01-10 12:28:38,797 [DEBUG] - Streaming response in chunked mode with buffer size 8192 2017-01-10 12:28:38,985 [DEBUG] - Returning cached instance of singleton bean 'dataSource' 
2017-01-10 12:28:38,986 [DEBUG] - Executing prepared SQL query 2017-01-10 12:28:38,986 [DEBUG] - Executing prepared SQL statement 2017-01-10 12:28:38,986 [DEBUG] - Fetching JDBC Connection from DataSource 2017-01-10 12:28:38,986 [DEBUG] - Returning cached instance of singleton bean 'dataSource' 2017-01-10 12:28:38,986 [DEBUG] - Executing prepared SQL query 2017-01-10 12:28:38,986 [DEBUG] - Executing prepared SQL statement 2017-01-10 12:28:38,986 [DEBUG] - Fetching JDBC Connection from DataSource 2017-01-10 12:28:38,988 [DEBUG] - Returning JDBC Connection to DataSource 2017-01-10 12:28:38,988 [DEBUG] - Returning cached instance of singleton bean 'dataSource' 2017-01-10 12:28:38,988 [DEBUG] - Executing prepared SQL query 2017-01-10 12:28:38,988 [DEBUG] - Executing prepared SQL statement 2017-01-10 12:28:38,989 [DEBUG] - Fetching JDBC Connection from DataSource 2017-01-10 12:28:38,989 [DEBUG] - Returning JDBC Connection to DataSource 2017-01-10 12:28:38,989 [DEBUG] - Returning cached instance of singleton bean 'dataSource' 2017-01-10 12:28:38,990 [DEBUG] - Executing prepared SQL query 2017-01-10 12:28:38,990 [DEBUG] - Executing prepared SQL statement 2017-01-10 12:28:38,990 [DEBUG] - Fetching JDBC Connection from DataSource 2017-01-10 12:28:38,991 [DEBUG] - Returning JDBC Connection to DataSource 2017-01-10 12:28:38,997 [DEBUG] - Returning JDBC Connection to DataSource 2017-01-10 12:28:39,954 [DEBUG] - Returning cached instance of singleton bean 'dataSource' 2017-01-10 12:28:39,954 [DEBUG] - Executing prepared SQL query 2017-01-10 12:28:39,954 [DEBUG] - Executing prepared SQL statement 2017-01-10 12:28:39,954 [DEBUG] - Fetching JDBC Connection from DataSource 2017-01-10 12:28:39,956 [DEBUG] - Returning JDBC Connection to DataSource 2017-01-10 12:28:39,956 [DEBUG] - Returning cached instance of singleton bean 'dataSource' 2017-01-10 12:28:39,957 [DEBUG] - Executing prepared SQL query 2017-01-10 12:28:39,957 [DEBUG] - Executing prepared SQL statement 2017-01-10 12:28:39,957 [DEBUG] - Fetching JDBC Connection from DataSource 2017-01-10 12:28:39,958 [DEBUG] - Returning JDBC Connection to DataSource 2017-01-10 12:28:56,193 [DEBUG] - batch acquisition of 0 triggers 2017-01-10 12:28:56,193 [DEBUG] - batch acquisition of 0 triggers 2017-01-10 12:28:57,007 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:28:57,007 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:29:17,013 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:29:17,013 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:29:26,167 [DEBUG] - batch acquisition of 0 triggers 2017-01-10 12:29:26,167 [DEBUG] - batch acquisition of 0 triggers 2017-01-10 12:29:37,015 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:29:37,015 [DEBUG] - ClusterManager: Check-in complete. 
2017-01-10 12:29:55,218 [DEBUG] - batch acquisition of 0 triggers 2017-01-10 12:29:55,218 [DEBUG] - batch acquisition of 0 triggers 2017-01-10 12:29:56,051 [DEBUG] - >>>> Endpoint[bean://q?method=getAllRoles%28%24%7Bheader.owner%7D%29] Exchange[HttpMessage@0x77233160] 2017-01-10 12:29:56,051 [DEBUG] - >>>> Endpoint[bean://q?method=getAllRoles%28%24%7Bheader.owner%7D%29] Exchange[HttpMessage@0x77233160] 2017-01-10 12:29:56,051 [DEBUG] - Returning cached instance of singleton bean 'q' 2017-01-10 12:29:56,051 [DEBUG] - Returning cached instance of singleton bean 'dataSource' 2017-01-10 12:29:56,051 [DEBUG] - Executing SQL query [] 2017-01-10 12:29:56,051 [DEBUG] - Fetching JDBC Connection from DataSource 2017-01-10 12:29:56,052 [DEBUG] - Returning JDBC Connection to DataSource 2017-01-10 12:29:56,052 [DEBUG] - Setting bean invocation result on the OUT message: [] 2017-01-10 12:29:56,052 [DEBUG] - Setting bean invocation result on the OUT message: [] 2017-01-10 12:29:56,053 [DEBUG] - Processing onAfterRoute: Exchange[Message: [ ]] 2017-01-10 12:29:56,053 [DEBUG] - Processing onAfterRoute: Exchange[Message: [ ]] 2017-01-10 12:29:56,053 [DEBUG] - Returning cached instance of singleton bean 'dataSource' 2017-01-10 12:29:56,053 [DEBUG] - Executing prepared SQL query 2017-01-10 12:29:56,053 [DEBUG] - Executing prepared SQL statement [] 2017-01-10 12:29:56,053 [DEBUG] - Fetching JDBC Connection from DataSource 2017-01-10 12:29:56,054 [DEBUG] - Returning JDBC Connection to DataSource 2017-01-10 12:29:56,054 [DEBUG] - Setting bean invocation result on the OUT message: 2 2017-01-10 12:29:56,054 [DEBUG] - Setting bean invocation result on the OUT message: 2 2017-01-10 12:29:56,055 [DEBUG] - Streaming response in chunked mode with buffer size 8192 2017-01-10 12:29:56,055 [DEBUG] - Streaming response in chunked mode with buffer size 8192 2017-01-10 12:29:57,020 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:29:57,020 [DEBUG] - ClusterManager: Check-in complete. 
2017-01-10 12:30:16,234 [DEBUG] - >>>> Endpoint[bean://f?method=getFilesAndFolder%28%24%7Bheader.category%7D%2C%24%7Bheader.owner%7D%2C%24%7Bheader.archiveMode%7D%2C%24%7Bheader.timezone%7D%29] Exchange[HttpMessage@0x79501044] 2017-01-10 12:30:16,234 [DEBUG] - >>>> Endpoint[bean://f?method=getFilesAndFolder%28%24%7Bheader.category%7D%2C%24%7Bheader.owner%7D%2C%24%7Bheader.archiveMode%7D%2C%24%7Bheader.timezone%7D%29] Exchange[HttpMessage@0x79501044] 2017-01-10 12:30:16,234 [DEBUG] - Returning cached instance of singleton bean 'f' 2017-01-10 12:30:16,234 [DEBUG] - Returning cached instance of singleton bean 'dataSource' 2017-01-10 12:30:16,234 [DEBUG] - Executing prepared SQL query 2017-01-10 12:30:16,234 [DEBUG] - Executing prepared SQL statement [] 2017-01-10 12:30:16,234 [DEBUG] - Fetching JDBC Connection from DataSource 2017-01-10 12:30:16,235 [DEBUG] - Returning JDBC Connection to DataSource 2017-01-10 12:30:16,235 [DEBUG] - Returning cached instance of singleton bean 'dataSource' 2017-01-10 12:30:16,235 [DEBUG] - Executing prepared SQL query 2017-01-10 12:30:16,236 [DEBUG] - Executing prepared SQL statement 2017-01-10 12:30:16,236 [DEBUG] - Fetching JDBC Connection from DataSource 2017-01-10 12:30:16,239 [DEBUG] - Returning JDBC Connection to DataSource 2017-01-10 12:30:16,239 [DEBUG] - Setting bean invocation result on the OUT message: [Offices1/xmlDATASETSAdminPVT1Admin2017-01-09 16:30:50.012017-01-09 16:30:50.0Admin] 2017-01-10 12:30:16,239 [DEBUG] - Setting bean invocation result on the OUT message: [Offices1/xmlDATASETSAdminPVT1Admin2017-01-09 16:30:50.012017-01-09 16:30:50.0Admin] 2017-01-10 12:30:16,240 [DEBUG] - Processing onAfterRoute: Exchange[Message: [ ]] 2017-01-10 12:30:16,240 [DEBUG] - Processing onAfterRoute: Exchange[Message: []] 2017-01-10 12:30:16,240 [DEBUG] - Returning cached instance of singleton bean 'dataSource' 2017-01-10 12:30:16,240 [DEBUG] - Executing prepared SQL query 2017-01-10 12:30:16,240 [DEBUG] - Executing prepared SQL statement [] 2017-01-10 12:30:16,240 [DEBUG] - Fetching JDBC Connection from DataSource 2017-01-10 12:30:16,241 [DEBUG] - Returning JDBC Connection to DataSource 2017-01-10 12:30:16,241 [DEBUG] - Setting bean invocation result on the OUT message: 2 2017-01-10 12:30:16,241 [DEBUG] - Setting bean invocation result on the OUT message: 2 2017-01-10 12:30:16,242 [DEBUG] - Streaming response in chunked mode with buffer size 8192 2017-01-10 12:30:16,242 [DEBUG] - Streaming response in chunked mode with buffer size 8192 2017-01-10 12:30:17,020 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:30:17,020 [DEBUG] - ClusterManager: Check-in complete. 
2017-01-10 12:30:22,175 [DEBUG] - batch acquisition of 0 triggers 2017-01-10 12:30:22,175 [DEBUG] - batch acquisition of 0 triggers 2017-01-10 12:30:31,056 [DEBUG] - >>>> Endpoint[bean://u?method=getAllUsers%28%24%7Bheader.userName%7D%29] Exchange[HttpMessage@0x272487ca] 2017-01-10 12:30:31,056 [DEBUG] - >>>> Endpoint[bean://u?method=getAllUsers%28%24%7Bheader.userName%7D%29] Exchange[HttpMessage@0x272487ca] 2017-01-10 12:30:31,056 [DEBUG] - Returning cached instance of singleton bean 'u' 2017-01-10 12:30:31,056 [DEBUG] - Returning cached instance of singleton bean 'dataSource' 2017-01-10 12:30:31,057 [DEBUG] - Executing prepared SQL query 2017-01-10 12:30:31,057 [DEBUG] - Executing prepared SQL statement [select * from ai_user] 2017-01-10 12:30:31,057 [DEBUG] - Fetching JDBC Connection from DataSource 2017-01-10 12:30:31,059 [DEBUG] - Returning JDBC Connection to DataSource 2017-01-10 12:30:31,059 [DEBUG] - Setting bean invocation result on the OUT message: [com.b.UserBean@7d6cbe02] 2017-01-10 12:30:31,059 [DEBUG] - Setting bean invocation result on the OUT message: [com.b.UserBean@7d6cbe02] 2017-01-10 12:30:31,061 [DEBUG] - Processing onAfterRoute: Exchange[Message: []] 2017-01-10 12:30:31,061 [DEBUG] - Processing onAfterRoute: Exchange[Message: []] 2017-01-10 12:30:31,063 [DEBUG] - Returning cached instance of singleton bean 'dataSource' 2017-01-10 12:30:31,063 [DEBUG] - Executing prepared SQL query 2017-01-10 12:30:31,063 [DEBUG] - Executing prepared SQL statement 2017-01-10 12:30:31,063 [DEBUG] - Fetching JDBC Connection from DataSource 2017-01-10 12:30:31,064 [DEBUG] - Returning JDBC Connection to DataSource 2017-01-10 12:30:31,064 [DEBUG] - Setting bean invocation result on the OUT message: 2 2017-01-10 12:30:31,064 [DEBUG] - Setting bean invocation result on the OUT message: 2 2017-01-10 12:30:31,065 [DEBUG] - Streaming response in chunked mode with buffer size 8192 2017-01-10 12:30:31,065 [DEBUG] - Streaming response in chunked mode with buffer size 8192 2017-01-10 12:30:37,042 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:30:37,042 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:30:50,462 [DEBUG] - batch acquisition of 0 triggers 2017-01-10 12:30:50,462 [DEBUG] - batch acquisition of 0 triggers 2017-01-10 12:30:57,034 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:30:57,034 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:31:17,031 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:31:17,031 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:31:18,901 [DEBUG] - batch acquisition of 0 triggers 2017-01-10 12:31:18,901 [DEBUG] - batch acquisition of 0 triggers 2017-01-10 12:31:37,035 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:31:37,035 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:31:43,373 [DEBUG] - batch acquisition of 0 triggers 2017-01-10 12:31:43,373 [DEBUG] - batch acquisition of 0 triggers 2017-01-10 12:31:57,033 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:31:57,033 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:32:07,289 [DEBUG] - batch acquisition of 0 triggers 2017-01-10 12:32:07,289 [DEBUG] - batch acquisition of 0 triggers 2017-01-10 12:32:17,033 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:32:17,033 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:32:34,610 [DEBUG] - batch acquisition of 0 triggers 2017-01-10 12:32:34,610 [DEBUG] - batch acquisition of 0 triggers 2017-01-10 12:32:37,037 [DEBUG] - ClusterManager: Check-in complete. 
2017-01-10 12:32:37,037 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:32:57,038 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:32:57,038 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:32:59,451 [DEBUG] - batch acquisition of 0 triggers 2017-01-10 12:32:59,451 [DEBUG] - batch acquisition of 0 triggers 2017-01-10 12:33:17,039 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:33:17,039 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:33:22,677 [DEBUG] - batch acquisition of 0 triggers 2017-01-10 12:33:22,677 [DEBUG] - batch acquisition of 0 triggers
Performance degradation with RHEL 6.5 and Apache Camel 2.15.0 I am using Apache Camel 2.15.0 version in my web application and using the Rest Component for defining the REST endpoints using Java DSL. The application runs properly with Windows platform and Ubuntu 16.04, however I had seen the very poor performance while testing with the RHEL 6.5 and 7.3. After enabling the logs for RHEL, I had noticed that for each HTTP request it takes long time from Servlet Filter to the RouteBuilder. What can be the possible causes behind this ? I am attaching the logs to get better idea: 2017-01-10 12:28:37,005 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:28:37,005 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:28:37,005 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:28:38,728 [DEBUG] - Returning cached instance of singleton bean 'dataSource' 2017-01-10 12:28:38,729 [DEBUG] - Executing prepared SQL query 2017-01-10 12:28:38,729 [DEBUG] - Executing prepared SQL statement [] 2017-01-10 12:28:38,729 [DEBUG] - Fetching JDBC Connection from DataSource 2017-01-10 12:28:38,730 [DEBUG] - Returning JDBC Connection to DataSource 2017-01-10 12:28:38,731 [DEBUG] - Returning cached instance of singleton bean 'dataSource' 2017-01-10 12:28:38,731 [DEBUG] - Executing prepared SQL query 2017-01-10 12:28:38,731 [DEBUG] - Executing prepared SQL statement [] 2017-01-10 12:28:38,731 [DEBUG] - Fetching JDBC Connection from DataSource 2017-01-10 12:28:38,734 [DEBUG] - Returning JDBC Connection to DataSource 2017-01-10 12:28:38,790 [DEBUG] - >>>> Endpoint[bean://m?method=getDynamicMessagesForSection%28%24%7Bheader.owner%7D%2C%24%7Bheader.owner_roles%7D%2C%24%7Bheader.section%7D%2C%24%7Bheader.timezone%7D%29] Exchange[HttpMessage@0x6f043df0] 2017-01-10 12:28:38,790 [DEBUG] - >>>> Endpoint[bean://m?method=getDynamicMessagesForSection%28%24%7Bheader.owner%7D%2C%24%7Bheader.owner_roles%7D%2C%24%7Bheader.section%7D%2C%24%7Bheader.timezone%7D%29] Exchange[HttpMessage@0x6f043df0] 2017-01-10 12:28:38,790 [DEBUG] - Returning cached instance of singleton bean 'm' 2017-01-10 12:28:38,790 [DEBUG] - Returning cached instance of singleton bean 'dataSource' 2017-01-10 12:28:38,791 [DEBUG] - Executing prepared SQL query 2017-01-10 12:28:38,791 [DEBUG] - Executing prepared SQL statement 2017-01-10 12:28:38,791 [DEBUG] - Fetching JDBC Connection from DataSource 2017-01-10 12:28:38,793 [DEBUG] - Returning JDBC Connection to DataSource 2017-01-10 12:28:38,793 [DEBUG] - Setting bean invocation result on the OUT message: [] 2017-01-10 12:28:38,793 [DEBUG] - Setting bean invocation result on the OUT message: [] 2017-01-10 12:28:38,793 [DEBUG] - Processing onAfterRoute: Exchange[Message: [ ]] 2017-01-10 12:28:38,793 [DEBUG] - Processing onAfterRoute: Exchange[Message: [ ]] 2017-01-10 12:28:38,794 [DEBUG] - Returning cached instance of singleton bean 'dataSource' 2017-01-10 12:28:38,794 [DEBUG] - Executing prepared SQL query 2017-01-10 12:28:38,794 [DEBUG] - Executing prepared SQL statement 2017-01-10 12:28:38,794 [DEBUG] - Fetching JDBC Connection from DataSource 2017-01-10 12:28:38,796 [DEBUG] - Returning JDBC Connection to DataSource 2017-01-10 12:28:38,796 [DEBUG] - Setting bean invocation result on the OUT message: 2 2017-01-10 12:28:38,796 [DEBUG] - Setting bean invocation result on the OUT message: 2 2017-01-10 12:28:38,797 [DEBUG] - Streaming response in chunked mode with buffer size 8192 2017-01-10 12:28:38,797 [DEBUG] - Streaming response in chunked mode with buffer size 8192 2017-01-10 12:28:38,985 
[DEBUG] - Returning cached instance of singleton bean 'dataSource' 2017-01-10 12:28:38,986 [DEBUG] - Executing prepared SQL query 2017-01-10 12:28:38,986 [DEBUG] - Executing prepared SQL statement 2017-01-10 12:28:38,986 [DEBUG] - Fetching JDBC Connection from DataSource 2017-01-10 12:28:38,986 [DEBUG] - Returning cached instance of singleton bean 'dataSource' 2017-01-10 12:28:38,986 [DEBUG] - Executing prepared SQL query 2017-01-10 12:28:38,986 [DEBUG] - Executing prepared SQL statement 2017-01-10 12:28:38,986 [DEBUG] - Fetching JDBC Connection from DataSource 2017-01-10 12:28:38,988 [DEBUG] - Returning JDBC Connection to DataSource 2017-01-10 12:28:38,988 [DEBUG] - Returning cached instance of singleton bean 'dataSource' 2017-01-10 12:28:38,988 [DEBUG] - Executing prepared SQL query 2017-01-10 12:28:38,988 [DEBUG] - Executing prepared SQL statement 2017-01-10 12:28:38,989 [DEBUG] - Fetching JDBC Connection from DataSource 2017-01-10 12:28:38,989 [DEBUG] - Returning JDBC Connection to DataSource 2017-01-10 12:28:38,989 [DEBUG] - Returning cached instance of singleton bean 'dataSource' 2017-01-10 12:28:38,990 [DEBUG] - Executing prepared SQL query 2017-01-10 12:28:38,990 [DEBUG] - Executing prepared SQL statement 2017-01-10 12:28:38,990 [DEBUG] - Fetching JDBC Connection from DataSource 2017-01-10 12:28:38,991 [DEBUG] - Returning JDBC Connection to DataSource 2017-01-10 12:28:38,997 [DEBUG] - Returning JDBC Connection to DataSource 2017-01-10 12:28:39,954 [DEBUG] - Returning cached instance of singleton bean 'dataSource' 2017-01-10 12:28:39,954 [DEBUG] - Executing prepared SQL query 2017-01-10 12:28:39,954 [DEBUG] - Executing prepared SQL statement 2017-01-10 12:28:39,954 [DEBUG] - Fetching JDBC Connection from DataSource 2017-01-10 12:28:39,956 [DEBUG] - Returning JDBC Connection to DataSource 2017-01-10 12:28:39,956 [DEBUG] - Returning cached instance of singleton bean 'dataSource' 2017-01-10 12:28:39,957 [DEBUG] - Executing prepared SQL query 2017-01-10 12:28:39,957 [DEBUG] - Executing prepared SQL statement 2017-01-10 12:28:39,957 [DEBUG] - Fetching JDBC Connection from DataSource 2017-01-10 12:28:39,958 [DEBUG] - Returning JDBC Connection to DataSource 2017-01-10 12:28:56,193 [DEBUG] - batch acquisition of 0 triggers 2017-01-10 12:28:56,193 [DEBUG] - batch acquisition of 0 triggers 2017-01-10 12:28:57,007 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:28:57,007 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:29:17,013 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:29:17,013 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:29:26,167 [DEBUG] - batch acquisition of 0 triggers 2017-01-10 12:29:26,167 [DEBUG] - batch acquisition of 0 triggers 2017-01-10 12:29:37,015 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:29:37,015 [DEBUG] - ClusterManager: Check-in complete. 
2017-01-10 12:29:55,218 [DEBUG] - batch acquisition of 0 triggers 2017-01-10 12:29:55,218 [DEBUG] - batch acquisition of 0 triggers 2017-01-10 12:29:56,051 [DEBUG] - >>>> Endpoint[bean://q?method=getAllRoles%28%24%7Bheader.owner%7D%29] Exchange[HttpMessage@0x77233160] 2017-01-10 12:29:56,051 [DEBUG] - >>>> Endpoint[bean://q?method=getAllRoles%28%24%7Bheader.owner%7D%29] Exchange[HttpMessage@0x77233160] 2017-01-10 12:29:56,051 [DEBUG] - Returning cached instance of singleton bean 'q' 2017-01-10 12:29:56,051 [DEBUG] - Returning cached instance of singleton bean 'dataSource' 2017-01-10 12:29:56,051 [DEBUG] - Executing SQL query [] 2017-01-10 12:29:56,051 [DEBUG] - Fetching JDBC Connection from DataSource 2017-01-10 12:29:56,052 [DEBUG] - Returning JDBC Connection to DataSource 2017-01-10 12:29:56,052 [DEBUG] - Setting bean invocation result on the OUT message: [] 2017-01-10 12:29:56,052 [DEBUG] - Setting bean invocation result on the OUT message: [] 2017-01-10 12:29:56,053 [DEBUG] - Processing onAfterRoute: Exchange[Message: [ ]] 2017-01-10 12:29:56,053 [DEBUG] - Processing onAfterRoute: Exchange[Message: [ ]] 2017-01-10 12:29:56,053 [DEBUG] - Returning cached instance of singleton bean 'dataSource' 2017-01-10 12:29:56,053 [DEBUG] - Executing prepared SQL query 2017-01-10 12:29:56,053 [DEBUG] - Executing prepared SQL statement [] 2017-01-10 12:29:56,053 [DEBUG] - Fetching JDBC Connection from DataSource 2017-01-10 12:29:56,054 [DEBUG] - Returning JDBC Connection to DataSource 2017-01-10 12:29:56,054 [DEBUG] - Setting bean invocation result on the OUT message: 2 2017-01-10 12:29:56,054 [DEBUG] - Setting bean invocation result on the OUT message: 2 2017-01-10 12:29:56,055 [DEBUG] - Streaming response in chunked mode with buffer size 8192 2017-01-10 12:29:56,055 [DEBUG] - Streaming response in chunked mode with buffer size 8192 2017-01-10 12:29:57,020 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:29:57,020 [DEBUG] - ClusterManager: Check-in complete. 
2017-01-10 12:30:16,234 [DEBUG] - >>>> Endpoint[bean://f?method=getFilesAndFolder%28%24%7Bheader.category%7D%2C%24%7Bheader.owner%7D%2C%24%7Bheader.archiveMode%7D%2C%24%7Bheader.timezone%7D%29] Exchange[HttpMessage@0x79501044] 2017-01-10 12:30:16,234 [DEBUG] - >>>> Endpoint[bean://f?method=getFilesAndFolder%28%24%7Bheader.category%7D%2C%24%7Bheader.owner%7D%2C%24%7Bheader.archiveMode%7D%2C%24%7Bheader.timezone%7D%29] Exchange[HttpMessage@0x79501044] 2017-01-10 12:30:16,234 [DEBUG] - Returning cached instance of singleton bean 'f' 2017-01-10 12:30:16,234 [DEBUG] - Returning cached instance of singleton bean 'dataSource' 2017-01-10 12:30:16,234 [DEBUG] - Executing prepared SQL query 2017-01-10 12:30:16,234 [DEBUG] - Executing prepared SQL statement [] 2017-01-10 12:30:16,234 [DEBUG] - Fetching JDBC Connection from DataSource 2017-01-10 12:30:16,235 [DEBUG] - Returning JDBC Connection to DataSource 2017-01-10 12:30:16,235 [DEBUG] - Returning cached instance of singleton bean 'dataSource' 2017-01-10 12:30:16,235 [DEBUG] - Executing prepared SQL query 2017-01-10 12:30:16,236 [DEBUG] - Executing prepared SQL statement 2017-01-10 12:30:16,236 [DEBUG] - Fetching JDBC Connection from DataSource 2017-01-10 12:30:16,239 [DEBUG] - Returning JDBC Connection to DataSource 2017-01-10 12:30:16,239 [DEBUG] - Setting bean invocation result on the OUT message: [Offices1/xmlDATASETSAdminPVT1Admin2017-01-09 16:30:50.012017-01-09 16:30:50.0Admin] 2017-01-10 12:30:16,239 [DEBUG] - Setting bean invocation result on the OUT message: [Offices1/xmlDATASETSAdminPVT1Admin2017-01-09 16:30:50.012017-01-09 16:30:50.0Admin] 2017-01-10 12:30:16,240 [DEBUG] - Processing onAfterRoute: Exchange[Message: [ ]] 2017-01-10 12:30:16,240 [DEBUG] - Processing onAfterRoute: Exchange[Message: []] 2017-01-10 12:30:16,240 [DEBUG] - Returning cached instance of singleton bean 'dataSource' 2017-01-10 12:30:16,240 [DEBUG] - Executing prepared SQL query 2017-01-10 12:30:16,240 [DEBUG] - Executing prepared SQL statement [] 2017-01-10 12:30:16,240 [DEBUG] - Fetching JDBC Connection from DataSource 2017-01-10 12:30:16,241 [DEBUG] - Returning JDBC Connection to DataSource 2017-01-10 12:30:16,241 [DEBUG] - Setting bean invocation result on the OUT message: 2 2017-01-10 12:30:16,241 [DEBUG] - Setting bean invocation result on the OUT message: 2 2017-01-10 12:30:16,242 [DEBUG] - Streaming response in chunked mode with buffer size 8192 2017-01-10 12:30:16,242 [DEBUG] - Streaming response in chunked mode with buffer size 8192 2017-01-10 12:30:17,020 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:30:17,020 [DEBUG] - ClusterManager: Check-in complete. 
2017-01-10 12:30:22,175 [DEBUG] - batch acquisition of 0 triggers 2017-01-10 12:30:22,175 [DEBUG] - batch acquisition of 0 triggers 2017-01-10 12:30:31,056 [DEBUG] - >>>> Endpoint[bean://u?method=getAllUsers%28%24%7Bheader.userName%7D%29] Exchange[HttpMessage@0x272487ca] 2017-01-10 12:30:31,056 [DEBUG] - >>>> Endpoint[bean://u?method=getAllUsers%28%24%7Bheader.userName%7D%29] Exchange[HttpMessage@0x272487ca] 2017-01-10 12:30:31,056 [DEBUG] - Returning cached instance of singleton bean 'u' 2017-01-10 12:30:31,056 [DEBUG] - Returning cached instance of singleton bean 'dataSource' 2017-01-10 12:30:31,057 [DEBUG] - Executing prepared SQL query 2017-01-10 12:30:31,057 [DEBUG] - Executing prepared SQL statement [select * from ai_user] 2017-01-10 12:30:31,057 [DEBUG] - Fetching JDBC Connection from DataSource 2017-01-10 12:30:31,059 [DEBUG] - Returning JDBC Connection to DataSource 2017-01-10 12:30:31,059 [DEBUG] - Setting bean invocation result on the OUT message: [com.b.UserBean@7d6cbe02] 2017-01-10 12:30:31,059 [DEBUG] - Setting bean invocation result on the OUT message: [com.b.UserBean@7d6cbe02] 2017-01-10 12:30:31,061 [DEBUG] - Processing onAfterRoute: Exchange[Message: []] 2017-01-10 12:30:31,061 [DEBUG] - Processing onAfterRoute: Exchange[Message: []] 2017-01-10 12:30:31,063 [DEBUG] - Returning cached instance of singleton bean 'dataSource' 2017-01-10 12:30:31,063 [DEBUG] - Executing prepared SQL query 2017-01-10 12:30:31,063 [DEBUG] - Executing prepared SQL statement 2017-01-10 12:30:31,063 [DEBUG] - Fetching JDBC Connection from DataSource 2017-01-10 12:30:31,064 [DEBUG] - Returning JDBC Connection to DataSource 2017-01-10 12:30:31,064 [DEBUG] - Setting bean invocation result on the OUT message: 2 2017-01-10 12:30:31,064 [DEBUG] - Setting bean invocation result on the OUT message: 2 2017-01-10 12:30:31,065 [DEBUG] - Streaming response in chunked mode with buffer size 8192 2017-01-10 12:30:31,065 [DEBUG] - Streaming response in chunked mode with buffer size 8192 2017-01-10 12:30:37,042 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:30:37,042 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:30:50,462 [DEBUG] - batch acquisition of 0 triggers 2017-01-10 12:30:50,462 [DEBUG] - batch acquisition of 0 triggers 2017-01-10 12:30:57,034 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:30:57,034 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:31:17,031 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:31:17,031 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:31:18,901 [DEBUG] - batch acquisition of 0 triggers 2017-01-10 12:31:18,901 [DEBUG] - batch acquisition of 0 triggers 2017-01-10 12:31:37,035 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:31:37,035 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:31:43,373 [DEBUG] - batch acquisition of 0 triggers 2017-01-10 12:31:43,373 [DEBUG] - batch acquisition of 0 triggers 2017-01-10 12:31:57,033 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:31:57,033 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:32:07,289 [DEBUG] - batch acquisition of 0 triggers 2017-01-10 12:32:07,289 [DEBUG] - batch acquisition of 0 triggers 2017-01-10 12:32:17,033 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:32:17,033 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:32:34,610 [DEBUG] - batch acquisition of 0 triggers 2017-01-10 12:32:34,610 [DEBUG] - batch acquisition of 0 triggers 2017-01-10 12:32:37,037 [DEBUG] - ClusterManager: Check-in complete. 
2017-01-10 12:32:37,037 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:32:57,038 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:32:57,038 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:32:59,451 [DEBUG] - batch acquisition of 0 triggers 2017-01-10 12:32:59,451 [DEBUG] - batch acquisition of 0 triggers 2017-01-10 12:33:17,039 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:33:17,039 [DEBUG] - ClusterManager: Check-in complete. 2017-01-10 12:33:22,677 [DEBUG] - batch acquisition of 0 triggers 2017-01-10 12:33:22,677 [DEBUG] - batch acquisition of 0 triggers
java, apache-camel, redhat
2
84
0
https://stackoverflow.com/questions/41549032/performance-degradation-with-rhel-6-5-and-apache-camel-2-15-0
41,352,129
Keycloak port 39008 and port scan
I did a port scan using nmap on my machine running Keycloak and was surprised to find that port 39008 was open. According to nmap : 39008/tcp open unknown Using the following two commands I found that it is keycloak: netstat -tulpn | grep 39008 tcp 0 0 0.0.0.0:39008 0.0.0.0:* LISTEN 17270/java ps -Af | grep 17270 me 17270 17223 0 Dec22 ? 00:13:05 java ...-Djboss.home.dir=/.../keycloak-2.4.0.Final I cannot find any reference to this port in the config or the docs. What is this port used for?
Keycloak port 39008 and port scan I did a port scan using nmap on my machine running Keycloak and was surprised to find that port 39008 was open. According to nmap : 39008/tcp open unknown Using the following two commands I found that it is keycloak: netstat -tulpn | grep 39008 tcp 0 0 0.0.0.0:39008 0.0.0.0:* LISTEN 17270/java ps -Af | grep 17270 me 17270 17223 0 Dec22 ? 00:13:05 java ...-Djboss.home.dir=/.../keycloak-2.4.0.Final I cannot find any reference to this port in the config or the docs. What is this port used for?
jboss, redhat, keycloak
2
241
0
https://stackoverflow.com/questions/41352129/keycloak-port-39008-and-port-scan
41,123,602
Redhat subscription issue
Hi, I am installing OpenShift 3.3 (30-day trial version) on RHEL 7.3 and have registered the system using subscription-manager. I was able to attach the pool id, but suddenly it is giving this: "No available subscription pools to list", though in the Red Hat portal I can see my subscription is still active. Any idea why this happens? I have faced this issue several times with Red Hat subscriptions. FAILED! => {"changed": false, "failed": true, "msg": "No OpenShift version available, please ensure your systems are fully registered and have access to appropriate yum repositories."} Any help?
Redhat subscription issue Hi, I am installing OpenShift 3.3 (30-day trial version) on RHEL 7.3 and have registered the system using subscription-manager. I was able to attach the pool id, but suddenly it is giving this: "No available subscription pools to list", though in the Red Hat portal I can see my subscription is still active. Any idea why this happens? I have faced this issue several times with Red Hat subscriptions. FAILED! => {"changed": false, "failed": true, "msg": "No OpenShift version available, please ensure your systems are fully registered and have access to appropriate yum repositories."} Any help?
redhat, rhel7, openshift-enterprise
2
666
1
https://stackoverflow.com/questions/41123602/redhat-subscription-issue
40,436,024
Not able to connect PDO PGSQL with Codeigniter
I have an issue regarding the connection between CodeIgniter & PDO_PGSQL. I have tried to connect on RedHat 6.8 Server OS. My config file: $active_group = 'default'; $query_builder = TRUE; $db['default']['hostname'] = 'pgsql:host=<myip>;dbname=shlydb;'; $db['default']['username'] = 'root'; $db['default']['password'] = '123'; $db['default']['database'] = 'shlydb'; $db['default']['dbdriver'] = 'pdo'; $db['default']['dbprefix'] = ''; $db['default']['pconnect'] = TRUE; $db['default']['db_debug'] = TRUE; $db['default']['cache_on'] = FALSE; $db['default']['cachedir'] = ''; $db['default']['char_set'] = 'utf8'; $db['default']['dbcollat'] = 'utf8_general_ci'; $db['default']['swap_pre'] = ''; $db['default']['autoinit'] = TRUE; $db['default']['stricton'] = FALSE; $db['default']['port'] = 5432; I am also attaching the db_connect function: public function db_connect($persistent = FALSE) { $this->options[PDO::ATTR_PERSISTENT] = $persistent; try { return new PDO($this->dsn, $this->username, $this->password, $this->options); } catch (PDOException $e) { if ($this->db_debug && empty($this->failover)) { $this->display_error($e->getMessage(), '', TRUE); } return FALSE; } } But it shows an error while running: A PHP Error was encountered Severity: Warning Message: PDO::__construct(): SQLSTATE[IM001]: Driver does not support this function: driver does not support setting attributes Filename: pdo/pdo_driver.php Line Number: 133
Not able to connect PDO PGSQL with Codeigniter I have an issue regarding the connection between CodeIgniter & PDO_PGSQL. I have tried to connect on RedHat 6.8 Server OS. My config file: $active_group = 'default'; $query_builder = TRUE; $db['default']['hostname'] = 'pgsql:host=<myip>;dbname=shlydb;'; $db['default']['username'] = 'root'; $db['default']['password'] = '123'; $db['default']['database'] = 'shlydb'; $db['default']['dbdriver'] = 'pdo'; $db['default']['dbprefix'] = ''; $db['default']['pconnect'] = TRUE; $db['default']['db_debug'] = TRUE; $db['default']['cache_on'] = FALSE; $db['default']['cachedir'] = ''; $db['default']['char_set'] = 'utf8'; $db['default']['dbcollat'] = 'utf8_general_ci'; $db['default']['swap_pre'] = ''; $db['default']['autoinit'] = TRUE; $db['default']['stricton'] = FALSE; $db['default']['port'] = 5432; I am also attaching the db_connect function: public function db_connect($persistent = FALSE) { $this->options[PDO::ATTR_PERSISTENT] = $persistent; try { return new PDO($this->dsn, $this->username, $this->password, $this->options); } catch (PDOException $e) { if ($this->db_debug && empty($this->failover)) { $this->display_error($e->getMessage(), '', TRUE); } return FALSE; } } But it shows an error while running: A PHP Error was encountered Severity: Warning Message: PDO::__construct(): SQLSTATE[IM001]: Driver does not support this function: driver does not support setting attributes Filename: pdo/pdo_driver.php Line Number: 133
codeigniter, pdo, server, redhat
2
641
1
https://stackoverflow.com/questions/40436024/not-able-to-connect-pdo-pgsql-with-codeigniter
39,869,647
How to use the same environment as the one I am using in cron jobs?
I've read a lot of threads on crontab and env, but I still cannot set it right. I used env > env_setting because I need to run with the same env settings and the same bash, so in crontab -e I have: */1 * * * * env - cat /path/to/env_setting /bin/bash ; /bin/bash /path/to/program.sh But it doesn't work. How do I use the same environment as the one I am using in cron jobs? P.S. I'm using Red Hat. Edit: I tried the following in program.sh: env >> temp.log 2>&1 env - cat /path/to/env_setting env >> temp.log 2>&1 But the 2 env outputs in temp.log are exactly the same. It didn't apply the env_setting.
How to use the same environment as the one I am using in cron jobs? I've read a lot of threads on crontab and env, but I still cannot set it right. I used env > env_setting because I need to run with the same env settings and the same bash, so in crontab -e I have: */1 * * * * env - cat /path/to/env_setting /bin/bash ; /bin/bash /path/to/program.sh But it doesn't work. How do I use the same environment as the one I am using in cron jobs? P.S. I'm using Red Hat. Edit: I tried the following in program.sh: env >> temp.log 2>&1 env - cat /path/to/env_setting env >> temp.log 2>&1 But the 2 env outputs in temp.log are exactly the same. It didn't apply the env_setting.
linux, bash, unix, cron, redhat
2
268
1
https://stackoverflow.com/questions/39869647/how-to-use-the-same-environment-as-the-one-i-am-using-in-cron-jobs
39,377,451
args python parser, a whitespace and Spark
I have this code in foo.py : from argparse import ArgumentParser parser = ArgumentParser() parser.add_argument('--label', dest='label', type=str, default=None, required=True, help='label') args = parser.parse_args() and when I execute: spark-submit --master yarn --deploy-mode cluster foo.py --label 106466153-Gateway Arch I get this error at Stdout: usage: foo.py [-h] --label LABEL foo.py: error: unrecognized arguments: Arch Any idea(s) please? Attempts: --label "106466153-Gateway Arch" --label 106466153-Gateway\ Arch --label "106466153-Gateway\ Arch" --label="106466153-Gateway Arch" --label 106466153-Gateway\\\ Arch --label 106466153-Gateway\\\\\\\ Arch All attempts produce the same error. I am using Red Hat Enterprise Linux Server release 6.4 (Santiago).
args python parser, a whitespace and Spark I have this code in foo.py : from argparse import ArgumentParser parser = ArgumentParser() parser.add_argument('--label', dest='label', type=str, default=None, required=True, help='label') args = parser.parse_args() and when I execute: spark-submit --master yarn --deploy-mode cluster foo.py --label 106466153-Gateway Arch I get this error at Stdout: usage: foo.py [-h] --label LABEL foo.py: error: unrecognized arguments: Arch Any idea(s) please? Attempts: --label "106466153-Gateway Arch" --label 106466153-Gateway\ Arch --label "106466153-Gateway\ Arch" --label="106466153-Gateway Arch" --label 106466153-Gateway\\\ Arch --label 106466153-Gateway\\\\\\\ Arch All attempts produce the same error. I am using Red Hat Enterprise Linux Server release 6.4 (Santiago).
python, linux, apache-spark, io, redhat
2
343
1
https://stackoverflow.com/questions/39377451/args-python-parser-a-whitespace-and-spark
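A note on the argparse/Spark question above: argparse itself handles a space in the value as long as the value reaches foo.py as a single argv entry, so "unrecognized arguments: Arch" indicates the label was already split before the parser saw it; that points at how spark-submit forwards application arguments to the YARN container in cluster mode rather than at the Python code. A small sketch that reproduces both cases locally:

from argparse import ArgumentParser

parser = ArgumentParser()
parser.add_argument('--label', dest='label', type=str, default=None, required=True, help='label')

# Value delivered as one argv entry: parses fine.
print(parser.parse_args(['--label', '106466153-Gateway Arch']).label)

# Value delivered as two argv entries: reproduces the reported error.
# parser.parse_args(['--label', '106466153-Gateway', 'Arch'])
#   -> error: unrecognized arguments: Arch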
39,149,872
How to get "g++ -mx32" to work on RHEL 7.2
I am new to x86_64, but forced to use it because RedHat dropped its 32-bit OS support in RHEL 7.x. I have to compile a lot of code, and am not ready to jump to x64 yet (because I do not need 64-bit addresses and do not want to face all the related porting issues). So I have considered using -m32 and -mx32, and decided that -mx32 is the best route for me. However, while -m32 works fine on my build machine, when I use -mx32, I get this error: In file included from /usr/include/features.h:399:0, from /usr/include/string.h:25, from zz.cpp:1: /usr/include/gnu/stubs.h:13:28: fatal error: gnu/stubs-x32.h: No such file or directory # include <gnu/stubs-x32.h> ^ compilation terminated. I searched the web for solutions and some links indicate that I have to install some mysterious "multilib" rpms for g++ and gcc; however, I cannot find these anywhere. Others suggest that I have to install Linux in x32 mode and build libgcc for x32, which sounds extreme. Any ideas or leads? Did someone actually try g++ -mx32? Maybe it is not even supported on the RH platform... Thanks! P.S. In order to get the "-m32" option to work I had to install: yum install glibc-devel.i686 libgcc.i686 libstdc++-devel.i686 ncurses-devel.i686 This one fails (yum cannot find these RPMs) - allegedly these are required for -mx32 to work: yum install gcc-multilib g++-multilib :(
How to get "g++ -mx32" to work on RHEL 7.2 I am new to x86_64, but forced to use it because RedHat dropped its 32-bit OS support in RHEL 7.x. I have to compile a lot of code, and am not ready to jump to x64 yet (because I do not need 64-bit addresses and do not want to face all the related porting issues). So I have considered using -m32 and -mx32, and decided that -mx32 is the best route for me. However, while -m32 works fine on my build machine, when I use -mx32, I get this error: In file included from /usr/include/features.h:399:0, from /usr/include/string.h:25, from zz.cpp:1: /usr/include/gnu/stubs.h:13:28: fatal error: gnu/stubs-x32.h: No such file or directory # include <gnu/stubs-x32.h> ^ compilation terminated. I searched the web for solutions and some links indicate that I have to install some mysterious "multilib" rpms for g++ and gcc; however, I cannot find these anywhere. Others suggest that I have to install Linux in x32 mode and build libgcc for x32, which sounds extreme. Any ideas or leads? Did someone actually try g++ -mx32? Maybe it is not even supported on the RH platform... Thanks! P.S. In order to get the "-m32" option to work I had to install: yum install glibc-devel.i686 libgcc.i686 libstdc++-devel.i686 ncurses-devel.i686 This one fails (yum cannot find these RPMs) - allegedly these are required for -mx32 to work: yum install gcc-multilib g++-multilib :(
c++, g++, redhat, 32bit-64bit, gnu
2
1,786
1
https://stackoverflow.com/questions/39149872/how-to-get-g-mx32-to-work-on-rhel-7-2
38,576,638
Transfer i/o files & invoke scripts between 2 applications running in different Red Hat Linux Servers
Issue Background: Moving a Java/J2EE application from a dedicated RedHat Linux server to a cloud RedHat Linux server. I am analysing the batch processing jobs involved in this application to implement similar processing in the cloud environment. Current Approach: We have 2 applications, App1 & App2, both on the same RedHat Linux server. Both applications have shared directories on the same server. Also, App1 can call shell scripts in App2's directory to get some job done. App2's process: The external system sends an input file (.DAT) to App2 via NDM jobs. The received input file (.DAT) will be placed in App2's input file directory. Process the records in the file using a Java/J2EE program/component. Generate the output file. Place it in App1's shared directory. App1 has a filewatcher pointed to this directory to consume this file. Upcoming approach: App2 will be moved to a cloud Red Hat Linux server. App2 will be running on at least 2 nodes. Challenges: The external system job still points to the same old directory on the non-cloud Linux server. After processing, the output file must be in App1's shared directory. Expectation: App2's process running in the cloud is expected to read & process this file. I request you all to suggest the best approach for this requirement. 1) Can we have FTP or a REST webservice to read the input file from the non-cloud Linux server? 2) App1 has a business requirement to call shell scripts in App2. How can we provide a service to call App2's shell script located on the cloud server? I am new to cloud. Please excuse me if my questions are irrelevant or trivial. Thank You In Advance.
Transfer i/o files & invoke scripts between 2 applications running in different Red Hat Linux Servers Issue Background: Moving a Java/J2EE application from a dedicated RedHat Linux server to a cloud RedHat Linux server. I am analysing the batch processing jobs involved in this application to implement similar processing in the cloud environment. Current Approach: We have 2 applications, App1 & App2, both on the same RedHat Linux server. Both applications have shared directories on the same server. Also, App1 can call shell scripts in App2's directory to get some job done. App2's process: The external system sends an input file (.DAT) to App2 via NDM jobs. The received input file (.DAT) will be placed in App2's input file directory. Process the records in the file using a Java/J2EE program/component. Generate the output file. Place it in App1's shared directory. App1 has a filewatcher pointed to this directory to consume this file. Upcoming approach: App2 will be moved to a cloud Red Hat Linux server. App2 will be running on at least 2 nodes. Challenges: The external system job still points to the same old directory on the non-cloud Linux server. After processing, the output file must be in App1's shared directory. Expectation: App2's process running in the cloud is expected to read & process this file. I request you all to suggest the best approach for this requirement. 1) Can we have FTP or a REST webservice to read the input file from the non-cloud Linux server? 2) App1 has a business requirement to call shell scripts in App2. How can we provide a service to call App2's shell script located on the cloud server? I am new to cloud. Please excuse me if my questions are irrelevant or trivial. Thank You In Advance.
jakarta-ee, cloud, redhat, restful-architecture, devops
2
84
1
https://stackoverflow.com/questions/38576638/transfer-i-o-files-invoke-scripts-betwen-2-applications-running-in-different-r
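For the file-transfer question above, a hedged sketch of one common pattern: pull the .DAT file over SSH/SFTP and trigger the remote shell script over SSH. The host names, paths and key setup below are assumptions, not details from the question.

scp legacyhost:/app2/input/*.DAT /opt/app2/input/    # a cloud node pulls input from the non-cloud server
ssh cloudnode '/opt/app2/scripts/process.sh'         # App1 invokes App2's script on the cloud server
# once App2 runs on 2+ nodes, a REST endpoint or a message queue usually replaces raw SSH calls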
37,025,067
RJDBC connection unreliable
I'm trying to run an Rscript on a Red Hat Linux server. The Rscript connects and sends a query to an Oracle DB, using the methods dbConnect & dbSendQuery provided by the package "RJDBC". I have tried connecting many times, and failed in the majority of them when the script attempts to call the dbConnect method to connect. When I do fail, I get the following error: Loading required package: RJDBC Loading required package: methods Loading required package: DBI Loading required package: rJava [1] "Driver is created. Establishing Connection" #RJDBC driver called. Error in .jcall("java/sql/DriverManager", "Ljava/sql/Connection;", "getConnection", : ignoring SIGPIPE signal Calls: dbConnect -> dbConnect -> .local -> .jcall -> .External Execution halted What baffles me is that I have seen instances where the connection did get established, after which the rest of the script runs successfully. What's more, in RStudio, which is installed on the server, the connection is always successful. Only when I run the same script from the command line do I observe the connection failure. I'm really lost as to where I can begin to find what's wrong. Any advice would be most appreciated.
RJDBC connection unreliable I'm trying to run an Rscript on a Red Hat Linux server. The Rscript connects and sends a query to an Oracle DB, using the methods dbConnect & dbSendQuery provided by the package "RJDBC". I have tried connecting many times, and failed in the majority of them when the script attempts to call the dbConnect method to connect. When I do fail, I get the following error: Loading required package: RJDBC Loading required package: methods Loading required package: DBI Loading required package: rJava [1] "Driver is created. Establishing Connection" #RJDBC driver called. Error in .jcall("java/sql/DriverManager", "Ljava/sql/Connection;", "getConnection", : ignoring SIGPIPE signal Calls: dbConnect -> dbConnect -> .local -> .jcall -> .External Execution halted What baffles me is that I have seen instances where the connection did get established, after which the rest of the script runs successfully. What's more, in RStudio, which is installed on the server, the connection is always successful. Only when I run the same script from the command line do I observe the connection failure. I'm really lost as to where I can begin to find what's wrong. Any advice would be most appreciated.
r, linux, redhat, rjdbc
2
394
0
https://stackoverflow.com/questions/37025067/rjdbc-connection-unreliable
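A hedged diagnostic sketch for the RJDBC question above: since only command-line runs fail, the first step is simply comparing the environment the shell gives the script with what RStudio Server gives it; the script name run_query.R is a placeholder.

env | grep -iE 'java|ld_library|classpath'   # compare against Sys.getenv() output captured inside RStudio
Rscript --vanilla run_query.R                # --vanilla rules out .Rprofile / site-profile differences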
36,957,795
Shell script not working on Red Hat but working fine on Ubuntu
#!/bin/sh # Define your function here Hello () { echo "Hello World" } Hello The script above runs fine on Ubuntu but shows the following error on the Red Hat machine: "syntax error near unexpected token ' { "
Shell script not working on Red Hat but working fine on Ubuntu #!/bin/sh # Define your function here Hello () { echo "Hello World" } Hello The script above runs fine on Ubuntu but shows the following error on the Red Hat machine: "syntax error near unexpected token ' { "
redhat
2
899
1
https://stackoverflow.com/questions/36957795/issue-in-shell-script-not-working-in-redhat-working-fine-in-ubuntu
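For the /bin/sh question above, a hedged check: on RHEL this particular error is commonly caused by Windows (CRLF) line endings in the file rather than by the shell itself; the file name hello.sh is an assumption.

file hello.sh                  # reports "with CRLF line terminators" if that is the cause
sed -i 's/\r$//' hello.sh      # strip the carriage returns (dos2unix hello.sh also works)
bash -n hello.sh               # syntax-check the script without executing it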
35,164,985
Deploying meteor application throws MongoError: Authentication failed
The meteor app I'm currently working on should be deployed on an in-house RedHat server. I used meteor build <outputdir> --architecture os.linux.86_64 to create the bundle and uploaded it to the target server, which has mongodb 3.2 and nodejs 0.10.40 installed. The server runs a local mongodb on port 27017 with the user meteor and the database myapp . User and db were created in the following manner. use myapp db.createUser( { user: "meteor", pwd: "meteor", roles: [ "readWrite" ] } ) Continuing, I did what the README asked me to do and ran the following commands in my untarred app bundle. $ (cd programs/server && npm install) $ export MONGO_URL='mongodb://meteor:meteor@127.0.0.1:27017/myapp' When I first exported the MONGO_URL I typed the port wrong and got a mongo error: auth error exception after running node main.js . After correcting my mistake the exception changed to Mongo Error: Authentication failed. Yet, it is possible to connect without a problem to the mongo shell by typing mongo -u meteor -p meteor --host 127.0.0.1 --port 27017 . Did anyone have the same problem and find a solution for it?
Deploying meteor application throws MongoError: Authentication failed The meteor app I'm currently working on should be deployed on an in-house RedHat server. I used meteor build <outputdir> --architecture os.linux.86_64 to create the bundle and uploaded it to the target server, which has mongodb 3.2 and nodejs 0.10.40 installed. The server runs a local mongodb on port 27017 with the user meteor and the database myapp . User and db were created in the following manner. use myapp db.createUser( { user: "meteor", pwd: "meteor", roles: [ "readWrite" ] } ) Continuing, I did what the README asked me to do and ran the following commands in my untarred app bundle. $ (cd programs/server && npm install) $ export MONGO_URL='mongodb://meteor:meteor@127.0.0.1:27017/myapp' When I first exported the MONGO_URL I typed the port wrong and got a mongo error: auth error exception after running node main.js . After correcting my mistake the exception changed to Mongo Error: Authentication failed. Yet, it is possible to connect without a problem to the mongo shell by typing mongo -u meteor -p meteor --host 127.0.0.1 --port 27017 . Did anyone have the same problem and find a solution for it?
mongodb, meteor, deployment, redhat
2
430
0
https://stackoverflow.com/questions/35164985/deploying-meteor-application-throws-mongoerror-authentication-failed
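A hedged sketch for the Meteor/Mongo question above: MongoDB 3.2 creates SCRAM-SHA-1 credentials by default, while the driver bundled with older Meteor/Node 0.10 builds may still attempt MONGODB-CR, which can produce exactly this "Authentication failed" even though the mongo shell connects fine. The database and user names come from the question; the diagnosis itself is an assumption to verify.

mongo admin --eval 'db.system.version.findOne({_id:"authSchema"})'   # currentVersion 5 = SCRAM, 3 = MONGODB-CR
export MONGO_URL='mongodb://meteor:meteor@127.0.0.1:27017/myapp?authSource=myapp'   # make the auth database explicit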
34,256,397
Composer behind Proxy with Authentication
I'm using RHEL 6 behind a company proxy. I've set up the env variables (in csh) as follows: setenv http_proxy "[URL] setenv https_proxy "[URL] When running composer, I get the following error: Failed to enable crypto The proxy settings work fine with curl or wget, but fail with composer. Is this a bug with composer? Is there a non ad-hoc way of making this work?
Composer behind Proxy with Authentication I'm using RHEL 6 behind a company proxy. I've set up the env variables (in csh) as follows: setenv http_proxy "[URL] setenv https_proxy "[URL] When running composer, I get the following error: Failed to enable crypto The proxy settings work fine with curl or wget, but fail with composer. Is this a bug with composer? Is there a non ad-hoc way of making this work?
php, proxy, composer-php, redhat, rhel
2
912
1
https://stackoverflow.com/questions/34256397/composer-behind-proxy-with-authentication
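For the Composer-behind-proxy question above, a hedged first step: composer diagnose reports the proxy and TLS settings Composer actually sees, and some authenticating proxies only cooperate when Composer is told not to send full-URI requests. The environment variable is a documented Composer switch; whether it applies to this particular proxy is an assumption.

setenv HTTPS_PROXY_REQUEST_FULLURI false           # csh syntax, matching the question
composer diagnose                                  # shows the proxy, TLS and connectivity checks Composer runs
php -r 'var_dump(openssl_get_cert_locations());'   # PHP 5.6+ only: confirm which CA bundle PHP is using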
32,301,221
rsync: @ERROR: auth failed on module tomcat_backup
I just can't figure out what's going on with my RSync. I'm running RSync on RHEL5, ip = xx.xx.xx.97. It's getting files from RHEL5, ip = xx.xx.xx.96. Here's what the log (which I specified on the RSync command line) shows on xx.97 (the one requesting the files): (local time) 2015/08/30 13:40:01 [17353] @ERROR: auth failed on module tomcat_backup 2015/08/30 13:40:01 [17353] rsync error: error starting client-server protocol (code 5) at main.c(1530) [receiver=3.0.6] Here's what the log(which is specified in the rsyncd.conf file) shows on xx.96 (the one supplying the files): (UTC time) 2015/08/30 07:40:01 [8836] name lookup failed for xx.xx.xx.97: Name or service not known 2015/08/30 07:40:01 [8836] connect from UNKNOWN (xx.xx.xx.97) 2015/08/30 07:40:01 [8836] auth failed on module tomcat_backup from unknown (xx.xx.xx.97): password mismatch Here's the actual rsync.sh command called from xx.xx.xx.97 (the requester): export RSYNC_PASSWORD=rsyncclient rsync -havz --log-file=/usr/local/bin/RSync/test.log rsync://rsyncclient@xx.xx.xx.96/tomcat_backup/ProcessSniffer/ /usr/local/bin/ProcessSniffer Here's the rsyncd.conf on xx.xx.xx.97: lock file = /var/run/rsync.lock log file = /var/log/rsyncd.log pid file = /var/run/rsyncd.pid [files] name = tomcat_backup path = /usr/local/bin/ comment = The copy/backup of tomcat from .96 uid = tomcat gid = tomcat read only = no list = yes auth users = rsyncclient secrets file = /etc/rsyncd.secrets hosts allow = xx.xx.xx.96/255.255.255.0 Here's the rsyncd.secrets on xx.xx.xx.97: files:files Here's the rsyncd.conf on xx.xx.xx.96 (the supplier of files): Note: there is a 'cwrsync' (Windows version of rsync) successfully calling for files also (xx.xx.xx.100) Note: yes, there is the possibility of xx.96 requesting files from xx.97. However, this is NOT actually happening. It's commented out of the init.d mechanism. lock file = /var/run/rsync.lock log file = /var/log/rsync.log pid file = /var/run/rsync.pid strict modes = false [files] name = tomcat_backup path = /usr/local/bin comment = The copy/backup of tomcat from xx.97 uid = tomcat gid = tomcat read only = no list = yes auth users = rsyncclient secrets file = /etc/rsyncd.secrets hosts allow = xx.xx.xx.97/255.255.255.0, xx.xx.xx.100/255.255.255.0 Here's the rsyncd.secrets on xx.xx.xx.97: files:files
rsync: @ERROR: auth failed on module tomcat_backup I just can't figure out what's going on with my RSync. I'm running RSync on RHEL5, ip = xx.xx.xx.97. It's getting files from RHEL5, ip = xx.xx.xx.96. Here's what the log (which I specified on the RSync command line) shows on xx.97 (the one requesting the files): (local time) 2015/08/30 13:40:01 [17353] @ERROR: auth failed on module tomcat_backup 2015/08/30 13:40:01 [17353] rsync error: error starting client-server protocol (code 5) at main.c(1530) [receiver=3.0.6] Here's what the log(which is specified in the rsyncd.conf file) shows on xx.96 (the one supplying the files): (UTC time) 2015/08/30 07:40:01 [8836] name lookup failed for xx.xx.xx.97: Name or service not known 2015/08/30 07:40:01 [8836] connect from UNKNOWN (xx.xx.xx.97) 2015/08/30 07:40:01 [8836] auth failed on module tomcat_backup from unknown (xx.xx.xx.97): password mismatch Here's the actual rsync.sh command called from xx.xx.xx.97 (the requester): export RSYNC_PASSWORD=rsyncclient rsync -havz --log-file=/usr/local/bin/RSync/test.log rsync://rsyncclient@xx.xx.xx.96/tomcat_backup/ProcessSniffer/ /usr/local/bin/ProcessSniffer Here's the rsyncd.conf on xx.xx.xx.97: lock file = /var/run/rsync.lock log file = /var/log/rsyncd.log pid file = /var/run/rsyncd.pid [files] name = tomcat_backup path = /usr/local/bin/ comment = The copy/backup of tomcat from .96 uid = tomcat gid = tomcat read only = no list = yes auth users = rsyncclient secrets file = /etc/rsyncd.secrets hosts allow = xx.xx.xx.96/255.255.255.0 Here's the rsyncd.secrets on xx.xx.xx.97: files:files Here's the rsyncd.conf on xx.xx.xx.96 (the supplier of files): Note: there is a 'cwrsync' (Windows version of rsync) successfully calling for files also (xx.xx.xx.100) Note: yes, there is the possibility of xx.96 requesting files from xx.97. However, this is NOT actually happening. It's commented out of the init.d mechanism. lock file = /var/run/rsync.lock log file = /var/log/rsync.log pid file = /var/run/rsync.pid strict modes = false [files] name = tomcat_backup path = /usr/local/bin comment = The copy/backup of tomcat from xx.97 uid = tomcat gid = tomcat read only = no list = yes auth users = rsyncclient secrets file = /etc/rsyncd.secrets hosts allow = xx.xx.xx.97/255.255.255.0, xx.xx.xx.100/255.255.255.0 Here's the rsyncd.secrets on xx.xx.xx.97: files:files
redhat, rsync
2
11,672
2
https://stackoverflow.com/questions/32301221/rsync-error-auth-failed-on-module-tomcat-backup
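For the rsync "auth failed / password mismatch" question above, a hedged sketch: the daemon-side secrets file must pair the "auth users" name with the password the client exports in RSYNC_PASSWORD, and it must not be world-readable. The secrets entry below is an assumption based on the credentials shown in the question.

# on the module host (xx.xx.xx.96), /etc/rsyncd.secrets should contain a line like:
#   rsyncclient:rsyncclient
chmod 600 /etc/rsyncd.secrets
rsync -avn rsync://rsyncclient@xx.xx.xx.96/tomcat_backup/ /tmp/probe/   # dry run to re-test authentication only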
32,217,151
hacking gnome 3 from gnome-shell.css file
I wanted to hack GNOME (GNOME 3 to be specific) on Red Hat 7. I wanted to get rid of the top panel in its totality. I'm new to this, so I looked around and went to the gnome-shell.css file. There I found something called "panel", which looks like the only possible place for what I'm thinking is that top bar on the desktop. In here I wrote "display : none", but nothing happens; the top panel is still there. Do I have to get the source code for GNOME and make my modifications from there (I hope not)!
hacking gnome 3 from gnome-shell.css file I wanted to hack GNOME (GNOME 3 to be specific) on Red Hat 7. I wanted to get rid of the top panel in its totality. I'm new to this, so I looked around and went to the gnome-shell.css file. There I found something called "panel", which looks like the only possible place for what I'm thinking is that top bar on the desktop. In here I wrote "display : none", but nothing happens; the top panel is still there. Do I have to get the source code for GNOME and make my modifications from there (I hope not)!
css, linux, redhat, gnome
2
1,096
2
https://stackoverflow.com/questions/32217151/hacking-gnome-3-from-gnome-shell-css-file
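A hedged note for the GNOME question above: gnome-shell only re-reads its theme CSS after the shell itself is restarted, and removing the top bar entirely is usually done with a shell extension rather than by editing gnome-shell.css in place. The commands assume an X11 session.

gnome-shell --replace &                              # or press Alt+F2, type "r", Enter, to reload the shell and its CSS
gsettings get org.gnome.shell enabled-extensions     # extensions such as a "hide top bar" extension are toggled via this key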
31,572,115
how to provide trust (CA) certificate to ldapmodify on RedHat
I'm trying to use an LDAP client (ldapmodify) on Redhat Linux to manipulate the contents of Active Directory, which obliges me to use LDAPS. ldapmodify quits with "Can't contact LDAP server" and the additional info is "Peer certificate issuer is not recognized". Can I specify a trust store to it? If so, how? And what format should it be? Or, perhaps, can I add the CA certificate to the default store? How can I find it? Thanks.
how to provide trust (CA) certificate to ldapmodify on RedHat I'm trying to use an LDAP client (ldapmodify) on Redhat Linux to manipulate the contents of Active Directory, which obliges me to use LDAPS. ldapmodify quits with "Can't contact LDAP server" and the additional info is "Peer certificate issuer is not recognized". Can I specify a trust store to it? If so, how? And what format should it be? Or, perhaps, can I add the CA certificate to the default store? How can I find it? Thanks.
ssl, ldap, redhat
2
305
0
https://stackoverflow.com/questions/31572115/how-to-provide-trust-ca-certificate-to-ldapmodify-on-redhat
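For the ldapmodify/LDAPS question above, a hedged sketch: the OpenLDAP client tools shipped with RHEL read their trust anchor from TLS_CACERT in /etc/openldap/ldap.conf (a PEM file), or per invocation from the LDAPTLS_CACERT environment variable. The certificate path, host and bind DN below are placeholders.

export LDAPTLS_CACERT=/etc/openldap/certs/ad-root-ca.pem    # PEM-encoded CA certificate that signed the DC's cert
ldapmodify -H ldaps://dc.example.com -D "cn=admin,dc=example,dc=com" -W -f change.ldif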
31,402,326
libldap-2.4.so.2: cannot open shared object file: No such file or directory
I was trying to uninstall openldap, but now whatever command I try with yum I get: There was a problem importing one of the Python modules required to run yum. The error leading to this problem was: libldap-2.4.so.2: cannot open shared object file: No such file or directory Please install a package which provides this module, or verify that the module is installed correctly. It's possible that the above module doesn't match the current version of Python, which is: 2.6.6 (r266:84292, May 1 2012, 13:52:17) [GCC 4.4.6 20110731 (Red Hat 4.4.6-3)] If you cannot solve this problem yourself, please go to the yum faq at: [URL] Can anyone tell me how to fix this issue?
libldap-2.4.so.2: cannot open shared object file: No such file or directory I was trying to uninstall openldap, but now whatever command I try with yum I get: There was a problem importing one of the Python modules required to run yum. The error leading to this problem was: libldap-2.4.so.2: cannot open shared object file: No such file or directory Please install a package which provides this module, or verify that the module is installed correctly. It's possible that the above module doesn't match the current version of Python, which is: 2.6.6 (r266:84292, May 1 2012, 13:52:17) [GCC 4.4.6 20110731 (Red Hat 4.4.6-3)] If you cannot solve this problem yourself, please go to the yum faq at: [URL] Can anyone tell me how to fix this issue?
linux, redhat, openldap
2
8,243
0
https://stackoverflow.com/questions/31402326/libldap-2-4-so-2-cannot-open-shared-object-file-no-such-file-or-directory
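A hedged recovery sketch for the broken-yum question above: with yum itself unusable, the missing OpenLDAP client library can be restored with rpm directly, using an openldap RPM fetched on another machine or from the installation media; the exact package file name will differ.

rpm -qf /usr/lib64/libldap-2.4.so.2              # the rpm database still records which package owned the file
rpm -Uvh --force openldap-2.4.*.el6.x86_64.rpm   # reinstall that package without going through yum
ldconfig                                         # refresh the shared-library cache, then retry yum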
31,377,830
make: No rule to make target install. Stop (for openldap)
I just unzipped openldap into the /opt/openldap directory and then tried to run the following commands: make depend make make test make install but I am getting the exceptions below; check the attached screenshot. EDIT: output of the ./configure command EDIT 1: yum install gcc is also giving the exception below
make: No rule to make target install. Stop (for openldap) I just unzipped openldap into the /opt/openldap directory and then tried to run the following commands: make depend make make test make install but I am getting the exceptions below; check the attached screenshot. EDIT: output of the ./configure command EDIT 1: yum install gcc is also giving the exception below
linux, makefile, redhat
2
3,164
1
https://stackoverflow.com/questions/31377830/makeno-rule-to-make-target-install-stop-for-openldap
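For the openldap build question above, a hedged sketch of the usual source-build order: the install target only exists after ./configure has generated the Makefiles, and ./configure itself needs a working C compiler, which is why the failing yum install gcc matters here. The prefix is an assumption.

yum install gcc make              # or: yum groupinstall "Development Tools"
cd /opt/openldap
./configure --prefix=/usr/local
make depend && make && make test && make install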
30,604,042
Deploying meteor app to intranet
I need some help with deploying this Meteor app I made onto our work's intranet server. I bundled the app to make it a Node app and installed node, npm, and mongodb on the intranet server. When I go through setting up the environment variables with export PORT=3000 export MONGO_URL=mongodb://localhost:27017/databasename export ROOT_URL=[URL] npm install node bundle/main.js I get a blank web page. In Apache I set up a virtual host in /etc/httpd/conf/httpd.conf such as <VirtualHost *:80> ServerName servername.dcn ProxyRequests Off ProxyPreserveHost On <Proxy *> Order deny,allow Allow from all </Proxy> ProxyPass /timesheet [URL] </VirtualHost> Any ideas as to why I'm getting just a plain blank page instead of the app? Thanks for any advice.
Deploying meteor app to intranet I need some help with deploying this Meteor app I made onto our work's intranet server. I bundled the app to make it a Node app and installed node, npm, and mongodb on the intranet server. When I go through setting up the environment variables with export PORT=3000 export MONGO_URL=mongodb://localhost:27017/databasename export ROOT_URL=[URL] npm install node bundle/main.js I get a blank web page. In Apache I set up a virtual host in /etc/httpd/conf/httpd.conf such as <VirtualHost *:80> ServerName servername.dcn ProxyRequests Off ProxyPreserveHost On <Proxy *> Order deny,allow Allow from all </Proxy> ProxyPass /timesheet [URL] </VirtualHost> Any ideas as to why I'm getting just a plain blank page instead of the app? Thanks for any advice.
node.js, apache, meteor, redhat
2
397
0
https://stackoverflow.com/questions/30604042/deploying-meteor-app-to-intranet
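A hedged sketch for the blank-page question above: when the app sits behind ProxyPass /timesheet, Meteor also has to know it is served under that path, otherwise the JS/CSS asset URLs it emits point at the wrong place and the page renders empty. The host name comes from the question; sub-path support varies by Meteor version, so this is an assumption to verify.

export ROOT_URL=http://servername.dcn/timesheet   # include the proxied sub-path in ROOT_URL
export PORT=3000
node bundle/main.js
# Apache side: consider proxying WebSocket/DDP traffic too (mod_proxy_wstunnel); otherwise Meteor falls back to long polling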
30,309,293
LSB init service dependency
I have added two services, A and B. B is dependent on A, meaning that if I start B then A should be started automatically if it is not running already. But A is not coming up automatically when I start B. Can you please tell me where I am wrong? I have included the init scripts for both services below. B Init script: #!/bin/bash # Author: Jsingh <jsingh@sandvine.com> # chkconfig: 2345 95 05 # processname: B # config: /usr/local/etc/rc.conf # pidfile: /var/run/B.pid ### BEGIN INIT INFO # Provides: B # Required-Start: $local_fs $network A # Required-Stop: $local_fs $network A # Should-Start: # Should-Stop: # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: start and stop System daemon # Description: ### END INIT INFO A Init Script: #!/bin/bash # Author: Jsingh <jsingh@sandvine.com> # chkconfig: 2345 90 10 # processname: A # config: /usr/local/etc/rc.conf # pidfile: /var/run/A.pid ### BEGIN INIT INFO # Provides: A # Required-Start: $local_fs $network # Required-Stop: $local_fs $network # Should-Start: # Should-Stop: # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: start and stop System daemon # Description: ### END INIT INFO
LSB init service dependency I have added two services, A and B. B is dependent on A, meaning that if I start B then A should be started automatically if it is not running already. But A is not coming up automatically when I start B. Can you please tell me where I am wrong? I have included the init scripts for both services below. B Init script: #!/bin/bash # Author: Jsingh <jsingh@sandvine.com> # chkconfig: 2345 95 05 # processname: B # config: /usr/local/etc/rc.conf # pidfile: /var/run/B.pid ### BEGIN INIT INFO # Provides: B # Required-Start: $local_fs $network A # Required-Stop: $local_fs $network A # Should-Start: # Should-Stop: # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: start and stop System daemon # Description: ### END INIT INFO A Init Script: #!/bin/bash # Author: Jsingh <jsingh@sandvine.com> # chkconfig: 2345 90 10 # processname: A # config: /usr/local/etc/rc.conf # pidfile: /var/run/A.pid ### BEGIN INIT INFO # Provides: A # Required-Start: $local_fs $network # Required-Stop: $local_fs $network # Should-Start: # Should-Stop: # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: start and stop System daemon # Description: ### END INIT INFO
linux, unix, redhat, init, lsb
2
112
0
https://stackoverflow.com/questions/30309293/lsb-init-service-dependency
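For the LSB dependency question above, a hedged note: on RHEL the chkconfig/LSB headers only influence the ordering of scripts at boot and shutdown; SysV init does not resolve Required-Start dependencies when a service is started by hand, so starting B will never pull in A by itself. One common workaround, sketched below with the rest of the function body assumed, is to start A explicitly from B's start() block.

start() {
    service A status >/dev/null 2>&1 || service A start   # make sure A is up before starting B
    # ... then start B's own daemon here
}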
30,018,009
How can I use C++11/14 and target RHEL 5.5?
I'm having difficulty finding any docs that list the valid versions of linux supported by Clang and GCC. Can I use either of them to build C++11/14 source for Red Hat Enterprise Linux 5.5? EDIT: The specific problem that I have been having is that recent versions of the libraries don't work, at least not the binary releases. I was hoping to scope the needed work by finding something like a list of incompatible libraries, so that I didn't have to discover them one by one.
How can I use C++11/14 and target RHEL 5.5? I'm having difficulty finding any docs that list the valid versions of linux supported by Clang and GCC. Can I use either of them to build C++11/14 source for Red Hat Enterprise Linux 5.5? EDIT: The specific problem that I have been having is that recent versions of the libraries don't work, at least not the binary releases. I was hoping to scope the needed work by finding something like a list of incompatible libraries, so that I didn't have to discover them one by one.
c++, linux, c++11, redhat
2
311
0
https://stackoverflow.com/questions/30018009/how-can-i-use-c11-14-and-target-rhel-5-5
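For the C++11/14-on-RHEL-5.5 question above, a hedged sketch: Red Hat's Developer Toolset (DTS) provides a newer g++ whose additional C++ runtime pieces are linked statically, so binaries built with it still run against the old system libstdc++/glibc. Whether a given DTS release officially supports 5.5 rather than a later 5.x minor is an assumption to verify, and full C++14 needs a newer compiler than the C++11-era toolsets.

scl enable devtoolset-2 bash      # the DTS version number here is an assumption for a RHEL 5-era host
g++ -std=c++11 -o app app.cpp     # builds with the DTS g++, runs on the stock RHEL 5 runtime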