Dataset schema (value ranges are over the dataset):
  question_id    int64     82.3k - 79.7M
  title_clean    string    15 - 158 chars
  body_clean     string    62 - 28.5k chars
  full_text      string    95 - 28.5k chars (title_clean and body_clean concatenated)
  tags           string    4 - 80 chars
  score          int64     0 - 1.15k
  view_count     int64     22 - 1.62M
  answer_count   int64     0 - 30
  link           string    58 - 125 chars
24,097,377
PHP - Unable to load dynamic library '/usr/lib64/php/modules/
I am encountering the following errors when I try to run my webpage, which has a PHP script embedded to call a MySQL database:

  PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/dbase.so' - /usr/lib64/php/modules/dbase.so: undefined symbol: core_globals in Unknown on line 0
  PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/mysql.so' - /usr/lib64/php/modules/mysql.so: undefined symbol: executor_globals in Unknown on line 0
  PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/mysqli.so' - /usr/lib64/php/modules/mysqli.so: undefined symbol: executor_globals in Unknown on line 0
  PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/pdo.so' - /usr/lib64/php/modules/pdo.so: undefined symbol: executor_globals in Unknown on line 0
  PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/pdo_mysql.so' - /usr/lib64/php/modules/pdo_mysql.so: undefined symbol: php_pdo_register_driver in Unknown on line 0
  PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/pdo_sqlite.so' - /usr/lib64/php/modules/pdo_sqlite.so: undefined symbol: executor_globals in Unknown on line 0
  [notice] Apache/2.2.3 (Red Hat) configured -- resuming normal operations
  PHP Fatal error: Call to undefined function mysqli_connect() in /var/www/html/index.php on line 11

I have checked my php.ini file and verified that the extension_dir directive references the correct directory, i.e. /usr/lib64/php/modules/. Is anyone able to shed some light on why these errors are occurring?
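A hedged diagnostic sketch (not part of the question): "undefined symbol: core_globals / executor_globals" usually means the .so modules were built against a different PHP build than the one Apache is loading, so comparing where the two came from is a reasonable first check.

  # Standard RPM/PHP tooling; the module path is the one from the question.
  php -v                                      # version of the PHP binary itself
  rpm -q php                                  # PHP package in the RPM database
  rpm -qf /usr/lib64/php/modules/mysqli.so    # which package installed the module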
php, mysql, linux, apache, redhat
3
16,730
1
https://stackoverflow.com/questions/24097377/php-unable-to-load-dynamic-library-usr-lib64-php-modules
16,901,118
How do I enforce RPM requires order
I have spent all day trying various things and made no progress whatsoever. I am compiling an RPM package for my application (MyApp.rpm), for RHEL6 64-bit, which requires a third-party, 32-bit driver package called aksusbd.rpm. Now, aksusbd.rpm in turn requires compatibility mode, provided on RHEL6 by glibc.i686.rpm. So somewhere in my spec file for MyApp.rpm I have:

  MyApp.spec
  Requires: glibc(x86-32)
  Requires: aksusbd >= 1.14

What it does during installation (yum install MyApp) is: it installs aksusbd first, which fails because no 32-bit compatibility is installed. Then, just to tease me, it installs glibc immediately afterwards. So when it's all over I can type yum install aksusbd and it works this time, because glibc is now installed. How on earth do I teach it to do better than this! (growl)
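A hedged sketch (not from the question): RPM's Requires(pre) dependency qualifier asks for a capability to be present before the package's own %pre scriptlet runs, which also nudges yum's transaction ordering. Whether that is enough to pull glibc in ahead of aksusbd depends on the resolver, so treat this as an experiment rather than a guaranteed fix.

  # MyApp.spec -- ordering hint, not a hard guarantee
  Requires(pre): glibc(x86-32)
  Requires:      aksusbd >= 1.14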
redhat, rpm, rpm-spec
3
2,668
4
https://stackoverflow.com/questions/16901118/how-do-i-enforce-rpm-requires-order
14,966,135
How to install liquibase on redhat linux
Thought this might help others. If you are running a headless VM it might not be immediately evident how to install liquibase. I was using a redhat linux box and wondering which command to try to install liquibase.
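A hedged sketch of one headless-friendly approach (the URL and version are illustrative placeholders, not from the question); Liquibase is a Java tool, so a JRE must already be on the PATH:

  # Download a release tarball and run it in place.
  curl -L -o liquibase.tar.gz \
    https://github.com/liquibase/liquibase/releases/download/vX.Y.Z/liquibase-X.Y.Z.tar.gz
  mkdir -p /opt/liquibase
  tar -xzf liquibase.tar.gz -C /opt/liquibase
  /opt/liquibase/liquibase --version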
redhat, liquibase, headless
3
7,131
1
https://stackoverflow.com/questions/14966135/how-to-install-liquibase-on-redhat-linux
13,949,327
Error configuring Network Audio System [NAS] in RHEL 6 x64
I tried to set up NAS (Network Audio System) in RHEL 6 by two methods. First, by RPM install:

  [root@localhost ~]# rpm -Uvh nas-1.9.2-1.el6.x86_64.rpm nas-libs-1.9.2-1.el6.x86_64.rpm

It gets installed, but I cannot find the service in the /etc/init.d/ directory; only the /etc/nas/nasd.conf file gets created. And if I run the command:

  [root@localhost ~]# nasd
  Network Audio System Release 1.9.2
  Network Audio System Release 1.9.2
  Init: Output open(/dev/dsp) failed: No such file or directory
  Fatal server error: could not create audio connection block info

Secondly, by configuring the latest tarball nas-1.9.3.src.tar.gz provided by the NAS site, but the problem is the same. Please help me to install this properly, as I want to enable audio for Qt-based applications, and Qt uses NAS for its audio functionality.
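A hedged diagnostic sketch (not from the question): /dev/dsp is the legacy OSS audio device, which on RHEL 6 typically only exists once the ALSA OSS-emulation modules are loaded, so checking for them is a reasonable first step.

  # Load OSS emulation on top of ALSA and see whether the device appears.
  modprobe snd-pcm-oss
  ls -l /dev/dsp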
linux, qt, audio, redhat
3
799
2
https://stackoverflow.com/questions/13949327/error-configuring-network-audio-system-nas-in-rhel-6-x64
13,860,540
Kernel module signing for CentOS 6.3
I have exactly the same problem as described in [URL] when compiling the kernel and booting from it. The bug is marked as closed and there's a comment saying no change is required. I don't understand what the solution is: Should I get an extrakeys.pub which is for CentOS? Or should I replace "CentOS" with "redhat" in:

  gpg --homedir . --export --keyring ./kernel.pub CentOS > extract.pub

Any ideas?
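A hedged sketch (not from the question): before guessing, it may help to list which key names the keyring actually contains, since the export argument has to match one of them.

  # Standard gpg flags; shows the user IDs present in kernel.pub.
  gpg --homedir . --no-default-keyring --keyring ./kernel.pub --list-keys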
linux-kernel, centos, redhat, rpm, rpmbuild
3
1,245
1
https://stackoverflow.com/questions/13860540/kernel-module-signing-for-centos-6-3
12,979,714
Build RPM package to selectively deploy sources
I'm reasonably new to RPM, but I've been playing with it and need to do something a bit left-field. I must obey rules which say that I must use the same RPM package in each environment, and I cannot use %pre and %post to modify files. The problem is that my install is not doing a make; in fact I'm copying a file structure of text files and XML files. However, these files contain environment-specific code, but sadly I must follow the guidelines. The 'solution' I have considered is to use several source files: source0 being dev, source1 being test, source2 being production, while source3 is disaster recovery. Each source extracts to a folder with the environment name (this is desired!):

  $deploy_folder/dev_code
  $deploy_folder/test_code
  $deploy_folder/prd_code

I will be given an environment variable which tells me the environment. Thus far I have deployed all sources and then removed the unnecessary folders using a condition:

  if [[ $env_variable == "PRD" ]] ; then
    rm -rf $buildroot/install/$deploy_folder/dev_code
    rm -rf $buildroot/install/$deploy_folder/test_code
  fi

(I've simplified the variables above somewhat.) This appears to work at build time; however, when I perform an rpm -i it does not deploy all code and then remove the other folders at the final destination. Clearly, I'm probably not using RPM in the correct spirit, so am I doing this the correct way? Is there a better way, given my files are essentially all environment-specific? How do I access what code is deployed to the final destination? Thanks
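A hedged sketch of an alternative (not from the question, and it bends "same rpm everywhere" into "same spec everywhere"): spec conditionals like the shell snippet above run on the build machine only, so per-environment subpackages built from the one spec are a common way to get environment-specific payloads without %pre/%post tricks.

  # One spec, several binary RPMs -- illustrative subpackage skeleton;
  # %{deploy_folder} stands in for the question's $deploy_folder.
  %package dev
  Summary: Dev environment code

  %description dev
  Environment-specific files for dev.

  %files dev
  %{deploy_folder}/dev_code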
packaging, redhat, rpm, rhel
3
2,160
1
https://stackoverflow.com/questions/12979714/build-rpm-package-to-selectively-deploy-sources
11,182,095
antiword doesn't work on hosted server
I guess it could be a stupid question, but it has taken me hours. On a Red Hat Linux server, I wrote a webpage which tries to call a program, "antiword", which is on the same server. antiword is located at /home/myusername/bin, and needs the directory /home/myusername/.antiword to run. When I run my webpage in the browser, it searches for /.antiword instead of /home/myusername/.antiword, so it says the directory is not found. How do I fix the problem? One thing to clarify: antiword is the program name; no matter where you call it from, it will search for a directory ".antiword" at the same location, "/home/myusername/.antiword". By the way, I don't have the root account, so "ln" wouldn't work.
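A hedged sketch (not from the question): antiword resolves ~/.antiword from the HOME environment variable, which the web-server process usually does not have set to your account, hence the /.antiword lookup. Setting HOME for the call (some builds also honor an ANTIWORDHOME variable) may be enough:

  # document.doc is an illustrative input file name.
  HOME=/home/myusername /home/myusername/bin/antiword document.doc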
linux, directory, redhat
3
2,484
1
https://stackoverflow.com/questions/11182095/antiword-doesnt-work-on-hosted-server
2,705,424
How do I build git on Red Hat Enterprise Linux 3?
When you try to build git v1.7.0.6 on Red Hat Enterprise Linux 3, you get an error:

  In file included from /usr/include/openssl/ssl.h:179,
                   from git-compat-util.h:139,
                   from builtin.h:4,
                   from fast-import.c:147:
  /usr/include/openssl/kssl.h:72:18: krb5.h: No such file or directory

I have the answer to this, and I'm just posting it here for posterity.
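A hedged guess at the posted self-answer (the answer itself is not in this record): the missing krb5.h normally lives in the Kerberos development headers, so installing them is the usual fix.

  # RHEL 3 shipped up2date rather than yum; the package name is the standard one.
  up2date -i krb5-devel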
linux, git, redhat
3
718
1
https://stackoverflow.com/questions/2705424/how-do-i-build-git-on-red-hat-enterprise-linux-3
74,051,390
Blob Storage Permanent mounting in redhat linux
I have a Linux server where I had mounted blob storage, but it is a temporary mount: every time I restart the machine I have to run the below command manually.

  sudo blobfuse /sfp/publicstorage134/blobstorage123 --tmp-path=/mnt/rec/mountpath --config-file=/user1/connection_sf.cfg -o attr_timeout=180 -o entry_timeout=120 -o negative_timeout=180 -o allow_other

How can I make this storage mount permanent instead of mounting with this command after every restart? Is it possible to put this in /etc/fstab?
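A hedged sketch of an /etc/fstab line (not from the question): blobfuse documents an fstab form in which the blobfuse-specific flags ride along in the options field; the paths below are the ones from the command above.

  # device   mountpoint   type   options   dump pass
  blobfuse /sfp/publicstorage134/blobstorage123 fuse delay_connect,_netdev,allow_other,attr_timeout=180,entry_timeout=120,negative_timeout=180,--tmp-path=/mnt/rec/mountpath,--config-file=/user1/connection_sf.cfg 0 0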
linux, azure-blob-storage, redhat, mount, fuse
3
1,412
1
https://stackoverflow.com/questions/74051390/blob-storage-permament-mounting-in-redhat-linux
61,828,033
Where can I find the missing dependencies for git-svn for redhat 7
I am trying to install git-svn on Red Hat 7.2, but the yum install fails with some missing dependencies:

  Error: Package: git222-perl-Git-SVN-2.22.3-1.el7.ius.noarch (ius)
         Requires: perl(SVN::Ra)
  Error: Package: git222-perl-Git-SVN-2.22.3-1.el7.ius.noarch (ius)
         Requires: perl(SVN::Delta)
  Error: Package: git222-perl-Git-SVN-2.22.3-1.el7.ius.noarch (ius)
         Requires: perl(SVN::Core)

  yum provides SVN:Core
  Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
  epel/x86_64/filelists_db
  rhel-7-workstation-rpms/7Workstation/x86_64/filelists_db
  slack/filelists
  No matches found

I found a few pages on installing git-svn that recommend creating a repository, but I am getting 404s on the URLs when I try to use them. I had a similar problem with git 2.x that I got around by building from a tarball; I have not been able to find a git-svn tarball. Can someone provide a sample repository that will resolve those dependencies?
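A hedged sketch (not from the question): the capability names contain '::' and parentheses, so the provides query needs the full, quoted form; on EL7 those Perl bindings usually come from the subversion-perl package.

  # Quote the capability exactly as the error message spells it.
  yum provides 'perl(SVN::Core)'
  yum install subversion-perl   # assumption: supplies the perl(SVN::*) bindings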
git, svn, redhat, yum
3
1,401
1
https://stackoverflow.com/questions/61828033/where-can-i-find-the-missing-dependencies-for-git-svn-for-redhat-7
58,712,734
How to fix: "Testing pyext configuration : Could not build python extensions"
I am trying to install wxPython but the wheel build fails. The error message is not helpful in indicating what to do or where to look to fix this. Can anyone please help me understand how to build this wheel correctly?

  Machine: Linux on Power (this is not x86)
  OS: RHEL Server, 7.5 (Maipo)
  Python version: Python 3.6.4
  pip3 version: pip 19.3.1

I noticed this Stack Overflow post, which is also not helpful because my Linux release is not in the list provided. Following the links above, I tried the wxPython download page and the "install with pip" instructions, but step 5 basically tells you "look at the log and figure it out"... not helpful. I tried to manually hack the wxPython package using my very limited competence and removed some dependency... still nothing.

  <...>
  Finished command: build_wx (1m56.907s)
  Running command: build_py
  Checking for /tmp/pip-req-build-dgnp13sp/bin/waf-2.0.8...
  "/afs/apd.pok.ibm.com/u/mfacchin/wxenvlop/bin/python3" /tmp/pip-req-build-dgnp13sp/bin/waf-2.0.8 --wx_config=/tmp/pip-req-build-dgnp13sp/build/wxbld/gtk3/wx-config --gtk3 --python="/afs/apd.pok.ibm.com/u/mfacchin/wxenvlop/bin/python3" --out=build/waf/3.6/gtk3 configure build
  Setting top to                       : /tmp/pip-req-build-dgnp13sp
  Setting out to                       : /tmp/pip-req-build-dgnp13sp/build/waf/3.6/gtk3
  Checking for 'gcc' (C compiler)      : /bin/gcc
  Checking for 'g++' (C++ compiler)    : /bin/g++
  Checking for program 'python'        : /afs/apd.pok.ibm.com/u/mfacchin/wxenvlop/bin/python3
  Checking for python version >= 2.7.0 : 3.6.4
  python-config                        : /opt/xsite/cte/tools/python/3.6/bin/python3.6-config
  Asking python-config for pyext '--cflags --libs --ldflags' flags : yes
  Testing pyext configuration          : Could not build python extensions
  The configuration failed (complete log in /tmp/pip-req-build-dgnp13sp/build/waf/3.6/gtk3/config.log)
  Command '"/afs/apd.pok.ibm.com/u/mfacchin/wxenvlop/bin/python3" /tmp/pip-req-build-dgnp13sp/bin/waf-2.0.8 --wx_config=/tmp/pip-req-build-dgnp13sp/build/wxbld/gtk3/wx-config --gtk3 --python="/afs/apd.pok.ibm.com/u/mfacchin/wxenvlop/bin/python3" --out=build/waf/3.6/gtk3 configure build ' failed with exit code 1.
  Finished command: build_py (0m6.991s)
  Finished command: build (2m3.899s)
  Command '"/afs/apd.pok.ibm.com/u/mfacchin/wxenvlop/bin/python3" -u build.py build' failed with exit code 1.
  Building wheel for wxPython (setup.py): finished with status 'error'
  ERROR: Failed building wheel for wxPython
  <...>

---- Update 12/2 (after Robin Dunn's feedback)

Thank you Robin for the directives. Below is the last portion of the config.log from a different run, this time using the build command. The error message is also slightly different (shown below, following the config.log), because I had previously used the explicit wheel-build command pip wheel -v wxPython-4.0.7.post1.tar.gz 2>&1 | tee build.log. Does the log below confirm your theory regarding the Python --enable-shared configure flag?

  Testing pyext configuration ==>

  #include <Python.h>
  #ifdef __cplusplus
  extern "C" {
  #endif
  void Py_Initialize(void);
  void Py_Finalize(void);
  #ifdef __cplusplus
  }
  #endif
  int main(int argc, char **argv)
  {
     (void)argc; (void)argv;
     Py_Initialize();
     Py_Finalize();
     return 0;
  }

  <==
  [1/2] Compiling build/waf/3.6/gtk3/.conf_check_cfc3ecfbbf37890054f6518ca7961071/test.cpp
  ['/bin/g++', '-fPIC', '-g', '-fwrapv', '-O3', '-I../../../../../../../../../../../../../../cte/tools/python/vol2/.3.6.4-linux-ppc64le/include/python3.6m', '-I/opt/xsite/cte/tools/python/common2018/include', '-DPYTHONDIR="/usr/local/lib/python3.6/site-packages"', '-DPYTHONARCHDIR="/usr/local/lib/python3.6/site-packages"', '-DNDEBUG', '../test.cpp', '-c', '-o/afs/apd.pok.ibm.com/func/vlsi/eclipz/sf5/usr/mfacchin/c01/python_venv/wxenv191202/wxPython-4.0.7.post1_mf1/build/waf/3.6/gtk3/.conf_check_cfc3ecfbbf37890054f6518ca7961071/testbuild/test.cpp.1.o']
  [2/2] Linking build/waf/3.6/gtk3/.conf_check_cfc3ecfbbf37890054f6518ca7961071/testbuild/testprog.cpython-36m-powerpc64le-linux-gnu.so
  ['/bin/g++', '-shared', '-Xlinker', '-export-dynamic', 'test.cpp.1.o', '-o', '/afs/apd.pok.ibm.com/func/vlsi/eclipz/sf5/usr/mfacchin/c01/python_venv/wxenv191202/wxPython-4.0.7.post1_mf1/build/waf/3.6/gtk3/.conf_check_cfc3ecfbbf37890054f6518ca7961071/testbuild/testprog.cpython-36m-powerpc64le-linux-gnu.so', '-Wl,-Bstatic', '-Wl,-Bdynamic', '-L/afs/apd.pok.ibm.com/func/vlsi/cte/tools/python/vol2/.3.6.4-linux-ppc64le/lib/python3.6/config-3.6m-powerpc64le-linux-gnu', '-L/afs/apd.pok.ibm.com/func/vlsi/cte/tools/python/vol2/.3.6.4-linux-ppc64le/lib', '-lpython3.6m', '-lpthread', '-ldl', '-lutil', '-lm', '-lpython3.6m', '-lpthread', '-ldl', '-lutil', '-lm']
  err: /bin/ld: /afs/apd.pok.ibm.com/func/vlsi/cte/tools/python/vol2/.3.6.4-linux-ppc64le/lib/python3.6/config-3.6m-powerpc64le-linux-gnu/libpython3.6m.a(Python-ast.o): In function `obj2ast_keyword':
  /data/ubrandt/Python-3.6.4/Python/Python-ast.c:7767:(.text.unlikely+0x608): call to `_Py_keyword' lacks nop, can't restore toc; recompile with -fPIC
  /bin/ld: /afs/apd.pok.ibm.com/func/vlsi/cte/tools/python/vol2/.3.6.4-linux-ppc64le/lib/python3.6/config-3.6m-powerpc64le-linux-gnu/libpython3.6m.a(Python-ast.o): In function `obj2ast_comprehension':
  /data/ubrandt/Python-3.6.4/Python/Python-ast.c:7419:(.text.unlikely+0x9f4): call to `_Py_comprehension' lacks nop, can't restore toc; recompile with -fPIC
  /bin/ld: /afs/apd.pok.ibm.com/func/vlsi/cte/tools/python/vol2/.3.6.4-linux-ppc64le/lib/python3.6/config-3.6m-powerpc64le-linux-gnu/libpython3.6m.a(Python-ast.o): In function `obj2ast_alias':
  /data/ubrandt/Python-3.6.4/Python/Python-ast.c:7802:(.text.unlikely+0xbec): call to `_Py_alias' lacks nop, can't restore toc; recompile with -fPIC
  /bin/ld: /afs/apd.pok.ibm.com/func/vlsi/cte/tools/python/vol2/.3.6.4-linux-ppc64le/lib/python3.6/config-3.6m-powerpc64le-linux-gnu/libpython3.6m.a(Python-ast.o): In function `obj2ast_withitem':
  /data/ubrandt/Python-3.6.4/Python/Python-ast.c:7837:(.text.unlikely+0xdd4): call to `_Py_withitem' lacks nop, can't restore toc; recompile with -fPIC
  /bin/ld: final link failed: Bad value
  collect2: error: ld returned 1 exit status
  from /afs/apd.pok.ibm.com/func/vlsi/eclipz/sf5/usr/mfacchin/c01/python_venv/wxenv191202/wxPython-4.0.7.post1_mf1: Test does not build:
  Traceback (most recent call last):
    File "/afs/apd.pok.ibm.com/func/vlsi/eclipz/sf5/usr/mfacchin/c01/python_venv/wxenv191202/wxPython-4.0.7.post1_mf1/bin/.waf3-2.0.8-206f2b7a89029e71942a2beb9e1bbbbd/waflib/Configure.py", line 324, in run_build
      bld.compile()
    File "/afs/apd.pok.ibm.com/func/vlsi/eclipz/sf5/usr/mfacchin/c01/python_venv/wxenv191202/wxPython-4.0.7.post1_mf1/bin/.waf3-2.0.8-206f2b7a89029e71942a2beb9e1bbbbd/waflib/Build.py", line 176, in compile
      raise Errors.BuildError(self.producer.error)
  waflib.Errors.BuildError: Build failed
   -> task in 'testprog' failed with exit status 1 (run with -v to display more information)
  Could not build python extensions
  from /.....: The configuration failed

And this is the error message that I get on this new run, slightly different:

  msgfmt --verbose -c -o zh_TW.mo zh_TW.po
  1710 translated messages, 82 fuzzy translations, 61 untranslated messages.
  make[1]: Leaving directory `/afs/apd.pok.ibm.com/func/vlsi/eclipz/sf5/usr/mfacchin/c01/python_venv/wxenv191202/wxPython-4.0.7.post1_mf1/ext/wxWidgets/locale'
  Setting top to                       : /afs/apd.pok.ibm.com/func/vlsi/eclipz/sf5/usr/mfacchin/c01/python_venv/wxenv191202/wxPython-4.0.7.post1_mf1
  Setting out to                       : /afs/apd.pok.ibm.com/func/vlsi/eclipz/sf5/usr/mfacchin/c01/python_venv/wxenv191202/wxPython-4.0.7.post1_mf1/build/waf/3.6/gtk3
  Checking for 'gcc' (C compiler)      : /bin/gcc
  Checking for 'g++' (C++ compiler)    : /bin/g++
  Checking for program 'python'        : /afs/apd.pok.ibm.com/func/vlsi/eclipz/sf5/usr/mfacchin/c01/python_venv/wxenv191202/bin/python3
  Checking for python version >= 2.7.0 : 3.6.4
  python-config                        : /opt/xsite/cte/tools/python/3.6/bin/python3.6-config
  Asking python-config for pyext '--cflags --libs --ldflags' flags : yes
  Testing pyext configuration          : Could not build python extensions
  The configuration failed (complete log in /afs/apd.pok.ibm.com/func/vlsi/eclipz/sf5/usr/mfacchin/c01/python_venv/wxenv191202/wxPython-4.0.7.post1_mf1/build/waf/3.6/gtk3/config.log)
  Will build using: "/afs/apd.pok.ibm.com/func/vlsi/eclipz/sf5/usr/mfacchin/c01/python_venv/wxenv191202/bin/python3"
  3.6.4 (default, Feb 12 2018, 16:08:32)
  [GCC 4.8.5 20150623 (Red Hat 4.8.5-16)]
  Python's architecture is 64bit
  cfg.VERSION: 4.0.7.post1

  Running command: build
  Running command: build_wx
  wxWidgets build options: ['--wxpython', '--unicode', '--gtk3']
  Configure options: ['--enable-unicode', '--with-gtk=3', '--enable-sound', '--enable-graphics_ctx', '--enable-display', '--enable-geometry', '--enable-debug_flag', '--enable-optimise', '--disable-debugreport', '--enable-uiactionsim', '--enable-autoidman', '--with-sdl']
  /afs/apd.pok.ibm.com/func/vlsi/eclipz/sf5/usr/mfacchin/c01/python_venv/wxenv191202/wxPython-4.0.7.post1_mf1/ext/wxWidgets/configure --enable-unicode --with-gtk=3 --enable-sound --enable-graphics_ctx --enable-display --enable-geometry --enable-debug_flag --enable-optimise --disable-debugreport --enable-uiactionsim --enable-autoidman --with-sdl
  make --jobs=128
  Building message catalogs in /afs/apd.pok.ibm.com/func/vlsi/eclipz/sf5/usr/mfacchin/c01/python_venv/wxenv191202/wxPython-4.0.7.post1_mf1/ext/wxWidgets/locale
  make allmo
  Finished command: build_wx (12m36.623s)
  Running command: build_py
  Checking for /afs/apd.pok.ibm.com/func/vlsi/eclipz/sf5/usr/mfacchin/c01/python_venv/wxenv191202/wxPython-4.0.7.post1_mf1/bin/waf-2.0.8...
  "/afs/apd.pok.ibm.com/func/vlsi/eclipz/sf5/usr/mfacchin/c01/python_venv/wxenv191202/bin/python3" /afs/apd.pok.ibm.com/func/vlsi/eclipz/sf5/usr/mfacchin/c01/python_venv/wxenv191202/wxPython-4.0.7.post1_mf1/bin/waf-2.0.8 --wx_config=/afs/apd.pok.ibm.com/func/vlsi/eclipz/sf5/usr/mfacchin/c01/python_venv/wxenv191202/wxPython-4.0.7.post1_mf1/build/wxbld/gtk3/wx-config --gtk3 --python="/afs/apd.pok.ibm.com/func/vlsi/eclipz/sf5/usr/mfacchin/c01/python_venv/wxenv191202/bin/python3" --out=build/waf/3.6/gtk3 configure build
  Command '"/afs/apd.pok.ibm.com/func/vlsi/eclipz/sf5/usr/mfacchin/c01/python_venv/wxenv191202/bin/python3" /afs/apd.pok.ibm.com/func/vlsi/eclipz/sf5/usr/mfacchin/c01/python_venv/wxenv191202/wxPython-4.0.7.post1_mf1/bin/waf-2.0.8 --wx_config=/afs/apd.pok.ibm.com/func/vlsi/eclipz/sf5/usr/mfacchin/c01/python_venv/wxenv191202/wxPython-4.0.7.post1_mf1/build/wxbld/gtk3/wx-config --gtk3 --python="/afs/apd.pok.ibm.com/func/vlsi/eclipz/sf5/usr/mfacchin/c01/python_venv/wxenv191202/bin/python3" --out=build/waf/3.6/gtk3 configure build ' failed with exit code 1.
  Finished command: build_py (2m7.118s)
  Finished command: build (14m43.742s)
python, python-3.x, redhat, wxpython
3
3,129
1
https://stackoverflow.com/questions/58712734/how-to-fix-testing-pyext-configuration-could-not-build-python-extensions
58,487,893
tail -F on a symlink that gets updated
Could someone please help me understand the behavior below? Let's say there are 2 dirs, each one with 1 file inside:

  aaa/file1
  bbb/file2

And then there's a symlink pointing to file1, e.g.:

  current -> aaa/file1

From 2 separate sessions I push some data into the 2 files. In one I push random numbers and in the other some constant text:

  while true; do echo $RANDOM >> aaa/file1 ; sleep 8; done

and

  while true; do echo HELLO >> bbb/file2 ; sleep 8; done

I then run tail -F on the symlink, using ---disable-inotify as a workaround for this bug, and I see the random values as expected:

  tail -F ---disable-inotify current
  53
  27169
  30599
  ...

From another session I then update the symlink and make it point to the other file instead:

  ln -sf bbb/file2 current

As expected, after that the tail output has switched to:

  tail: ‘current’ has been replaced; following end of new file
  HELLO
  HELLO
  HELLO
  ...

But now, after I switch the symlink back to point to the 1st file:

  ln -sf aaa/file1 current

the tail output remains on HELLO, i.e. still following the previous file bbb/file2:

  HELLO
  HELLO
  HELLO
  HELLO

Does anyone know why this happens? Is this still related to the previously mentioned bug, or am I missing something else here? (I'm on RHEL 7.2 // GNU coreutils 8.22)
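A hedged diagnostic sketch (not from the question): you can see which file the running tail actually holds open by inspecting its file descriptors, which distinguishes "tail kept the old fd" from "tail reopened the wrong path".

  # pgrep -f matches the full command line; adjust if several tails are running.
  ls -l /proc/"$(pgrep -f 'tail -F')"/fd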
shell, redhat, tail
3
984
1
https://stackoverflow.com/questions/58487893/tail-f-on-a-symlink-that-gets-updated
55,487,523
How to properly wait for the yum lock to be released?
I'm trying to write a cronjob that updates packages from a given yum repository on a regular basis by running the following command:

  yum -q -e 0 -d 0 -y update --disablerepo='*' --enablerepo='my-yum-repo'

In order to prevent "yum lock warnings" like the following...

  Existing lock /var/run/yum.pid: another copy is running as pid 4902.
  Another app is currently holding the yum lock; waiting for it to exit...
    The other application is: yum
      Memory :  42 M RSS (325 MB VSZ)
      Started: Wed Apr 3 01:10:07 2019 - 00:01 ago
      State  : Running, pid: 4902
  ...

...I tried to enclose my code in a while loop that checks for the existence of the yum.pid file, as follows:

  */5 * * * * root while [ -f /var/run/yum.pid ]; do sleep 1; done && yum -q -e 0 -d 0 -y update --disablerepo='*' --enablerepo='my-yum-repo'

Unfortunately, from time to time, the "yum lock warnings" still appear. I also tried it this way, and the warnings still appear from time to time:

  while [ pgrep 'yum|rhn_check' ]; do sleep 1; done && yum -q -e 0 -d 0 -y update --disablerepo='*' --enablerepo='my-yum-repo'

Do you have an idea how I could prevent them from occurring? I would like to avoid redirecting stdout to /dev/null, because I need to be informed if "real" problems occur during the package update. Thanks in advance for your help!
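A hedged sketch (not from the question): polling the pid file is inherently racy -- another yum can take the lock between the check and your yum call -- so an alternative is to retry yum itself and only surface its output if every attempt fails.

  #!/bin/sh
  # Illustrative retry wrapper; attempt count and sleep interval are arbitrary.
  for attempt in 1 2 3 4 5; do
      out=$(yum -q -e 0 -d 0 -y update --disablerepo='*' --enablerepo='my-yum-repo' 2>&1) && exit 0
      sleep 30
  done
  printf '%s\n' "$out" >&2   # report only the final failure
  exit 1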
linux, cron, redhat, yum
3
4,719
1
https://stackoverflow.com/questions/55487523/how-to-properly-wait-for-the-yum-lock-to-be-released
54,354,826
Problem sending POST request with big body size to Plumber API endpoint in R on Redhat 7.5
I am trying to send a table of about 140 rows and 5 columns as a JSON object (around 20 KB in size) from VBA, using MSXML2.ServerXMLHTTP, in the body of a POST request to an endpoint made available from R using the plumber API package. The endpoint function running in R on the server is throwing the following error:

  simpleError in fromJSON(requestList): argument "requestList" is missing, with no default

requestList is the parameter passed to the endpoint function. It looks like it gets lost in the web call. If I reduce the table size to 30 rows instead of 140 rows, requestList is found and the request is served successfully. My platform is as follows:

  1. Endpoints are written in R and exposed using the plumber API.
  2. Endpoints are running on an AWS instance with Redhat 7.5.
  3. The timeout for the request is set to 100 minutes on the VBA (client) side.
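A hedged sketch (not from the question): one way to take plumber's argument matching out of the picture is to read the raw request body yourself via req$postBody, which plumber exposes on every request object, and parse it explicitly.

  library(plumber)

  #* Parse the posted JSON directly instead of relying on a requestList argument.
  #* @post /process
  function(req) {
    requestList <- jsonlite::fromJSON(req$postBody)
    list(rows = nrow(requestList))   # illustrative response
  }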
r, vba, post, redhat, plumber
3
972
2
https://stackoverflow.com/questions/54354826/problem-sending-post-request-with-big-body-size-to-plumber-api-endpoint-in-r-on
54,290,513
Unable to re-compile legacy Pro*C software in Redhat Linux with Oracle 12C
We have code in Pro*C, and on a machine with Red Hat Enterprise Linux Server release 7.5 (Maipo) and Oracle 12c we have run this without errors:

  proc SQLCHECK=SEMANTICS userid=letri/pruebas@desarrollo iname=carga_hr_fr include=. include=/usr/include include=/oracle/app/oracle/12.2.0/precomp/public include=/oracle/app/oracle/12.2.0/xdk/include include=/oracle/app/oracle/12.2.0/lib include=/oracle/app/oracle/12.2.0/lib include=/usr/lib/gcc/x86_64-redhat-linux/4.8.2/include/

  cc -m64 -I. -I/usr/include -I/oracle/app/oracle/12.2.0/precomp/public -I/oracle/app/oracle/12.2.0/xdk/include -I/oracle/app/oracle/12.2.0/lib -I/oracle/app/oracle/12.2.0/lib -I/usr/lib/gcc/x86_64-redhat-linux/4.8.2/include/ -c carga_hr_fr.c

But generating the executable with this command:

  cc -o carga_hr_fr carga_hr_fr.o /oracle/app/oracle/12.2.0/lib/libxml12.a -L/oracle/app/oracle/12.2.0/lib -L/oracle/app/oracle/12.2.0/xdk/include -L/usr/lib/gcc/x86_64-redhat-linux/4.8.5/include/ -lm -lclntsh

generates this error:

  /usr/bin/ld: /oracle/app/oracle/12.2.0/lib/libxml12.a(lpxsut.o): undefined reference to symbol 'lxgt2u'
  /oracle/app/oracle/12.2.0/lib/libclntshcore.so.12.1: error adding symbols: DSO missing from command line
  collect2: error: ld returned 1 exit status

Any ideas about how to solve it? This is the header of the code:

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <sys/types.h>
  #include <sqlda.h>
  #include <sqlcpr.h>

  #ifndef ORAXML_ORACLE
  #include <oraxml.h>
  #endif

  #define DEFAULT_KEYWORD "death"

  /*********** Conexion a Oracle *************/
  #include "lib/liboracle.h"
  #define USERID "dummy/something@development"
  EXEC SQL INCLUDE sqlca;
  /*****************************************/
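A hedged sketch (not from the question): "DSO missing from command line" from newer binutils generally means a library that actually satisfies the symbol (here libclntshcore, per the second message) must be named explicitly rather than pulled in transitively, so adding it to the link line is the usual first thing to try.

  cc -o carga_hr_fr carga_hr_fr.o /oracle/app/oracle/12.2.0/lib/libxml12.a \
     -L/oracle/app/oracle/12.2.0/lib -lm -lclntsh -lclntshcore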
c, oracle-database, redhat, oracle12c, proc
3
742
1
https://stackoverflow.com/questions/54290513/unable-to-re-compile-legacy-proc-software-in-redhat-linux-with-oracle-12c
53,606,555
How to push claims to keycloak?
I want to send a few params from the Spring Boot application to the Keycloak console for evaluating the policies. I want to send them from application.properties. If that is possible, how do I access them in Keycloak policies for evaluation? Thank you.
java, spring-boot, single-sign-on, redhat, keycloak
3
242
1
https://stackoverflow.com/questions/53606555/how-to-push-claims-to-keycloak
49,562,295
Installing RabbitMQ on Red Hat - wrong Erlang version
I'm trying to install RabbitMQ on an evaluation VM of Red Hat (Enterprise Linux 7 64-bit workstation version) following the instructions at [URL]. I've gone and installed the zero-dependency version of Erlang from the source at [URL]. That installed without error, and I added its /bin directory to my path. When I then try to install RabbitMQ using yum install rabbitmq-server-3.7.4-1.el7.noarch.rpm, it fails and tells me it needs Erlang version >= 19.3, even though I installed the latest version of Erlang at the time (OTP v20.3) from the source. Below is the full output from when I try to install RabbitMQ:

  $ sudo yum install rabbitmq-server-3.7.4-1.el7.noarch.rpm
  Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
  Examining rabbitmq-server-3.7.4-1.el7.noarch.rpm: rabbitmq-server-3.7.4-1.el7.noarch
  Marking rabbitmq-server-3.7.4-1.el7.noarch.rpm to be installed
  Resolving Dependencies
  --> Running transaction check
  ---> Package rabbitmq-server.noarch 0:3.7.4-1.el7 will be installed
  --> Processing Dependency: erlang >= 19.3 for package: rabbitmq-server-3.7.4-1.el7.noarch
  --> Processing Dependency: socat for package: rabbitmq-server-3.7.4-1.el7.noarch
  --> Running transaction check
  ---> Package rabbitmq-server.noarch 0:3.7.4-1.el7 will be installed
  --> Processing Dependency: erlang >= 19.3 for package: rabbitmq-server-3.7.4-1.el7.noarch
  ---> Package socat.x86_64 0:1.7.3.2-2.el7 will be installed
  --> Finished Dependency Resolution
  Error: Package: rabbitmq-server-3.7.4-1.el7.noarch (/rabbitmq-server-3.7.4-1.el7.noarch)
         Requires: erlang >= 19.3
  **********************************************************************
  yum can be configured to try to resolve such errors by temporarily enabling
  disabled repos and searching for missing dependencies.
  To enable this functionality please set 'notify_only=0' in
  /etc/yum/pluginconf.d/search-disabled-repos.conf
  **********************************************************************
  Error: Package: rabbitmq-server-3.7.4-1.el7.noarch (/rabbitmq-server-3.7.4-1.el7.noarch)
         Requires: erlang >= 19.3
   You could try using --skip-broken to work around the problem
   You could try running: rpm -Va --nofiles --nodigest
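A hedged sketch (not from the question): yum resolves "erlang >= 19.3" against the RPM database, not against whatever happens to be on $PATH, so a source-built Erlang is invisible to it. Checking the database confirms this; installing Erlang as an RPM instead (e.g. the zero-dependency Erlang RPM the RabbitMQ team publishes) is the usual way out.

  rpm -q erlang    # likely prints "package erlang is not installed"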
erlang, rabbitmq, redhat
3
2,789
1
https://stackoverflow.com/questions/49562295/installing-rabbitmq-on-red-hat-wrong-erlang-version
49,188,003
Configure LVM resource on Redhat 7.4 cluster using pacemaker
I am configuring a Red Hat cluster with pacemaker and I want to add an LVM resource. I have installed the following packages:

  OS: Red Hat 7.4
  Packages installed: lvm2-cluster, pacemaker, corosync, pcs, fence-agents-all

but my LVM resource is in a failed state with the following error:

  [root@node1 ~]# pcs status
  Cluster name: jcluster
  WARNING: no stonith devices and stonith-enabled is not false
  Stack: corosync
  Current DC: node2 (version 1.1.16-12.el7_4.8-94ff4df) - partition with quorum
  Last updated: Sat Mar 10 11:54:41 2018
  Last change: Sat Mar 10 11:17:13 2018 by hacluster via cibadmin on node1

  2 nodes configured
  3 resources configured (2 DISABLED)

  Online: [ node1 node2 ]

  Full list of resources:
   Clone Set: juris-clvmd-clone [juris-clvmd]
       Stopped (disabled): [ node1 node2 ]
   juris-lvm (ocf::heartbeat:LVM): FAILED node1

  Failed Actions:
  * juris-lvm_monitor_0 on node1 'unknown error' (1): call=15, status=complete, exitreason='WARNING: jurisvg is active without the cluster tag, "pacemaker"', last-rc-change='Fri Mar 9 20:38:50 2018', queued=0ms, exec=255ms
  * juris-lvm_monitor_10000 on node1 'unknown error' (1): call=16, status=complete, exitreason='WARNING: jurisvg is active without the cluster tag, "pacemaker"', last-rc-change='Sat Mar 10 10:24:55 2018', queued=0ms, exec=0ms

  Daemon Status:
    corosync: active/enabled
    pacemaker: active/enabled
    pcsd: active/enabled

I'm using iSCSI to share the disk between both nodes. After presenting the shared disk to the nodes, I ran pvcreate, vgcreate, and lvcreate for the newly presented disk. After that, I changed the new VG to the clustered attribute using the following command:

  [root@node1 ~]# vgchange -cy jurisvg
  /dev/jurisvg/ha_lv: read failed after 0 of 4096 at 0: Input/output error
  /dev/jurisvg/ha_lv: read failed after 0 of 4096 at 53687025664: Input/output error
  /dev/jurisvg/ha_lv: read failed after 0 of 4096 at 53687083008: Input/output error
  /dev/jurisvg/ha_lv: read failed after 0 of 4096 at 4096: Input/output error
  LVM cluster daemon (clvmd) is not running. Make volume group "jurisvg" clustered anyway? [y/n]: y
  Volume group "jurisvg" successfully changed

For configuring the LVM resource, do we need the clvmd service running? For pacemaker I can find /usr/sbin/clvmd, but I couldn't start it:

  [root@node1 ~]# /usr/sbin/clvmd
  clvmd could not connect to cluster manager
  Consult syslog for more information

Does anyone know why my LVM resource fails with this error?
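A hedged sketch (not from the question): the monitor's exitreason says the VG is active outside pacemaker's control; one documented pattern for the ocf:heartbeat:LVM agent is to deactivate the VG and let the cluster activate it exclusively (the parameter name below is the agent's own).

  vgchange -an jurisvg                            # deactivate it outside the cluster first
  pcs resource update juris-lvm exclusive=true    # then let pacemaker own activation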
cluster-computing, redhat, lvm, pacemaker
3
9,664
1
https://stackoverflow.com/questions/49188003/configure-lvm-resource-on-redhat-7-4-cluster-using-pacemaker
47,353,443
RPM spec variable for (sub)package name?
RPM spec files have many special variables available within them. Do any of these variables expose the name of the current package being processed? For single-package RPMs, the answer would be obvious, but a single spec can also produce multiple RPMs. Is there a variable, such as $RPM_PACKAGE_NAME, that rpmbuild automatically updates to align with the current %files or %pre or %post section?
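For context: %{name} expands to the main package's Name: everywhere in the spec, and subpackage names are assembled as %{name}-suffix rather than exposed through an automatic per-subpackage variable; during build-time scriptlets rpmbuild also exports RPM_PACKAGE_NAME, but again with the main name. A sketch for checking what a spec will actually produce (the spec filename is a placeholder):

```bash
# List every binary (sub)package a spec file will build, without building it:
rpmspec -q --queryformat '%{NAME}\n' mypkg.spec

# During %build/%install, the environment carries the *main* package name:
#   echo "$RPM_PACKAGE_NAME"
```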
centos, redhat, rpm, rpmbuild
3
2,172
2
https://stackoverflow.com/questions/47353443/rpm-spec-variable-for-subpackage-name
46,430,352
Displaying a User Email using Keycloak Admin Client not working (UnrecognizedPropertyException)
I'm trying to display a userEmail() or to display all the information about a user in our realm in general. I'm trying to invoke this method :- @Test public void displayUser(){ UsersResource users=kc.realm("SpringBoot").users(); UserResource user=users.get("b3479699-430b-4cd3-be96-d26db584d207"); //Succeeds, we have the user UserRepresentation ur=user.toRepresentation(); System.out.println(ur.getEmail()); } But it fails, giving me this error:-| javax.ws.rs.client.ResponseProcessingException: javax.ws.rs.ProcessingException: com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException: Unrecognized field "notBefore" (class org.keycloak.representations.idm.UserRepresentation), not marked as ignorable (25 known properties: "disableableCredentialTypes", "enabled", "emailVerified", "origin", "self", "applicationRoles", "createdTimestamp", "clientRoles", "groups", "username", "totp", "id", "email", "federationLink", "serviceAccountClientId", "lastName", "clientConsents", "socialLinks", "realmRoles", "attributes", "firstName", "credentials", "requiredActions", "federatedIdentities", "access"]) at [Source: org.apache.http.conn.EofSensorInputStream@4ee203eb; line: 1, column: 248] (through reference chain: org.keycloak.representations.idm.UserRepresentation["notBefore"]) at com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException.from(UnrecognizedPropertyException.java:62) at com.fasterxml.jackson.databind.DeserializationContext.handleUnknownProperty(DeserializationContext.java:834) at com.fasterxml.jackson.databind.deser.std.StdDeserializer.handleUnknownProperty(StdDeserializer.java:1093) at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.handleUnknownProperty(BeanDeserializerBase.java:1489) at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.handleUnknownVanilla(BeanDeserializerBase.java:1467) at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:282) at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:140) at com.fasterxml.jackson.databind.ObjectReader._bind(ObjectReader.java:1583) at com.fasterxml.jackson.databind.ObjectReader.readValue(ObjectReader.java:964) at org.jboss.resteasy.plugins.providers.jackson.ResteasyJackson2Provider.readFrom(ResteasyJackson2Provider.java:127) at org.jboss.resteasy.core.interception.AbstractReaderInterceptorContext.readFrom(AbstractReaderInterceptorContext.java:61) at org.jboss.resteasy.core.interception.AbstractReaderInterceptorContext.proceed(AbstractReaderInterceptorContext.java:53) at org.jboss.resteasy.plugins.interceptors.encoding.GZIPDecodingInterceptor.aroundReadFrom(GZIPDecodingInterceptor.java:59) at org.jboss.resteasy.core.interception.AbstractReaderInterceptorContext.proceed(AbstractReaderInterceptorContext.java:55) at org.jboss.resteasy.client.jaxrs.internal.ClientResponse.readFrom(ClientResponse.java:251) at org.jboss.resteasy.client.jaxrs.internal.ClientResponse.readEntity(ClientResponse.java:181) at org.jboss.resteasy.specimpl.BuiltResponse.readEntity(BuiltResponse.java:213) at org.jboss.resteasy.client.jaxrs.internal.ClientInvocation.extractResult(ClientInvocation.java:105) at org.jboss.resteasy.client.jaxrs.internal.proxy.extractors.BodyEntityExtractor.extractEntity(BodyEntityExtractor.java:60) at org.jboss.resteasy.client.jaxrs.internal.proxy.ClientInvoker.invoke(ClientInvoker.java:104) at org.jboss.resteasy.client.jaxrs.internal.proxy.ClientProxy.invoke(ClientProxy.java:76) at com.sun.proxy.$Proxy31.toRepresentation(Unknown Source) 
at com.example.keycloakaccess2.KeycloakAccess2ApplicationTests.displayUser(KeycloakAccess2ApplicationTests.java:134) Is there a proper way to display information about a user, or even to list all the users we have in our realm? I appreciate your help.
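An UnrecognizedPropertyException like this usually indicates that the keycloak-admin-client jar is older than the Keycloak server, so the server's JSON carries fields (here notBefore) that the client's UserRepresentation doesn't know about; aligning the client dependency version with the server commonly resolves it. As a sanity check you can query the admin REST API directly; the host, credentials, realm, and user ID below are placeholders, and the /auth prefix applies to older Keycloak releases:

```bash
# Fetch an admin token, then read the user straight from the REST API:
TOKEN=$(curl -s \
  -d 'client_id=admin-cli' -d 'grant_type=password' \
  -d 'username=admin' -d 'password=admin' \
  http://localhost:8080/auth/realms/master/protocol/openid-connect/token \
  | sed 's/.*"access_token":"\([^"]*\)".*/\1/')

curl -s -H "Authorization: Bearer $TOKEN" \
  "http://localhost:8080/auth/admin/realms/SpringBoot/users/b3479699-430b-4cd3-be96-d26db584d207"
```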
java, spring-boot, single-sign-on, redhat, keycloak
3
2,020
1
https://stackoverflow.com/questions/46430352/displaying-a-user-email-using-keycloak-admin-client-not-workingunrecognizedprop
37,985,542
How to install R on RedHat
Stuck at the configure step trying to install R from source on Red Hat. Here is the output: checking libcurl version ... 7.49.1 checking curl/curl.h usability... yes checking curl/curl.h presence... yes checking for curl/curl.h... yes checking if libcurl is version 7 and >= 7.28.0... yes checking if libcurl supports https... no configure: error: libcurl >= 7.28.0 library and headers are required with support for https I tried yum install libcurl4-openssl-dev to solve the error, but the system says no such package is available.
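libcurl4-openssl-dev is the Debian/Ubuntu name for this package; on Red Hat systems the libcurl headers ship in libcurl-devel. A sketch, assuming the standard repositories are enabled:

```bash
sudo yum install libcurl-devel   # provides curl/curl.h plus the https-capable libcurl
# then re-run the configure step from the R source tree:
./configure && make
```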
r, redhat
3
1,245
1
https://stackoverflow.com/questions/37985542/how-to-install-r-on-redhat
35,582,138
Compile a static version of pngquant
I'm trying to create a statically linked version of pngquant on Oracle Linux Server release 7.1. I've compiled static versions of zlib and libpng. However, when I configure pngquant, it always reports that it will be linked against the shared version of zlib: $ ./configure --with-libpng=../libpng-1.6.21 --extra-cflags="-I../zlib-1.2.8" --extra-ldflags="../zlib-1.2.8/libz.a" Compiler: gcc Debug: no SSE: yes OpenMP: no libpng: static (1.6.21) zlib: shared (1.2.7) lcms2: no If I run make, the output suggests the options are passed to the compiler correctly, yet the resulting binary still requires libz.so to run. It seems my directives are ignored, or that the installed shared version always takes precedence. Is there any way to force pngquant to be compiled against the static version of zlib?
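One common way to force a static libz even when a shared one is installed is to bracket -lz with -Wl,-Bstatic/-Wl,-Bdynamic on the link line; whether configure passes --extra-ldflags through verbatim depends on the pngquant version, so treat this as a sketch:

```bash
./configure --with-libpng=../libpng-1.6.21 \
    --extra-cflags="-I../zlib-1.2.8" \
    --extra-ldflags="-L../zlib-1.2.8 -Wl,-Bstatic -lz -Wl,-Bdynamic"
make clean && make
ldd pngquant | grep libz    # prints nothing once libz is linked statically
```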
redhat, zlib, libpng, pngquant
3
634
2
https://stackoverflow.com/questions/35582138/compile-a-static-version-of-pngquant
34,089,504
Upload file using PHP on OpenShift
I'm trying to upload an image using the following code: index.php: <form action="upload_photo.php" method="post" enctype="multipart/form-data"> <p> Upload a new photo to the server:<br/><br/><br/> <input type="file" name="myphoto"/><br/><br/> <input type="submit" value="Upload photo"/> </p> </form> upload_photo.php: // This function is included from another .php file function checkUploadedPhoto() { $target_dir = "uploads/"; $target_file = $target_dir . basename($_FILES["myphoto"]["name"]); if(isset($_FILES['myphoto']) AND $_FILES['myphoto']['error'] == 0) { // Check size if($_FILES['myphoto']['size'] <= 1000000) { // Get extension name $fileInfo = pathinfo($_FILES['myphoto']['name']); $upload_extension = $fileInfo['extension']; $allowed_extensions = array('jpg', 'jpeg', 'gif', 'png'); // Check if the file already exists if (file_exists($target_file)) { echo "Sorry, file already exists."; } // Check if the file has a correct, expected extension if(in_array($upload_extension, $allowed_extensions)) { if(move_uploaded_file($_FILES['myphoto']['tmp_name'], $target_file)) { return true; } } else echo "error3"; } else echo "error2"; } else echo "error1"; echo "<pre>". print_r($_FILES) ."</pre>"; echo "Error code: " .$_FILES['myphoto']['error'] ."<br/>"; return false; } if(checkUploadedPhoto()) { header("Location: index.php"); } else { echo "upload error"; } Browser result: Even though the error code is 0, the upload fails. Is it a permission issue? I can't tell where the issue comes from. Other references I checked didn't help either: link1 / link2 / link3 / W3Schools PHP upload. EDIT: This is my app's structure: UPDATE: I added this test to check whether the uploads/ directory is writable, and it turned out to be inaccessible: // This condition is true if (!is_writeable('uploads/' . $_FILES['myphoto']['name'])) { die("Cannot write to destination file"); } UPDATE2: I changed the target directory from "uploads/" to "/uploads/" and it worked on localhost, but not on the hosted server.
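On OpenShift v2 gears the deployed repo directory is effectively read-only at runtime, which would explain an upload that fails with error code 0 on the host while working locally; writable storage is exposed through the OPENSHIFT_DATA_DIR environment variable. A sketch from an SSH session on the gear (paths are illustrative):

```bash
echo "$OPENSHIFT_DATA_DIR"                 # e.g. ~/app-root/data/
mkdir -p "$OPENSHIFT_DATA_DIR/uploads"
# then point $target_dir in upload_photo.php at
# getenv('OPENSHIFT_DATA_DIR') . 'uploads/' instead of the relative "uploads/"
```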
php, file-upload, openshift, redhat
3
862
1
https://stackoverflow.com/questions/34089504/upload-file-using-php-on-openshift
33,868,176
KeepAlived + HAProxy gets connection refused after a while
I have the following scenario: 4 VMs running Red Hat Enterprise Linux 7: 20.1.67.230 server (VIRTUAL IP) (not a host) 20.1.67.219 haproxy1 (LOAD BALANCER) 20.1.67.229 haproxy2 (LOAD BALANCER) 20.1.67.223 server1 (LOAD TO BALANCE) 20.1.67.213 server2 (LOAD TO BALANCE) My keepalived.conf file is: vrrp_script chk_haproxy { script "killall -0 haproxy" # check the haproxy process interval 2 # every 2 seconds weight 2 # add 2 points if OK } vrrp_instance VI_1 { interface enp0s3 # interface to monitor state MASTER# MASTER on haproxy1, BACKUP on haproxy2 virtual_router_id 51 priority 101 # 101 on haproxy1, 100 on haproxy2 unicast_src_ip 20.1.67.229 # This is the IP of the interface keepalived listens on unicast_peer { # This is the IP of the peer instance 20.1.67.219 } virtual_ipaddress { 20.1.67.230 # virtual ip address } track_script { chk_haproxy } } When I execute a request against the VIRTUAL IP, for instance curl server:8888/info, everything is OK, but only for a while; after some requests the command returns: connection refused. So I have to restart the keepalived service manually, like this: systemctl restart keepalived.service The whole system otherwise seems to work well, and VRRP messages between haproxy1 and haproxy2 are OK; it's as if the Virtual IP is not working properly. Can anyone point me in the right direction to diagnose and fix this problem?
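Hedged first steps for a fault like this: confirm which node actually holds the VIP when the refusals start, and rule out the host firewall dropping VRRP, a frequent culprit on RHEL 7. The interface and addresses below match the question:

```bash
ip addr show enp0s3 | grep 20.1.67.230     # does this node still own the VIP?
sudo tcpdump -ni enp0s3 vrrp or arp        # watch failover/ARP chatter live

# Allow VRRP through firewalld if it is being dropped:
sudo firewall-cmd --permanent --add-rich-rule='rule protocol value="vrrp" accept'
sudo firewall-cmd --reload
```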
virtualbox, redhat, haproxy, high-availability, virtual-ip-address
3
2,961
1
https://stackoverflow.com/questions/33868176/keepalived-haproxy-gets-connection-refused-after-a-while
29,399,081
installing LuaJIT on redhat ppc64
I would like to install LuaJIT on my Red Hat system in order to get OSRM working. I tried following the instructions here, in particular this part: cd /tmp wget [URL] tar -zxvf LuaJIT-2.0.2.tar.gz cd LuaJIT-2.0.2 make install PREFIX=/opt/osrm_infrastructure/LuaJIT-2.0.2 However, I get the following error: ==== Building LuaJIT 2.0.2 ==== make -C src lj_arch.h:324:2: error: #error "No support for PowerPC 64 bit mode" #error "No support for PowerPC 64 bit mode" ^ I am on Red Hat 7 on a ppc64 architecture. Is there a workaround available?
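Stock LuaJIT 2.0.x has no 64-bit PowerPC backend, so the #error is expected on ppc64. Two hedged options: build the supported 32-bit PPC port (requires a 32-bit toolchain and 32-bit glibc devel packages on the box), or look for a community ppc64 port of LuaJIT. A sketch of the first:

```bash
cd /tmp/LuaJIT-2.0.2
make clean
make CC="gcc -m32"     # build the 32-bit PPC port instead of ppc64
sudo make install PREFIX=/opt/osrm_infrastructure/LuaJIT-2.0.2
```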
lua, redhat, luajit, powerpc
3
798
1
https://stackoverflow.com/questions/29399081/installing-luajit-on-redhat-ppc64
28,823,639
How to automatically input ssh private key passphrase with pexpect
I have previously written a program for a Linux environment which automatically runs the SSHFS binary as a user and inputs a stored ssh private key passphrase (the public half is already on the remote server). I had this working with simple pexpect commands on one server (Ubuntu server 14.04, ssh version 6.6, sshfs version 2.5). But this single piece of the program is proving to be an issue now that the application has been moved to a Red Hat machine (RHEL6.5, ssh version 5.3, sshfs version 2.4). This simple step has been driving me crazy all day, so now I turn to this community for support. My original code (simplified) looked like this: proc = pexpect.spawn('sshfs %s@%s:%s...') #many options, unrelated proc.expect([pexpect.EOF, 'Enter passphrase for key.*', pexpect.TIMEOUT], timeout=30) if proc.match_index == 1: proc.sendline('thepassphrase') This runs as expected on Ubuntu but not on RHEL. I have also tried the fallback method of piping to subprocess, without much success either. proc = subprocess.Popen('sshfs %s@%s:%s...', shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) proc.stdin.write('thepassphrase'+'\n') proc.stdin.flush() Of course I have tried many slight variations of this without success, and of course the command runs fine when I run it manually. Update 3/3: I have today manually compiled and installed ssh 6.6 on RHEL to see if that was causing the issue, but the issue persists even with the new ssh binary. Update 3/9: Today I found one particular solution that works, but I am not happy that many other solutions did not, and I am still looking for the answer as to why. Here is the best I could do so far: proc = subprocess.check_call("sudo -H -u %s ssh-keygen -p -P %s -N '' -f %s" % (user, userKey['passphrase'], userKey['path']), shell=True) time.sleep(2) proc = subprocess.Popen(cmd, shell=True) proc.communicate() time.sleep(1) proc = subprocess.check_call("sudo -H -u %s ssh-keygen -p -P '' -N %s -f %s" % (user, userKey['passphrase'], userKey['path']), shell=True) This removes the passphrase from the key, mounts the drive, and then re-adds the passphrase. Obviously I don't like this solution, but it will have to do until I can get to the bottom of this. Update 3/23: Due to my stupidity I did not see the immediate problem with this method until now, and I am back to the drawing board. While this workaround does work the first time the connection is made, the -o reconnect obviously fails because sshfs does not know the passphrase to reconnect. This means that this solution is no longer viable, and I would really appreciate it if anyone knows how to get the pexpect version working.
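One hedged alternative that also survives -o reconnect (the failure mode in the 3/23 update): load the key into ssh-agent once, so neither sshfs nor its reconnects ever see a passphrase prompt. Paths and hostnames below are placeholders:

```bash
eval "$(ssh-agent -s)"            # start an agent for this session
ssh-add /path/to/private_key      # prompts once; can itself be driven by expect
sshfs user@remote:/export /mnt/point -o reconnect   # reconnects reuse the agent
```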
python, linux, ssh, redhat, sshfs
3
4,004
1
https://stackoverflow.com/questions/28823639/how-to-automatically-input-ssh-private-key-passphrase-with-pexpect
21,877,092
Five second wait in activemq cms MessageProducer.send with zookeeper
I'm testing ActiveMQ 5.9.0 with Replicated LevelDB. Running against a standalone ActiveMQ with a local LevelDB store, each producer.send(message) call takes about 1 ms. With my replicated setup of 3 zookeepers and 3 activemq brokers, producer.send(message) takes slightly more than 5 seconds to return! This happens even with sync="local_mem" in <replicatedLevelDB ... >. It's always just above 5 seconds, so there seems to be some strange wait/timeout involved. Does this ring a bell? It doesn't matter if I set brokerurl to failover:(<all three brokers>) or just tcp://brokerX, where brokerX is in the replicated LevelDB setup. There is no noticeable delay sending messages in the brokerX web ui (hawtio). If I change to tcp://brokerY, where brokerY is an otherwise identical broker with <persistenceAdapter ...> set to <levelDB...> instead of <replicatedLevelDB...>, we're down at 1 ms per send. Changing zookeeper tickTime etc. makes no difference. Debug log below. As you see, there are 5 seconds between the "sent to queue" lines, but zookeeper pings are quick. 2014-02-19 10:45:34,719 | DEBUG | Handling request for path /jolokia | io.hawt.web.AuthenticationFilter | qtp1217711018-227 2014-02-19 10:45:34,724 | DEBUG | localhost Message ID:<hostname>-57776-1392803129562-0:0:1:1:2 sent to queue://IO_stab_test_Q | org.apache.activemq.broker.region.Queue | ActiveMQ Transport: tcp:///<ip address>:54727@61616 2014-02-19 10:45:34,725 | DEBUG | IO_stab_test_Q toPageIn: 1, Inflight: 1, pagedInMessages.size 1, enqueueCount: 27, dequeueCount: 25 | org.apache.activemq.broker.region.Queue | ActiveMQ BrokerService[localhost] Task-20 2014-02-19 10:45:34,731 | DEBUG | Handling request for path /jolokia | io.hawt.web.AuthenticationFilter | qtp1217711018-222 2014-02-19 10:45:34,735 | DEBUG | Got ping response for sessionid: 0x244457fceb80003 after 0ms | org.apache.zookeeper.ClientCnxn | main-SendThread(<hostname>:2181) 2014-02-19 10:45:34,867 | DEBUG | Handling request for path /jolokia | io.hawt.web.AuthenticationFilter | qtp1217711018-222 2014-02-19 10:45:35,403 | DEBUG | Got ping response for sessionid: 0x244457fceb80003 after 0ms | org.apache.zookeeper.ClientCnxn | main-SendThread(<hostname>:2181) 2014-02-19 10:45:35,634 | DEBUG | Handling request for path /jolokia | io.hawt.web.AuthenticationFilter | qtp1217711018-227 2014-02-19 10:45:36,071 | DEBUG | Got ping response for sessionid: 0x244457fceb80003 after 0ms | org.apache.zookeeper.ClientCnxn | main-SendThread(<hostname>:2181) 2014-02-19 10:45:36,740 | DEBUG | Got ping response for sessionid: 0x244457fceb80003 after 0ms | org.apache.zookeeper.ClientCnxn | main-SendThread(<hostname>:2181) 2014-02-19 10:45:37,410 | DEBUG | Got ping response for sessionid: 0x244457fceb80003 after 0ms | org.apache.zookeeper.ClientCnxn | main-SendThread(<hostname>:2181) 2014-02-19 10:45:38,088 | DEBUG | Got ping response for sessionid: 0x244457fceb80003 after 8ms | org.apache.zookeeper.ClientCnxn | main-SendThread(<hostname>:2181) 2014-02-19 10:45:38,623 | DEBUG | Handling request for path /jolokia | io.hawt.web.AuthenticationFilter | qtp1217711018-222 2014-02-19 10:45:38,750 | DEBUG | Got ping response for sessionid: 0x244457fceb80003 after 0ms | org.apache.zookeeper.ClientCnxn | main-SendThread(<hostname>:2181) 2014-02-19 10:45:39,420 | DEBUG | Got ping response for sessionid: 0x244457fceb80003 after 0ms | org.apache.zookeeper.ClientCnxn | main-SendThread(<hostname>:2181) 2014-02-19 10:45:39,735 | DEBUG | localhost Message ID:<hostname>-57776-1392803129562-0:0:1:1:3 sent to queue://IO_stab_test_Q |
org.apache.activemq.broker.region.Queue | ActiveMQ Transport: tcp:///<ip address>:54727@61616 2014-02-19 10:45:39,737 | DEBUG | IO_stab_test_Q toPageIn: 1, Inflight: 2, pagedInMessages.size 2, enqueueCount: 28, dequeueCount: 25 | org.apache.activemq.broker.region.Queue | ActiveMQ BrokerService[localhost] Task-24 2014-02-19 10:45:40,090 | DEBUG | Got ping response for sessionid: 0x244457fceb80003 after 0ms | org.apache.zookeeper.ClientCnxn | main-SendThread(<hostname>:2181)
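Hedged diagnostics only, since the log alone doesn't pin the cause: double-check which sync mode the broker actually loaded, and rule out transport-level delayed-ACK/Nagle batching via the OpenWire wire-format option (option name per the OpenWire documentation; this is an experiment to try, not a confirmed fix):

```bash
grep -n 'replicatedLevelDB' conf/activemq.xml   # confirm sync="local_mem" is live
# Candidate experiment on the client-facing connector in activemq.xml:
#   <transportConnector name="openwire"
#       uri="tcp://0.0.0.0:61616?wireFormat.tcpNoDelayEnabled=true"/>
```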
activemq-classic, redhat, apache-zookeeper
3
562
1
https://stackoverflow.com/questions/21877092/five-second-wait-in-activemq-cms-messageproducer-send-with-zookeeper
20,812,368
mailx -r does not send email if sender field uses real domain
I cannot get the mailx -r option to set the sender "From" field correctly. echo "email text" | mail -s "test 123" -r donotreply@domain.com user@domain.com The email gets sent if the "-r" field is a fake domain. If it is a real domain, the email does not get sent. The username does not matter, only the domain name. Where should I check to fix this? RHEL6.4
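The -r flag only sets the envelope sender; whether the message actually leaves the box is decided by the local MTA, which may defer or reject senders in the real domain (relay rules, masquerading, or the remote side's anti-spoofing checks). A sketch of where to look on RHEL 6:

```bash
echo "email text" | mail -s "test 123" -r donotreply@domain.com user@domain.com
mailq                              # is the message stuck in the local queue?
sudo tail -n 50 /var/log/maillog   # the reject/defer reason is usually logged here
```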
redhat, sender, mailx
3
1,777
1
https://stackoverflow.com/questions/20812368/mailx-r-does-not-send-email-if-sender-field-uses-real-domain
20,110,033
Run script as daemon with proc::Daemon module
I've got a Perl script I'm writing for a school assignment that needs to run as a daemon and perform certain actions when it receives signals. I read this thread How can I run a Perl script as a system daemon in linux? and tried doing what the top reply suggested, but when I run my program I don't see a PID for it. Here are the basics of my current code: #!/usr/bin/perl use strict; use warnings; use Proc::Daemon; Proc::Daemon::Init; my $fname = "/tmp/filename.txt"; my $datafile; my @students; sub filefind {finds a filename } sub readData {reads text in file } sub createhash { makes hash out of data } sub printa {prints sorted data } sub alpha { sorts data } sub revalpha { sorts data } filefind(); readData(); $SIG{ USR1 } = \&alph; $SIG{ USR2 } = \&revalph;
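Two observations on the posted code, offered as hypotheses: the handlers reference \&alph and \&revalph while the subs are named alpha and revalpha, and after Proc::Daemon::Init the child exits as soon as the script's last statement runs, so there is no long-lived PID to see unless the daemon blocks (e.g. in a sleep loop). A quick way to check from the shell (the script name is illustrative):

```bash
perl daemon.pl                       # parent returns immediately by design
pgrep -f daemon.pl                   # is a daemonized child still alive?
kill -USR1 "$(pgrep -f daemon.pl)"   # exercise the USR1 handler if it is
```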
perl, daemon, redhat
3
928
1
https://stackoverflow.com/questions/20110033/run-script-as-daemon-with-procdaemon-module
9,300,522
Java runs really slow on CentOS minimal install, but fast on normal install
Using CentOS 6.2, both of these installations are on the same server: After doing a 'minimal' install Java programs run incredibly slow. After doing a 'software development workstation' install Java programs run at normal speed. Some information gathered so far: Enabling services not present in the minimal install, e.g., irqbalance , cpuspeed has not helped Have done benchmarks using Phoronix suite to test CPU/RAM/HD speed. These tests all run fine on both installs. Have done benchmarks using DaCapo suite (which is in Java). These tests all run terribly (that is, 5-50 times slower) on the minimal install. Have tried multiple versions of JRE: OpenJDK 6, Sun Java 6, Sun Java 7 Have updated to the latest packages with yum Have verified this slowdown multiple times on two different servers. Both servers use Xeon dual core processors, and have 16GB of RAM or more Anyone have any idea what could cause this?
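Hedged hypotheses often checked for this symptom on minimal installs: a different JVM being picked up, low entropy stalling SecureRandom, and unresolvable hostnames causing long lookups. A diagnostic sketch (app.jar is a placeholder):

```bash
java -version                                  # same JVM on both installs?
cat /proc/sys/kernel/random/entropy_avail      # very low values stall SecureRandom
getent hosts "$(hostname)"                     # hostname must resolve quickly
# If entropy is the bottleneck, this is a common A/B test:
java -Djava.security.egd=file:/dev/./urandom -jar app.jar
```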
java, linux, centos, redhat, dacapo
3
3,161
1
https://stackoverflow.com/questions/9300522/java-runs-really-slow-on-centos-minimal-install-but-fast-on-normal-install
2,995,987
migrating Solaris to RH: network latency issue, tcp window size &amp; other tcp parameters
I have a client/server app (Java) that I'm migrating from Solaris to RH Linux. since I started running it in RH, I noticed some issues related to latency. I managed to isolate the problem that looks like this: client sends 5 messages (32 bytes each) in a row (same application timestamp) to the server. server echos messages. client receives replies and prints round trip time for each msg. in Solaris, all is well: I get ALL 5 replies at the same time, roughly 80ms after having sent original messages (client & server are several thousands miles away from each other: my ping RTT is 80ms, all normal). in RH, first 3 messages are echoed normally (they arrive 80ms after they've been sent), however the following 2 arrive 80ms later (so total 160ms RTT). the pattern is always the same. clearly looked like a TCP problem. on my solaris box, I had previously configured the tcp stack with 2 specific options: disable nagle algorithm globally set tcp_deferred_acks_max to 0 on RH, it's not possible to disable nagle globally, but I disabled it on all of my apps' sockets (TCP_NODELAY). so I started playing with tcpdump (on the server machine), and compared both outputs: SOLARIS : 22 2.085645 client server TCP 56150 > 6006 [PSH, ACK] Seq=111 Ack=106 Win=66672 Len=22 "MSG_1 RCV" 23 2.085680 server client TCP 6006 > 56150 [ACK] Seq=106 Ack=133 Win=50400 Len=0 24 2.085908 client server TCP 56150 > 6006 [PSH, ACK] Seq=133 Ack=106 Win=66672 Len=22 "MSG_2 RCV" 25 2.085925 server client TCP 6006 > 56150 [ACK] Seq=106 Ack=155 Win=50400 Len=0 26 2.086175 client server TCP 56150 > 6006 [PSH, ACK] Seq=155 Ack=106 Win=66672 Len=22 "MSG_3 RCV" 27 2.086192 server client TCP 6006 > 56150 [ACK] Seq=106 Ack=177 Win=50400 Len=0 28 2.086243 server client TCP 6006 > 56150 [PSH, ACK] Seq=106 Ack=177 Win=50400 Len=21 "MSG_1 ECHO" 29 2.086440 client server TCP 56150 > 6006 [PSH, ACK] Seq=177 Ack=106 Win=66672 Len=22 "MSG_4 RCV" 30 2.086454 server client TCP 6006 > 56150 [ACK] Seq=127 Ack=199 Win=50400 Len=0 31 2.086659 server client TCP 6006 > 56150 [PSH, ACK] Seq=127 Ack=199 Win=50400 Len=21 "MSG_2 ECHO" 32 2.086708 client server TCP 56150 > 6006 [PSH, ACK] Seq=199 Ack=106 Win=66672 Len=22 "MSG_5 RCV" 33 2.086721 server client TCP 6006 > 56150 [ACK] Seq=148 Ack=221 Win=50400 Len=0 34 2.086947 server client TCP 6006 > 56150 [PSH, ACK] Seq=148 Ack=221 Win=50400 Len=21 "MSG_3 ECHO" 35 2.087196 server client TCP 6006 > 56150 [PSH, ACK] Seq=169 Ack=221 Win=50400 Len=21 "MSG_4 ECHO" 36 2.087500 server client TCP 6006 > 56150 [PSH, ACK] Seq=190 Ack=221 Win=50400 Len=21 "MSG_5 ECHO" 37 2.165390 client server TCP 56150 > 6006 [ACK] Seq=221 Ack=148 Win=66632 Len=0 38 2.166314 client server TCP 56150 > 6006 [ACK] Seq=221 Ack=190 Win=66588 Len=0 39 2.364135 client server TCP 56150 > 6006 [ACK] Seq=221 Ack=211 Win=66568 Len=0 REDHAT : 17 2.081163 client server TCP 55879 > 6006 [PSH, ACK] Seq=111 Ack=106 Win=66672 Len=22 "MSG_1 RCV" 18 2.081178 server client TCP 6006 > 55879 [ACK] Seq=106 Ack=133 Win=5888 Len=0 19 2.081297 server client TCP 6006 > 55879 [PSH, ACK] Seq=106 Ack=133 Win=5888 Len=21 "MSG_1 ECHO" 20 2.081711 client server TCP 55879 > 6006 [PSH, ACK] Seq=133 Ack=106 Win=66672 Len=22 "MSG_2 RCV" 21 2.081761 client server TCP 55879 > 6006 [PSH, ACK] Seq=155 Ack=106 Win=66672 Len=22 "MSG_3 RCV" 22 2.081846 server client TCP 6006 > 55879 [PSH, ACK] Seq=127 Ack=177 Win=5888 Len=21 "MSG_2 ECHO" 23 2.081995 server client TCP 6006 > 55879 [PSH, ACK] Seq=148 Ack=177 Win=5888 Len=21 "MSG_3 ECHO" 24 2.082011 client server TCP 55879 > 6006 
[PSH, ACK] Seq=177 Ack=106 Win=66672 Len=22 "MSG_4 RCV" 25 2.082362 client server TCP 55879 > 6006 [PSH, ACK] Seq=199 Ack=106 Win=66672 Len=22 "MSG_5 RCV" 26 2.082377 server client TCP 6006 > 55879 [ACK] Seq=169 Ack=221 Win=5888 Len=0 27 2.171003 client server TCP 55879 > 6006 [ACK] Seq=221 Ack=148 Win=66632 Len=0 28 2.171019 server client TCP 6006 > 55879 [PSH, ACK] Seq=169 Ack=221 Win=5888 Len=42 "MSG_4 ECHO + MSG_5 ECHO" 29 2.257498 client server TCP 55879 > 6006 [ACK] Seq=221 Ack=211 Win=66568 Len=0 So I got confirmation that things are not working correctly on RH: packet 28 is sent TOO LATE; it looks like the server is waiting for packet 27's ACK before doing anything. That seems to me the most likely reason. Then I realized that the "Win" values differ between the Solaris and RH dumps: 50400 on Solaris, only 5888 on RH. That's another hint. I read the documentation about the sliding window and buffer sizes, and played around with the rcvBuffer and sendBuffer on my sockets in Java, but never managed to change this 5888 value to anything else (I checked each time directly with tcpdump). Does anybody know how to do this? I'm having a hard time getting definitive information, as in some cases there's "auto-negotiation" that I might need to bypass, etc. I eventually managed to get only partially rid of my initial problem by setting the "tcp_slow_start_after_idle" parameter to 0 on RH, but it did not change the "win" value at all. The same problem was there for the first 4 groups of 5 messages, with TCP retransmission and TCP Dup ACK in tcpdump, then the problem disappeared altogether for all following groups of 5 messages. It doesn't seem like a very clean and/or generic solution to me. I'd really like to reproduce the exact same conditions under both OSes. I'll keep researching, but any help from TCP gurus would be greatly appreciated!
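For the stuck Win=5888: on Linux the advertised window is driven by receive-buffer autotuning, bounded by the tcp_rmem/tcp_wmem triplets, and the window scale is only negotiated at SYN time, so in Java setReceiveBufferSize() must be applied to the ServerSocket before bind/accept (or the Socket before connect) to have any effect. A sketch with illustrative sizes; tune to your bandwidth-delay product:

```bash
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem       # inspect min/default/max triplets
sudo sysctl -w net.ipv4.tcp_rmem="4096 262144 4194304"
sudo sysctl -w net.ipv4.tcp_wmem="4096 262144 4194304"
sudo sysctl -w net.ipv4.tcp_slow_start_after_idle=0   # as already tried above
```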
java, network-programming, tcp, solaris, redhat
3
1,243
1
https://stackoverflow.com/questions/2995987/migrating-solaris-to-rh-network-latency-issue-tcp-window-size-other-tcp-para
2,356,448
Linking against NPTL for pthread function pthread_condattr_setclock
I've written some pthread code that uses timed waits on a condition variable, but in order to ensure a relative wait I've set the condvar's clock type to CLOCK_MONOTONIC using pthread_condattr_setclock(). In order to compile and link pthread_condattr_setclock() on RHEL4, I've had to add -I/usr/include/nptl and -L/usr/lib/nptl to my gcc command line. My understanding is that the 2.6 kernel (which RHEL4 has) uses the NPTL pthread implementation by default, so why do I need to specify these paths explicitly to use this function? It's only this function that requires me to do this: if I leave it out, everything compiles and links fine without the extra paths specified (although the behaviour of the code is then incorrect).
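For reference, RHEL 4's default headers and link path keep LinuxThreads compatibility, while NPTL-only declarations live under the separate nptl tree, which is consistent with only this one function needing the extra paths. A sketch of the compile line plus a check of which implementation the runtime reports (the source filename is a placeholder):

```bash
gcc -I/usr/include/nptl -L/usr/lib/nptl -o timedwait timedwait.c -lpthread
getconf GNU_LIBPTHREAD_VERSION   # e.g. "NPTL 2.3.4" vs. a linuxthreads string
```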
posix, pthreads, redhat, rhel, nptl
3
1,410
1
https://stackoverflow.com/questions/2356448/linking-against-nptl-for-pthread-function-pthread-condattr-setclock
76,238,004
How to compile a Perl file to be executable on multiple architectures?
I'm trying to compile Perl so that it can run on both Arm architecture and x86_64 machines, using pp. I see from the documentation that pp has a -m or --multiarch option, which will compile a Perl file into a PAR that can be run on multiple architectures. -m, --multiarch Build a multi-architecture PAR file. Implies -p. -p, --par Create PAR archives only; do not package to a standalone binary. But as it says, this produces a PAR, not an executable. I don't know how to make this into an executable I can run on multiple architectures. So let's say I want to compile my simple hello.pl file: #!/usr/bin/perl use warnings; print("Bonjour!\n"); First I tried: pp -S -m -o hello hello.pl I thought that this would still convert hello.pl into a PAR and then convert the PAR into an executable, based on the example in the docs where it converts a .pl to a PAR, then a PAR into an executable in a single step: % pp -p file # Creates a PAR file, 'a.par' % pp -o hello a.par # Pack 'a.par' to executable 'hello' % pp -S -o hello file # Combine the two steps above But that's not what happens. The resulting output is still a ZIP: $ file hello hello: Zip archive data, at least v2.0 to extract How can I compile the .pl so that it results in an executable that runs on multiple architectures including Arm, not just x86? EDIT: The environment is RHEL 7 (Linux). I only want to run the Perl scripts on Linux, not Mac or Windows, but I want to target Linux machines running both Arm and x86_64 architectures.
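pp's packed executables embed a native stub, so a single binary cannot cover both x86_64 and Arm; the usual patterns are to pack once per target architecture, or to ship the multi-arch PAR and run it with each machine's own perl. A sketch (build host names are hypothetical):

```bash
# Pack per architecture, on (or for) each target:
ssh build-x86_64 'pp -o hello.x86_64 hello.pl'
ssh build-arm    'pp -o hello.arm    hello.pl'

# Or distribute the PAR itself and run it where PAR::Packer is installed:
pp -m -o hello.par hello.pl
parl hello.par
```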
linux, perl, redhat, multiarch, perl-packager
3
412
0
https://stackoverflow.com/questions/76238004/how-to-compile-a-perl-file-to-be-executable-on-multiple-architectures
74,632,627
/proc/interrupts not showing all irqs
I am working on a program that fetches all irqs in /proc/irq and does some parsing. I've realized however that there are a bunch of irq numbers in /proc/irq that are not listed in /proc/interrupts. For example, on my system, /proc/irq includes directories for irqs 1, 2, 3, however these irqs do not show up in /proc/interrupts. What is the reason for this?
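One likely explanation: /proc/irq/<n> exists for every interrupt descriptor the kernel has allocated, while /proc/interrupts normally shows only IRQs that have a registered handler (plus the arch-specific rows like NMI and LOC). A hedged one-liner to list the IRQ numbers that appear in /proc/irq but not in /proc/interrupts (the awk pattern assumes the usual "  N:" row layout):
comm -23 <(ls /proc/irq | grep -E '^[0-9]+$' | sort) \
         <(awk -F: '/^[ ]*[0-9]+:/ {gsub(/ /, "", $1); print $1}' /proc/interrupts | sort)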
linux, redhat, interrupt, procfs, irq
3
325
0
https://stackoverflow.com/questions/74632627/proc-interrupts-not-showing-all-irqs
72,301,204
ERROR: Could not build wheels for cx_Oracle, which is required to install pyproject.toml-based projects
I am trying to install cx_Oracle in Redhat Linux and facing the below error. I have tried many ways, downgrading the python from 3.9 to 3.8, upgrading setuptools. Nothing has resolved it. Collecting cx_Oracle Using cached cx_Oracle-8.3.0.tar.gz (363 kB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... done Building wheels for collected packages: cx_Oracle Building wheel for cx_Oracle (pyproject.toml) ... error error: subprocess-exited-with-error × Building wheel for cx_Oracle (pyproject.toml) did not run successfully. │ exit code: 1 ╰─> [13 lines of output] running bdist_wheel running build running build_ext building 'cx_Oracle' extension creating build creating build/temp.linux-aarch64-cpython-38 creating build/temp.linux-aarch64-cpython-38/odpi creating build/temp.linux-aarch64-cpython-38/odpi/src creating build/temp.linux-aarch64-cpython-38/src gcc -pthread -Wno-unused-result -Wsign-compare -DDYNAMIC_ANNOTATIONS_ENABLED=1 -DNDEBUG -O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong -grecord-gcc-switches -fasynchronous-unwind-tables -fstack-clash-protection -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong -grecord-gcc-switches -fasynchronous-unwind-tables -fstack-clash-protection -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong -grecord-gcc-switches -fasynchronous-unwind-tables -fstack-clash-protection -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DCXO_BUILD_VERSION=8.3.0 -Iodpi/include -Iodpi/src -I/usr/include/python3.8 -c odpi/src/dpiConn.c -o build/temp.linux-aarch64-cpython-38/odpi/src/dpiConn.o /tmp/pip-build-env-zlrr4oab/overlay/lib/python3.8/site-packages/setuptools/config/expand.py:144: UserWarning: File '/tmp/pip-install-apbhk3ix/cx-oracle_0d4f0759bf0349f8bee939c5b9282345/README.md' cannot be found warnings.warn(f"File {path!r} cannot be found") error: command 'gcc' failed: No such file or directory [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for cx_Oracle Failed to build cx_Oracle ERROR: Could not build wheels for cx_Oracle, which is required to install pyproject.toml-based projects What can I try next?
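The decisive line is error: command 'gcc' failed: No such file or directory; the wheel has to compile the bundled ODPI-C sources, so it needs a C compiler and the Python headers. A sketch of the usual fix (package names vary a little between RHEL releases):
sudo dnf install -y gcc python38-devel   # on yum-based releases: sudo yum install -y gcc python3-devel
python3 -m pip install cx_Oracle
# Alternative that avoids compiling at all: cx_Oracle's successor ships prebuilt
# wheels (including aarch64 in recent releases):
python3 -m pip install oracledb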
python, pip, redhat, cx-oracle
3
4,168
0
https://stackoverflow.com/questions/72301204/error-could-not-build-wheels-for-cx-oracle-which-is-required-to-install-pyproj
70,579,788
How to set up dependencies between two different user scope services in systemd?
I am using systemd to run Podman containers as services: Kafka via a kafka user, managing the service with systemctl start/stop --user kafka.service, and Zookeeper via a zookeeper user, managing the service with systemctl start/stop --user zookeeper.service. The unit files reside at /home/kafka/.config/systemd/user/podman-kafka-pod.service and /home/zookeeper/.config/systemd/user/podman-zookeeper-pod.service. Both are separate, user-scoped systemd services. What I'd like to achieve is a dependency on Zookeeper from Kafka: restart/start Zookeeper when Kafka is restarted/started. I am not sure it's possible to define such relations between user-scoped services. I tried adding: Requires=podman-zookeeper-pod.service Before=podman-zookeeper-pod.service to the Kafka service, but it fails with the following error: $ systemctl restart --user podman-kafka-pod Failed to restart podman-kafka-pod.service: Unit podman-zookeeper-pod.service not found. Fair enough: I cannot list the Zookeeper service from the kafka user and vice versa, which is expected, I think. Is it possible to set up dependencies between services run by different users in systemd? I could not find the answer on SO or in the documentation. Thanks in advance for a reply.
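Each user gets their own systemd instance, and unit dependencies can only reference units inside the same instance, so the kafka user's manager genuinely cannot see the zookeeper user's units. A hedged sketch of the common workaround: move both units to system scope (one shared manager) and run each under its own account with User=. The ExecStart below is a placeholder, not taken from the question; note also that a unit that must start after Zookeeper needs After=, since Before= means the opposite:
sudo tee /etc/systemd/system/podman-kafka-pod.service >/dev/null <<'EOF'
[Unit]
Description=Kafka pod (system scope)
Requires=podman-zookeeper-pod.service
After=podman-zookeeper-pod.service

[Service]
User=kafka
# ExecStart below is a placeholder; substitute the real podman command
ExecStart=/usr/bin/podman pod start kafka
EOF
sudo systemctl daemon-reload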
service, redhat, systemd, podman
3
160
0
https://stackoverflow.com/questions/70579788/how-to-set-up-dependencies-between-two-different-user-scope-services-in-systemd
68,194,796
setting value in cgroup's cpu.rt_runtime_us with ansible
I am setting a value in cpu.rt_runtime_us with an echo command, like: echo 950000 > /sys/fs/cgroup/cpu,cpuacct/user.slice/cpu.rt_runtime_us This is on a Red Hat OS. Is there a more elegant way to configure it? I'd like to use it in an Ansible playbook, and I don't like using a shell task that runs this command every time the playbook is run. I'd appreciate any proposal or good practice for this kind of setting from Ansible.
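A hedged sketch of a more Ansible-native, idempotent version (standard builtin modules; the playbook is wrapped in a heredoc only to keep the example self-contained). It reads the current value first so the write, which still has to be a shell redirect because the copy module's tempfile-and-rename approach fails on cgroupfs, only fires when the value differs:
cat > set_rt_runtime.yml <<'EOF'
- hosts: all
  become: true
  tasks:
    - name: Read current cpu.rt_runtime_us
      ansible.builtin.command: cat /sys/fs/cgroup/cpu,cpuacct/user.slice/cpu.rt_runtime_us
      register: rt_runtime
      changed_when: false

    - name: Set cpu.rt_runtime_us only when it differs
      ansible.builtin.shell: echo 950000 > /sys/fs/cgroup/cpu,cpuacct/user.slice/cpu.rt_runtime_us
      when: rt_runtime.stdout | trim != "950000"
EOF
ansible-playbook set_rt_runtime.yml
Bear in mind the value resets on reboot, so a boot-time unit or tuned profile may still be needed alongside this.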
ansible, redhat
3
1,171
0
https://stackoverflow.com/questions/68194796/setting-value-in-cgroups-cpu-rt-runtime-us-with-ansible
66,758,986
Installing Docker on Redhat AWS EC2 Instance (RHEL_7.9)
I have created a Red Hat EC2 instance in AWS. I am trying to install Jenkins as a Docker image inside that Red Hat EC2 instance. I am following the below URL to install Docker on AWS: [URL] But I am facing an issue after adding that repository; I guess yum is not able to reach it: Failed to set locale, defaulting to C Loaded plugins: amazon-id, search-disabled-repos [URL] [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. To address this issue please refer to the below knowledge base article [URL] If above article doesn't help to resolve this issue please open a ticket with Red Hat Support. One of the configured repositories failed (Docker CE Stable - x86_64), and yum doesn't have enough cached data to continue. At this point the only safe thing yum can do is fail. There are a few ways to work "fix" this: 1. Contact the upstream for the repository and get them to fix the problem. 2. Reconfigure the baseurl/etc. for the repository, to point to a working upstream. This is most often useful if you are using a newer distribution release than is supported by the repository (and the packages for the previous distribution release still work). 3. Run the command with the repository temporarily disabled yum --disablerepo=docker-ce-stable ... 4. Disable the repository permanently, so yum won't use it by default. Yum will then just ignore the repository until you permanently enable it again or use --enablerepo for temporary usage: yum-config-manager --disable docker-ce-stable or subscription-manager repos --disable=docker-ce-stable 5. Configure the failing repository to be skipped, if it is unavailable. Note that yum will try to contact the repo. when it runs most commands, so will have to try and fail each time (and thus. yum will be be much slower). If it is a very temporary problem though, this is often a nice compromise: yum-config-manager --save --setopt=docker-ce-stable.skip_if_unavailable=true failure: repodata/repomd.xml from docker-ce-stable: [Errno 256] No more mirrors to try. [URL] [Errno 14] HTTPS Error 404 - Not Found I tried running the following command after that error (just trial and error): yum-config-manager --save --setopt=docker-ce-stable.skip_if_unavailable=true But then yum cannot find the packages: No package docker-ce available. No package docker-ce-cli available. No package containerd.io available. Error: Nothing to do Can someone point me to documentation or a blog post for installing Docker on the Red Hat platform? I am using RHEL 7.9. Thanks in advance
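Docker CE publishes no RHEL 7 x86_64 repository, and on RHEL yum expands $releasever to "7Server", which turns the CentOS repo URLs into exactly this kind of 404. A hedged sketch of the widely used (not Red Hat-supported) workaround of borrowing the CentOS 7 packages:
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo sed -i 's/\$releasever/7/g' /etc/yum.repos.d/docker-ce.repo   # RHEL's "7Server" breaks the URLs
sudo yum install -y docker-ce docker-ce-cli containerd.io
sudo systemctl enable --now docker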
docker, amazon-ec2, redhat
3
531
0
https://stackoverflow.com/questions/66758986/installing-docker-on-redhat-aws-ec2-instancerhel-7-9
66,289,880
Airflow Openshift installation with Dockerfile
I tried to install Airflow via my own image from a public Docker Hub repository. It works perfectly locally, but when I tried to use it on OpenShift I got the error below: ERROR: Could not install packages due to an OSError: [Errno 13] Permission denied: '/.local' Check the permissions. My Dockerfile works on Windows and Ubuntu. # VERSION 2.0.0 # AUTHOR: Bruno # DESCRIPTION: Basic Airflow container FROM python:3.8-slim-buster LABEL maintainer="Bruno" # Never prompt the user for choices on installation/configuration of packages ENV DEBIAN_FRONTEND noninteractive ENV TERM linux COPY requirements.txt . RUN pip install --user -r requirements.txt --no-cache-dir # Airflow ARG AIRFLOW_VERSION=2.0.0 ARG AIRFLOW_USER_HOME=/usr/local/airflow ENV AIRFLOW_HOME=${AIRFLOW_USER_HOME} # Define en_US. ENV LANGUAGE en_US.UTF-8 ENV LANG en_US.UTF-8 ENV LC_ALL en_US.UTF-8 ENV LC_CTYPE en_US.UTF-8 ENV LC_MESSAGES en_US.UTF-8 # Disable noisy "Handling signal" log messages: # ENV GUNICORN_CMD_ARGS --log-level WARNING RUN set -ex \ && buildDeps=' \ freetds-dev \ libkrb5-dev \ libsasl2-dev \ libssl-dev \ libffi-dev \ libpq-dev \ git \ ' \ && apt-get update -yqq \ && apt-get upgrade -yqq \ && apt-get install -yqq --no-install-recommends \ $buildDeps \ freetds-bin \ build-essential \ default-libmysqlclient-dev \ apt-utils \ curl \ rsync \ netcat \ locales \ && sed -i 's/^# en_US.UTF-8 UTF-8$/en_US.UTF-8 UTF-8/g' /etc/locale.gen \ && locale-gen \ && update-locale LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 \ && useradd -ms /bin/bash -d ${AIRFLOW_USER_HOME} airflow \ && pip install -U pip setuptools wheel \ && pip install pytz \ && pip install pyOpenSSL \ && pip install ndg-httpsclient \ && pip install pyasn1 \ && pip install apache-airflow[crypto,celery,postgres,kubernetes,hive,jdbc,mysql,ssh${AIRFLOW_DEPS:+,}${AIRFLOW_DEPS}]==${AIRFLOW_VERSION} \ && pip install 'redis==3.2' \ && if [ -n "${PYTHON_DEPS}" ]; then pip install ${PYTHON_DEPS}; fi \ && apt-get purge --auto-remove -yqq $buildDeps \ && apt-get autoremove -yqq --purge \ && apt-get clean \ && rm -rf \ /var/lib/apt/lists/* \ /tmp/* \ /var/tmp/* \ /usr/share/man \ /usr/share/doc \ /usr/share/doc-base COPY entrypoint.sh /entrypoint.sh COPY airflow.cfg ${AIRFLOW_USER_HOME}/airflow.cfg RUN chown -R airflow: ${AIRFLOW_USER_HOME} EXPOSE 8080 5555 8793 USER airflow WORKDIR ${AIRFLOW_USER_HOME} ENTRYPOINT ["/entrypoint.sh"] CMD ["webserver"]
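OpenShift's default security context runs the container as a random UID that is not "airflow", has no passwd entry, and has no usable HOME, so pip install --user resolves $HOME to / and tries to create /.local. A hedged sketch of the usual Dockerfile adjustments (fragments to merge in after AIRFLOW_USER_HOME is defined, not a full replacement):
# drop --user so packages go into site-packages instead of $HOME/.local
RUN pip install --no-cache-dir -r requirements.txt
# give the random OpenShift UID a writable HOME
ENV HOME=${AIRFLOW_USER_HOME}
# the random UID always belongs to the root group; make the tree group-writable
RUN chgrp -R 0 ${AIRFLOW_USER_HOME} && chmod -R g=u ${AIRFLOW_USER_HOME}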
dockerfile, openshift, airflow, redhat, redhat-containers
3
1,194
1
https://stackoverflow.com/questions/66289880/airflow-openshift-installation-with-dockerfile
65,907,417
SQL Server crashes when trying to backup certificate
I am trying to create a backup of the certificate on the primary in order to create matching certificates on the secondary nodes of an availability group, using the following: CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'password123'; CREATE CERTIFICATE dbm_certificate WITH SUBJECT = 'dbm'; BACKUP CERTIFICATE dbm_certificate TO FILE = 'dbm_certificate.cer' WITH PRIVATE KEY ( FILE = 'dbm_certificate.pvk', ENCRYPTION BY PASSWORD = '123password' ); The master key and certificate creation succeed, but the BACKUP CERTIFICATE returns: A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.) This seems to be caused by a core dump from SQL Server, with the following log: This program has encountered a fatal error and cannot continue running at Tue Jan 26 12:25:05 2021 The following diagnostic information is available: Reason: 0x00000001 Signal: SIGABRT - Aborted (6) Stack: IP Function ---------------- -------------------------------------- 0000559b81252023 malloc_usable_size+0x9e103 0000559b81251afe malloc_usable_size+0x9dbde 0000559b8125111a malloc_usable_size+0x9d1fa 00007fbabc90d400 __restore_rt+0x0 00007fbabc90d387 gsignal+0x37 00007fbabc90ea78 abort+0x148 00007fbabb5ecc8f OpenSSLDie+0x1f 00007fbabb6aadcc bad_do_cipher+0x1c 00007fbabb6aafba EVP_EncryptUpdate+0xda 0000559b8120c8ce malloc_usable_size+0x589ae 0000559b8120c4c2 malloc_usable_size+0x585a2 0000559b811d60d5 malloc_usable_size+0x221b5 0000559b811d5d99 malloc_usable_size+0x21e79 Process: 11391 - sqlservr Thread: 11513 (application thread 0x1c4) Instance Id: 2b6201eb-eeba-4f82-8007-7e3ef630be1a Crash Id: 98fa6ae2-ac80-4454-839a-06ffba21260a Build stamp: 86f25b9af3192b748396bd75b5bf3eceb3e2e62a8c2271521d281f5a53463d38 Distribution: Red Hat Enterprise Linux Processors: 4 Total Memory: 8370020352 bytes Timestamp: Tue Jan 26 12:25:05 2021 Removing the WITH PRIVATE KEY clause allows the command to succeed, creating dbm_certificate in the data folder. This led me to believe the issue is with OpenSSL and the encryption of the private key. I have installed MSSQL 2019 on Red Hat 7.9 with OpenSSL 1.0.2k. I have created symlinks to OpenSSL in /opt/mssql/lib, as well as adding: [Service] Environment="LD_LIBRARY_PATH=/opt/mssql/lib" to the mssql-server service.
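The abort inside EVP_EncryptUpdate suggests sqlservr picked up an OpenSSL build it does not expect; a FIPS-restricted or mismatched libcrypto dies in exactly this way when asked to encrypt the private key. A hedged first check: see which SSL libraries the running process actually mapped, rather than which ones the symlinks point at:
sudo grep -E 'libssl|libcrypto' /proc/"$(pgrep -x sqlservr | head -n1)"/maps | awk '{print $NF}' | sort -u
openssl version              # the system default build
sysctl crypto.fips_enabled   # 1 means FIPS mode, a common trigger for OpenSSL aborts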
sql-server, openssl, redhat
3
189
0
https://stackoverflow.com/questions/65907417/sql-server-crashes-when-trying-to-backup-certificate
64,859,338
Trouble installing "xml2" on R (from Redhat)
I tried installing the packages "tidyverse" and "brms" in R on Red Hat Linux, but got an error message related to "xml2" for both. Example when trying to install the R package "tidyverse": * installing *source* package ‘xml2’ ... ** package ‘xml2’ successfully unpacked and MD5 sums checked ** using staged installation ERROR: 'configure' exists but is not executable -- see the 'R Installation and Administration Manual' * removing ‘/home/davidb/R/x86_64-redhat-linux-gnu-library/4.0/xml2’ ERROR: dependency ‘xml2’ is not available for package ‘rvest’ * removing ‘/home/davidb/R/x86_64-redhat-linux-gnu-library/4.0/rvest’ ERROR: dependencies ‘rvest’, ‘xml2’ are not available for package ‘tidyverse’ * removing ‘/home/davidb/R/x86_64-redhat-linux-gnu-library/4.0/tidyverse’ Based on what I found on the internet, I understood that xml2 depends on libxml2, which I installed with the following command: sudo yum install -y libcurl-devel openssl-devel libxml2-devel After that, when opening R and trying install.packages("xml2"), I still got an error message: * installing *source* package ‘xml2’ ... ** package ‘xml2’ successfully unpacked and MD5 sums checked ** using staged installation ERROR: 'configure' exists but is not executable -- see the 'R Installation and Administration Manual' * removing ‘/home/davidb/R/x86_64-redhat-linux-gnu-library/4.0/xml2’ For reference, my R version info: > R.Version() $platform [1] "x86_64-redhat-linux-gnu" $arch [1] "x86_64" $os [1] "linux-gnu" $system [1] "x86_64, linux-gnu" $status [1] "" $major [1] "4" $minor [1] "0.3" $year [1] "2020" $month [1] "10" $day [1] "10" $svn rev [1] "79318" $language [1] "R" $version.string [1] "R version 4.0.3 (2020-10-10)" $nickname [1] "Bunny-Wunnies Freak Out"
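ERROR: 'configure' exists but is not executable usually means R unpacked the source somewhere it is not allowed to execute from, typically a temp directory mounted noexec, rather than a missing system library. A hedged sketch for checking and working around that:
findmnt -T "${TMPDIR:-/tmp}" -o TARGET,OPTIONS   # look for "noexec" in OPTIONS
mkdir -p ~/rtmp
TMPDIR=~/rtmp R -e 'install.packages("xml2", repos = "https://cloud.r-project.org")'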
r, tidyverse, redhat, rvest, xml2
3
675
0
https://stackoverflow.com/questions/64859338/trouble-installing-xml2-on-r-from-redhat
64,538,200
Keycloak WFLYCTL0362: Capabilities required by resource
I'm trying to run the rh-sso-7/sso74-openshift-rhel8 container on OpenShift and it fails with the following: 13:03:20,612 ERROR [org.jboss.as.controller] (Controller Boot Thread) WFLYCTL0362: Capabilities required by resource '/subsystem=infinispan/cache-container=web/transport=jgroups' are not available: org.wildfly.clustering.jgroups.default-channel-factory; Possible registration points for this capability: /subsystem=jgroups 13:03:20,613 ERROR [org.jboss.as.controller] (Controller Boot Thread) WFLYCTL0362: Capabilities required by resource '/subsystem=infinispan/cache-container=server/transport=jgroups' are not available: org.wildfly.clustering.jgroups.default-channel-factory; Possible registration points for this capability: /subsystem=jgroups 13:03:20,614 ERROR [org.jboss.as.controller] (Controller Boot Thread) WFLYCTL0362: Capabilities required by resource '/subsystem=infinispan/cache-container=keycloak/transport=jgroups' are not available: org.wildfly.clustering.jgroups.default-channel-factory; Possible registration points for this capability: /subsystem=jgroups 13:03:20,614 ERROR [org.jboss.as.controller] (Controller Boot Thread) WFLYCTL0362: Capabilities required by resource '/subsystem=infinispan/cache-container=ejb/transport=jgroups' are not available: org.wildfly.clustering.jgroups.default-channel-factory; Possible registration points for this capability: /subsystem=jgroups 13:03:20,614 ERROR [org.jboss.as.controller] (Controller Boot Thread) WFLYCTL0362: Capabilities required by resource '/subsystem=infinispan/cache-container=hibernate/transport=jgroups' are not available: org.wildfly.clustering.jgroups.default-channel-factory; Possible registration points for this capability: /subsystem=jgroups 13:03:20,716 FATAL [org.jboss.as.server] (Controller Boot Thread) WFLYSRV0056: Server boot has failed in an unrecoverable manner; exiting. See previous messages for details. Please help.
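WFLYCTL0362 here means the infinispan cache containers reference a jgroups transport while the jgroups subsystem ended up without a default channel or stack, which usually points at a failed or overridden startup-configuration step in the image. A hedged way to inspect the effective configuration; "dc/sso" and the EAP path are the usual names for these images, so adjust them to your deployment:
oc debug dc/sso -- grep -B2 -A10 'urn:jboss:domain:jgroups' /opt/eap/standalone/configuration/standalone-openshift.xml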
openshift, single-sign-on, keycloak, redhat, redhat-containers
3
1,018
0
https://stackoverflow.com/questions/64538200/keycloak-wflyctl0362-capabilities-required-by-resource
62,593,479
An existing directory is not mounted with newly created LV in LVM Partition
Suppose there was an LV called lv_dbbackup under vg_root, mounted at /db_backup. Recently, for official purposes, I created a new VG called vg_backup, and by unmounting /db_backup from the previously created vg_root-lv_backup I want to use /db_backup for the newly created vg_backup-lv_backup. The problem occurred when I unmounted /db_backup from the existing vg_root-lv_backup and tried to mount vg_backup-lv_backup at /db_backup: it does not get mounted there. But when I create another directory, for example /test, and try to mount vg_backup-lv_backup on that newly created directory (/test or anything except /db_backup), it works properly. I'm using the ext4 file system. It's a mission-critical system, so I'm unable to change the /db_backup mount point; if I change it, it will be a nightmare for the database team.
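A hedged debugging sketch: when one specific mount point refuses a new device, the usual suspects are a stale /etc/fstab entry (systemd turns it into a .mount unit that can remount or unmount behind your back) or something still holding the directory:
findmnt --target /db_backup               # what, if anything, is mounted there right now
grep db_backup /etc/fstab                 # stale entry still pointing at the old LV?
systemctl status db_backup.mount          # fstab entries become .mount units on RHEL 7
lsof +D /db_backup 2>/dev/null | head     # processes still holding the old mount
mount /dev/mapper/vg_backup-lv_backup /db_backup && findmnt --target /db_backup
dmesg | tail -n 20                        # kernel-side reason if the mount failed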
linux, redhat, rhel7, rhel6, lvm
3
433
0
https://stackoverflow.com/questions/62593479/an-existing-directory-is-not-mounted-with-newly-created-lv-in-lvm-partition
61,946,688
JxBrowser 7.7 start timeout error, Red Hat Enterprise Linux Server release 7.6 (Maipo)
Linux: [root@localhost bin]# cat /etc/redhat-release Red Hat Enterprise Linux Server release 7.6 (Maipo) [root@localhost bin]# cat /proc/version Linux version 4.14.0-115.5.1.el7a.06.aarch64 (mockbuild@arm-buildhost1) (gcc version 4.8.5 20150623 (NeoKylin 4.8.5-36) (GCC)) #1 SMP Tue Jun 18 10:34:55 CST 2019 [root@localhost bin]# file /bin/bash /bin/bash: ELF 64-bit LSB executable, ARM aarch64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 3.7.0, BuildID[sha1]=8a346ec01d611062313a5a4ed2b0201ecc9d9fa1, stripped JxBrowser 7.7: I used this demo; line 55 is Browser browser = engine.newBrowser(); public static void main(String[] args) { Engine engine = Engine.newInstance( EngineOptions.newBuilder(OFF_SCREEN).build()); Browser browser = engine.newBrowser(); [root@localhost bin]# java -jar test.jar Exception in thread "main" com.teamdev.jxbrowser.navigation.TimeoutException: Failed to execute task withing 45 seconds. at com.teamdev.jxbrowser.navigation.internal.NavigationImpl.loadAndWait(NavigationImpl.java:248) at com.teamdev.jxbrowser.navigation.internal.NavigationImpl.loadUrlAndWait(NavigationImpl.java:105) at com.teamdev.jxbrowser.navigation.internal.NavigationImpl.loadUrlAndWait(NavigationImpl.java:82) at com.teamdev.jxbrowser.navigation.internal.NavigationImpl.loadUrlAndWait(NavigationImpl.java:74) at com.teamdev.jxbrowser.engine.internal.EngineImpl.newBrowser(EngineImpl.java:458) at com.pinnet.HelloWorld.main(HelloWorld.java:55) Linux logs from /var/log/messages: May 22 09:48:53 localhost dbus[8661]: [system] Activating via systemd: service name='org.bluez' unit='dbus-org.bluez.service' May 22 09:48:54 localhost abrt-hook-ccpp: Process 90562 (chromium) of user 0 killed by SIGABRT - dumping core May 22 09:48:54 localhost abrt-hook-ccpp: Process 90566 (chromium) of user 0 killed by SIGABRT - ignoring (repeated crash) May 22 09:48:54 localhost abrt-hook-ccpp: Process 90561 (chromium) of user 0 killed by SIGABRT - ignoring (repeated crash) May 22 09:48:54 localhost abrt-hook-ccpp: Process 90593 (chromium) of user 0 killed by SIGABRT - ignoring (repeated crash) May 22 09:48:55 localhost abrt-hook-ccpp: Process 90624 (chromium) of user 0 killed by SIGABRT - ignoring (repeated crash) May 22 09:48:55 localhost abrt-hook-ccpp: Process 90623 (chromium) of user 0 killed by SIGABRT - ignoring (repeated crash) May 22 09:48:56 localhost abrt-server: Duplicate: core backtrace May 22 09:48:56 localhost abrt-server: DUP_OF_DIR: /var/spool/abrt/ccpp-2020-05-21-16:55:06-33694 May 22 09:48:56 localhost abrt-server: Deleting problem directory ccpp-2020-05-22-09:48:54-90562 (dup of ccpp-2020-05-21-16:55:06-33694) May 22 09:48:56 localhost abrt-server: /bin/sh: reporter-mailx: 未找到命令 (command not found) May 22 09:49:18 localhost dbus[8661]: [system] Failed to activate service 'org.bluez': timed out
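Note the platform: the abrt lines show Chromium itself dying with SIGABRT on an aarch64 kernel, so it is worth confirming that the Chromium binaries JxBrowser unpacked actually match this architecture before digging further. A hedged triage sketch using the crash data abrt already recorded (the problem-directory path is the one from the log):
abrt-cli list                                                    # enumerate recorded crashes
sudo cat /var/spool/abrt/ccpp-2020-05-21-16:55:06-33694/reason
sudo file "$(sudo cat /var/spool/abrt/ccpp-2020-05-21-16:55:06-33694/executable)"
uname -m                                                         # compare against the binary's arch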
linux, arm, redhat, jxbrowser
3
378
0
https://stackoverflow.com/questions/61946688/jxbrower7-7-start-timeout-error-red-hat-enterprise-linux-server-release-7-6-mai
59,751,410
Unable to find the libnl RPM package to satisfy CrowdStrike Falcon Sensor dependency on RHEL 8
I have to install the falcon-sensor RPM package for CrowdStrike on a server, and it needs the libnl RPM package as a dependency. I can get it from [URL] as below and install it on RHEL 8: #dnf install [URL] but I do not want to download it from an RPM-finder website. I would like to get it from a Red Hat repository, the same way we do for other RPM packages (e.g. telnet): #yum install libnl Whenever I run the above command, I get the following error: No match for argument: libnl Error: Unable to find a match: libnl I have tried enabling the following RHEL 8 repositories: codeready-builder-for-rhel-8-rhui-rpms Red Hat CodeReady L enabled: 1,842 codeready-builder-for-rhel-8-rhui-source-rpms Red Hat CodeReady L enabled: 489 *epel Extra Packages for enabled: 4,401 rhel-8-appstream-rhui-rpms Red Hat Enterprise enabled: 8,420 rhel-8-baseos-rhui-rpms Red Hat Enterprise enabled: 3,378 rhel-8-baseos-rhui-source-rpms Red Hat Enterprise enabled: 779 rhui-client-config-server-8 Red Hat Update Infr enabled: 5 How can I get the libnl RPM by enabling repositories on RHEL 8?
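RHEL 8 ships the newer libnl3 in BaseOS, while the legacy v1 library that the plain "libnl" name refers to is not in the standard RHEL 8 repositories at all, which is why every repo comes back empty. A hedged sketch for confirming that and for checking what the sensor package really requires:
sudo dnf search libnl                          # typically shows libnl3, not libnl
sudo dnf provides '*/libnl.so.1'               # who, if anyone, ships the v1 soname
sudo dnf install -y libnl3                     # the library RHEL 8 does ship
rpm -qpR falcon-sensor-*.rpm | grep -i libnl   # what the sensor RPM actually depends on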
linux, package, redhat, rpm, yum
3
16,708
2
https://stackoverflow.com/questions/59751410/unable-to-find-the-libnl-rpm-package-to-satisfy-crowdstrike-falcon-sensor-depend
58,612,861
redhat pg_upgrade permission errors
Duplicate of an unanswered question: CentOS 7 pg_upgrade Permissions Errors. When trying to run pg_upgrade, it won't let me run the command as root; when not running as root, it can't access files due to permissions. How can this be solved? The command I'm running: /usr/pgsql-12/bin/pg_upgrade --old-bindir=/usr/pgsql-10/bin/ --new-bindir=/usr/pgsql-12/bin/ --old-datadir=/var/lib/pgsql/10/data/ --new-datadir=/var/lib/pgsql/12/data/ Error message when running the command as root: pg_upgrade: cannot be run as root Failure, exiting Error message when not running the command as root: could not open version file: /var/lib/pgsql/10/data/PG_VERSION Failure, exiting I also tried to run the command as the postgres user: su postgres /usr/pgsql-12/bin/pg_upgrade --old-bindir=/usr/pgsql-10/bin/ --new-bindir=/usr/pgsql-12/bin/ --old-datadir=/var/lib/pgsql/10/data/ --new-datadir=/var/lib/pgsql/12/data could not change directory to "/home/j.d": Permission denied could not open log file "pg_upgrade_internal.log": Permission denied Failure, exiting
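The last two errors give the cause away: su postgres keeps the invoking user's working directory (/home/j.d), which postgres can neither enter nor write to, and pg_upgrade writes its log files into the current directory. A sketch of the usual invocation:
sudo -iu postgres                   # login shell: starts in postgres's own $HOME
cd /var/lib/pgsql                   # any directory postgres can write to works
/usr/pgsql-12/bin/pg_upgrade \
  --old-bindir=/usr/pgsql-10/bin --new-bindir=/usr/pgsql-12/bin \
  --old-datadir=/var/lib/pgsql/10/data --new-datadir=/var/lib/pgsql/12/data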
postgresql, redhat, pg-upgrade
3
1,903
0
https://stackoverflow.com/questions/58612861/redhat-pg-upgrade-permission-errors
57,661,877
RequestDumperValve breaks POST request
I have a historic ColdFusion 9 server running on top of JBoss 5.1 on Scientific Linux 6.2. Every once in a while I see the error 2019-08-20 12:15:30,621 ERROR [org.apache.catalina.core.ContainerBase.[jboss.web].[localhost].[/REDACTED].[CfmServlet]] (ajp-0.0.0.0-8009-8) Servlet.service() for servlet CfmServlet threw exception javax.servlet.ServletException: ROOT CAUSE: java.lang.IllegalArgumentException at coldfusion.filter.FormScope.parseQueryString(FormScope.java:375) at coldfusion.filter.FormScope.parsePostData(FormScope.java:346) at coldfusion.filter.FormScope.fillForm(FormScope.java:296) at coldfusion.filter.FusionContext.SymTab_initForRequest(FusionContext.java:377) in the file /var/log/jboss/server.log. To find out what the problem is, I thought it should be possible to log the POST params that JBoss receives and prepares for ColdFusion. On the internet I read that I should go to the file /opt/jboss/server/default/deploy/jbossweb.sar/server.xml and uncomment the line <Valve className="org.apache.catalina.valves.RequestDumperValve" />. Now the params (cookie, header, POST etc.) are indeed logged into the server.log file, but the ColdFusion server no longer does its job. I open CFADMIN in the browser and enter the password; I'm not let in and, again, see the log-on page. The same is true for my application. I can see in the server.log file that the parameters (username and password) are correct; they are logged in clear text. There's a story on the internet that describes how the RequestDumperValve destroys a request by applying the wrong encoding. Does something like this happen to me? Are there other possibilities to log the POST params in JBoss?
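That matches RequestDumperValve's known behavior: to print the parameters it calls the request's getParameter methods, which parse and consume the POST body (using the container's default encoding) before the application sees it, so ColdFusion receives an already-drained form. A hedged alternative that leaves the request untouched is capturing the traffic on the wire, for example the AJP link the log shows on port 8009:
tcpdump -i any -s 0 -A 'tcp port 8009'            # print payloads live; form fields are readable ASCII
tcpdump -i any -s 0 -w post.pcap 'tcp port 8009'  # or capture to a file and inspect in Wireshark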
coldfusion, redhat, coldfusion-9, jboss5.x
3
145
0
https://stackoverflow.com/questions/57661877/requestdumpervalve-breaks-post-request
55,192,179
OpenJDK 1.8.0_202 with CentOS 7: libpng12.so.0: cannot open shared object file:
I'm using the latest OpenJDK release: $ ./jdk/jre/bin/java -version openjdk version "1.8.0_202" OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_202-b08) OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.202-b08, mixed mode) I'm getting the linkage error below: Exception in thread "main" java.lang.UnsatisfiedLinkError: /usr/local/apps/jdk/jre/lib/amd64/libfontmanager.so: libpng12.so.0: cannot open shared object file: No such file or directory at java.lang.ClassLoader$NativeLibrary.load(Native Method) at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1941) at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1845) at java.lang.Runtime.loadLibrary0(Runtime.java:870) at java.lang.System.loadLibrary(System.java:1122) at sun.font.FontManagerNativeLibrary$1.run(FontManagerNativeLibrary.java:61) at java.security.AccessController.doPrivileged(Native Method) at sun.font.FontManagerNativeLibrary.<clinit>(FontManagerNativeLibrary.java:32) at sun.java2d.xr.XRSurfaceData.initXRSurfaceData(XRSurfaceData.java:85) at sun.awt.X11GraphicsEnvironment$1.run(X11GraphicsEnvironment.java:137) at java.security.AccessController.doPrivileged(Native Method) at sun.awt.X11GraphicsEnvironment.<clinit>(X11GraphicsEnvironment.java:74) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at java.awt.GraphicsEnvironment.createGE(GraphicsEnvironment.java:103) at java.awt.GraphicsEnvironment.getLocalGraphicsEnvironment(GraphicsEnvironment.java:82) at sun.awt.X11.XToolkit.<clinit>(XToolkit.java:132) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at java.awt.Toolkit$2.run(Toolkit.java:860) at java.awt.Toolkit$2.run(Toolkit.java:855) at java.security.AccessController.doPrivileged(Native Method) at java.awt.Toolkit.getDefaultToolkit(Toolkit.java:854) at sun.swing.SwingUtilities2.getSystemMnemonicKeyMask(SwingUtilities2.java:2020) at javax.swing.plaf.basic.BasicLookAndFeel.initComponentDefaults(BasicLookAndFeel.java:1158) at javax.swing.plaf.metal.MetalLookAndFeel.initComponentDefaults(MetalLookAndFeel.java:431) at javax.swing.plaf.basic.BasicLookAndFeel.getDefaults(BasicLookAndFeel.java:148) at javax.swing.plaf.metal.MetalLookAndFeel.getDefaults(MetalLookAndFeel.java:1577) at javax.swing.UIManager.setLookAndFeel(UIManager.java:539) at javax.swing.UIManager.setLookAndFeel(UIManager.java:579) at javax.swing.UIManager.initializeDefaultLAF(UIManager.java:1349) at javax.swing.UIManager.initialize(UIManager.java:1459) at javax.swing.UIManager.maybeInitialize(UIManager.java:1426) at javax.swing.UIManager.getUI(UIManager.java:1006) at javax.swing.JPanel.updateUI(JPanel.java:126) at javax.swing.JPanel.<init>(JPanel.java:86) at javax.swing.JPanel.<init>(JPanel.java:109) at javax.swing.JPanel.<init>(JPanel.java:117) at javax.swing.JRootPane.createGlassPane(JRootPane.java:546) at javax.swing.JRootPane.<init>(JRootPane.java:366) at javax.swing.JApplet.createRootPane(JApplet.java:161) at javax.swing.JApplet.<init>(JApplet.java:149) Tested with the following OSs: Red Hat 7.4 CentOS 7.4 CentOS 7.0 I'm guessing installing libpng12.x86_64 would make it work. But is this normal or is there an issue with the latest release of OpenJDK? Thanks
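The error is explicit that this build's libfontmanager.so is linked against the legacy libpng 1.2 soname, and RHEL/CentOS 7 ship that as a separate compat package, so installing it is the expected fix rather than a workaround. A quick sketch:
sudo yum install -y libpng12
ldd /usr/local/apps/jdk/jre/lib/amd64/libfontmanager.so | grep -i png   # should now resolve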
java, centos, redhat
3
2,524
2
https://stackoverflow.com/questions/55192179/openjdk-1-8-0-202-with-centos-7-libpng12-so-0-cannot-open-shared-object-file
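A note on the OpenJDK question above: before pulling packages in, it is worth confirming that libpng12 really is the only unresolved dependency of libfontmanager.so. The sketch below shells out to ldd (present on any RHEL/CentOS box) and prints the missing libraries; the library path is derived from java.home and may need adjusting for a non-standard layout.

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class LddCheck {
    public static void main(String[] args) throws Exception {
        // Point this at your JRE's libfontmanager.so; the amd64 subdirectory
        // is where JDK 8 JREs keep it on 64-bit Linux.
        String lib = System.getProperty("java.home") + "/lib/amd64/libfontmanager.so";
        Process p = new ProcessBuilder("ldd", lib).redirectErrorStream(true).start();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                if (line.contains("not found")) {
                    System.out.println("Missing dependency: " + line.trim());
                }
            }
        }
        p.waitFor();
    }
}

On CentOS 7 the usual fix is indeed yum install libpng12, a compatibility package that exists precisely because many prebuilt binaries still link against the older soname.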
54,227,180
OpenLDAP does not validate TLS certificate
I am trying to run OpenLDAP (2.4.44 on RedHat 7.6) as a client against an existing LDAP server with TLS. This is working well - too well, actually. It looks to me as if OpenLDAP accepts any server certificate, instead of validating it against the CAs I provided. Here is my ldap.conf file: TLS_CACERT /etc/openldap/cacerts/ldap-2019.pem TLS_REQCERT demand URI ldaps://ldap.mydomain.com/ BASE ou=people,dc=mydomain,dc=com # Some optimizations suggested by # [URL] set_cachesize 0 268435456 1 set_lg_regionmax 262144 set_lg_bsize 2097152 What I want to accomplish is of course that OpenLDAP validates the certificate for ldaps://ldap.mydomain.com against the list of CAs in TLS_CACERT. But in reality, no matter what I put into the TLS_CACERT file, OpenLDAP seems to connect successfully, just as long as it is a valid PEM file. What am I missing? Is there a second list of CAs that OpenLDAP consults? I also removed the CAs in /etc/pki/tls, just in case. More details: ldapsearch -x uid=somename fails if I delete the file I specified in TLS_CACERT. It also fails if TLS_CACERT is not a valid PEM file. This is of course expected behavior when the client cannot validate a TLS certificate: ldapsearch -x uid=somename ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1) But if I put a completely nonsensical certificate into the TLS_CACERT file, ldapsearch will return a result as if the server's certificate were valid. ldapsearch -x uid=somename # extended LDIF # # LDAPv3 # base <ou=people,dc=mydomain,dc=com> (default) with scope subtree # filter: uid=somename # requesting: ALL # # somename, People, mydomain.com dn: uid=somename,ou=People,dc=mydomain,dc=com ... For example, I tried using a certificate for www.google.com as the TLS_CACERT. I would have expected this connection to fail with the same "Can't contact LDAP server" error. Update: I found the cause but not the solution. OpenLDAP uses the certificate bundle in /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem . Renaming this file causes ldapsearch to behave as I expected, but of course that is not an acceptable solution, since this is a systemwide file, not just for OpenLDAP. So my new question is: how do I prevent OpenLDAP from using this file? Update 2: For clarification, this is on RedHat 7.6, and OpenLDAP 2.4.44. I assume that using the systemwide CA bundle is a RedHat modification to the stock OpenLDAP.
OpenLDAP does not validate TLS certificate I am trying to run OpenLDAP (2.4.44 on RedHat 7.6) as a client against an existing LDAP server with TLS. This is working well - too well, actually. It looks to me as if OpenLDAP accepts any server certificate, instead of validating it against the CAs I provided. Here is my ldap.conf file: TLS_CACERT /etc/openldap/cacerts/ldap-2019.pem TLS_REQCERT demand URI ldaps://ldap.mydomain.com/ BASE ou=people,dc=mydomain,dc=com # Some optimizations suggested by # [URL] set_cachesize 0 268435456 1 set_lg_regionmax 262144 set_lg_bsize 2097152 What I want to accomplish is of course that OpenLDAP validates the certificate for ldaps://ldap.mydomain.com against the list of CAs in TLS_CACERT. But in reality, no matter what I put into the TLS_CACERT file, OpenLDAP seems to connect successfully, just as long as it is a valid PEM file. What am I missing? Is there a second list of CAs that OpenLDAP consults? I also removed the CAs in /etc/pki/tls, just in case. More details: ldapsearch -x uid=somename fails if I delete the file I specified in TLS_CACERT. It also fails if TLS_CACERT is not a valid PEM file. This is of course expected behavior when the client cannot validate a TLS certificate: ldapsearch -x uid=somename ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1) But if I put a completely nonsensical certificate into the TLS_CACERT file, ldapsearch will return a result as if the server's certificate were valid. ldapsearch -x uid=somename # extended LDIF # # LDAPv3 # base <ou=people,dc=mydomain,dc=com> (default) with scope subtree # filter: uid=somename # requesting: ALL # # somename, People, mydomain.com dn: uid=somename,ou=People,dc=mydomain,dc=com ... For example, I tried using a certificate for www.google.com as the TLS_CACERT. I would have expected this connection to fail with the same "Can't contact LDAP server" error. Update: I found the cause but not the solution. OpenLDAP uses the certificate bundle in /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem . Renaming this file causes ldapsearch to behave as I expected, but of course that is not an acceptable solution, since this is a systemwide file, not just for OpenLDAP. So my new question is: how do I prevent OpenLDAP from using this file? Update 2: For clarification, this is on RedHat 7.6, and OpenLDAP 2.4.44. I assume that using the systemwide CA bundle is a RedHat modification to the stock OpenLDAP.
ssl, redhat, openldap
3
2,629
0
https://stackoverflow.com/questions/54227180/openldap-does-not-validate-tls-certificate
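One way to double-check the server certificate independently of the OpenLDAP client (which, as the update above notes, silently falls back to the Red Hat systemwide bundle) is to connect from a JVM, which trusts only the keystore you hand it. This is a diagnostic sketch, not a fix for ldap.conf; the hostname and the JKS truststore path are placeholders, and you would first import the ldap-2019.pem CA into that JKS with keytool.

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

public class LdapsTrustCheck {
    public static void main(String[] args) throws Exception {
        // The JVM validates the server chain ONLY against this store, so a
        // handshake failure here proves the server cert does not chain to
        // the CAs you supplied.
        System.setProperty("javax.net.ssl.trustStore", "/etc/openldap/cacerts/ldap-2019.jks");
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldaps://ldap.mydomain.com:636");
        DirContext ctx = new InitialDirContext(env);
        System.out.println("TLS handshake and anonymous bind succeeded");
        ctx.close();
    }
}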
49,480,315
How do I check user command history with TIME / Linux?
How do I check WHEN a user issued a command? I can see a command being issued in .bash_history , but I'd like to know WHEN it was issued. I'm aware of the export HISTTIMEFORMAT='%F %T ' command, but that only records timestamps for commands issued AFTER it is set.
How do I check user command history with TIME / Linux? How do I check WHEN a user issued a command? I can see a command being issued in .bash_history , but I'd like to know WHEN it was issued. I'm aware of the export HISTTIMEFORMAT='%F %T ' command, but that only records timestamps for commands issued AFTER it is set.
bash, centos, redhat
3
1,578
1
https://stackoverflow.com/questions/49480315/how-do-i-check-user-command-history-with-time-linux
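Once HISTTIMEFORMAT is in place, bash prefixes each saved command in .bash_history with a #<epoch-seconds> comment line. Below is a small sketch that pairs those timestamps with the commands; entries written before the option was enabled simply show up without a timestamp, which is exactly the limitation the question describes.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.time.Instant;
import java.util.List;

public class HistoryTimestamps {
    public static void main(String[] args) throws IOException {
        // Lines like "#1521234567" carry the timestamp of the command that
        // follows them; anything else is the command text itself.
        List<String> lines = Files.readAllLines(
                Paths.get(System.getProperty("user.home"), ".bash_history"));
        Instant when = null;
        for (String line : lines) {
            if (line.matches("#\\d+")) {
                when = Instant.ofEpochSecond(Long.parseLong(line.substring(1)));
            } else {
                System.out.println((when == null ? "(no timestamp)  " : when + "  ") + line);
                when = null;
            }
        }
    }
}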
48,409,354
How to detect the multiple webservers running in an auto-scaling environment on AWS?
Is there any way to detect the number of EC2 web servers running by checking the /var/log directory on a Red Hat 6 machine? What I found on the internet suggests using CloudWatch to monitor logs with a script, but I don't know how to use it. How should I detect the number of EC2 web servers without using CloudWatch?
How to detect the multiple webservers running in an auto-scaling environment on AWS? Is there any way to detect the number of EC2 web servers running by checking the /var/log directory on a Red Hat 6 machine? What I found on the internet suggests using CloudWatch to monitor logs with a script, but I don't know how to use it. How should I detect the number of EC2 web servers without using CloudWatch?
amazon-web-services, amazon-ec2, redhat
3
49
0
https://stackoverflow.com/questions/48409354/how-to-detect-the-multiple-webservers-running-in-an-auto-scaling-environment-on
47,806,322
lock-on-active not working as expected
Rules are not fired even once when lock-on-active is set to true. Should they be fired once? I expect Rule 1 to be fired once when using lock-on-active. (Note: I have added the code used to execute the rules.) Rules rule "Rule 1" lock-on-active true ruleflow-group "Group A" when $c: Product() then System.out.println("Rule 1"); modify($c) { setAmount(1); } end rule "Rule 2" lock-on-active true ruleflow-group "Group A" when $c: Product() then System.out.println("Rule 2"); modify($c){ setAmount($c.getAmount()+1) } end Code for executing the rules KieServices kieServices=KieServices.Factory.get(); KieContainer kieContainer=kieServices.getKieClasspathContainer(); KieSession kieSession=kieContainer.newKieSession("ksession-lockOnActive"); Product product=new Product(); product.setName("Book"); product.setAmount(5); ((InternalAgenda)kieSession.getAgenda()).activateRuleFlowGroup("Group A"); kieSession.insert(product); kieSession.fireAllRules(); kieSession.dispose();
lock-on-active not working as expected Rules are not fired even once when lock-on-active is set to true. Should they be fired once? I expect Rule 1 to be fired once when using lock-on-active. (Note: I have added the code used to execute the rules.) Rules rule "Rule 1" lock-on-active true ruleflow-group "Group A" when $c: Product() then System.out.println("Rule 1"); modify($c) { setAmount(1); } end rule "Rule 2" lock-on-active true ruleflow-group "Group A" when $c: Product() then System.out.println("Rule 2"); modify($c){ setAmount($c.getAmount()+1) } end Code for executing the rules KieServices kieServices=KieServices.Factory.get(); KieContainer kieContainer=kieServices.getKieClasspathContainer(); KieSession kieSession=kieContainer.newKieSession("ksession-lockOnActive"); Product product=new Product(); product.setName("Book"); product.setAmount(5); ((InternalAgenda)kieSession.getAgenda()).activateRuleFlowGroup("Group A"); kieSession.insert(product); kieSession.fireAllRules(); kieSession.dispose();
jboss, drools, redhat, business-process-management
3
588
1
https://stackoverflow.com/questions/47806322/lock-on-active-not-working-as-expected
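A likely reading of the Drools question above: the group is given focus before the fact is inserted, so by the time the insert creates the activations the group is already active, and lock-on-active discards them - hence nothing fires. Below is a minimal reordering sketch, assuming a Drools 6+ runtime where ruleflow-groups and agenda-groups are unified (which also makes the InternalAgenda cast unnecessary); Product is the fact class from the question.

import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

public class RunRules {
    public static void main(String[] args) {
        KieServices kieServices = KieServices.Factory.get();
        KieContainer kieContainer = kieServices.getKieClasspathContainer();
        KieSession kieSession = kieContainer.newKieSession("ksession-lockOnActive");

        Product product = new Product();
        product.setName("Book");
        product.setAmount(5);

        // Insert the fact BEFORE giving the group focus: lock-on-active only
        // blocks activations created while the group is already active, so
        // the ordering decides whether the rules ever fire.
        kieSession.insert(product);
        kieSession.getAgenda().getAgendaGroup("Group A").setFocus();
        kieSession.fireAllRules();
        kieSession.dispose();
    }
}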
46,036,461
flask-wtf: TypeError: b'ab37f9dc28822383e290c6fc1188c39f2ab7ff97' is not JSON serializable
I am trying to run my Flask application on a Red Hat Linux machine (Python 3.4) using Apache and CGI. However I am getting this error: Traceback (most recent call last): File "/usr/lib64/python3.4/site-packages/flask/app.py", line 1982, in wsgi_app response = self.full_dispatch_request() File "/usr/lib64/python3.4/site-packages/flask/app.py", line 1614, in full_dispatch_request rv = self.handle_user_exception(e) File "/usr/lib64/python3.4/site-packages/flask/app.py", line 1517, in handle_user_exception reraise(exc_type, exc_value, tb) File "/usr/lib64/python3.4/site-packages/flask/_compat.py", line 33, in reraise raise value File "/usr/lib64/python3.4/site-packages/flask/app.py", line 1612, in full_dispatch_request rv = self.dispatch_request() File "/usr/lib64/python3.4/site-packages/flask/app.py", line 1598, in dispatch_request return self.view_functions[rule.endpoint](**req.view_args) File "/usr/lib64/python3.4/site-packages/flask_login/utils.py", line 228, in decorated_view return func(*args, **kwargs) File "/var/www/html/efa-mobile2/mct/app/views.py", line 55, in content_home filter_form = Filter_Form() File "/usr/lib/python3.4/site-packages/wtforms/form.py", line 212, in __call__ return type.__call__(cls, *args, **kwargs) File "/usr/lib64/python3.4/site-packages/flask_wtf/form.py", line 88, in __init__ super(FlaskForm, self).__init__(formdata=formdata, **kwargs) File "/usr/lib/python3.4/site-packages/wtforms/form.py", line 278, in __init__ self.process(formdata, obj, data=data, **kwargs) File "/usr/lib/python3.4/site-packages/wtforms/form.py", line 132, in process field.process(formdata) File "/usr/lib/python3.4/site-packages/wtforms/csrf/core.py", line 43, in process self.current_token = self.csrf_impl.generate_csrf_token(self) File "/usr/lib64/python3.4/site-packages/flask_wtf/csrf.py", line 134, in generate_csrf_token token_key=self.meta.csrf_field_name File "/usr/lib64/python3.4/site-packages/flask_wtf/csrf.py", line 47, in generate_csrf setattr(g, field_name, s.dumps(session[field_name])) File "/usr/lib/python3.4/site-packages/itsdangerous.py", line 565, in dumps payload = want_bytes(self.dump_payload(obj)) File "/usr/lib/python3.4/site-packages/itsdangerous.py", line 847, in dump_payload json = super(URLSafeSerializerMixin, self).dump_payload(obj) File "/usr/lib/python3.4/site-packages/itsdangerous.py", line 550, in dump_payload return want_bytes(self.serializer.dumps(obj)) return json.dumps(obj, separators=(',', ':')) File "/usr/lib64/python3.4/json/__init__.py", line 237, in dumps **kw).encode(obj) File "/usr/lib64/python3.4/json/encoder.py", line 192, in encode chunks = self.iterencode(o, _one_shot=True) File "/usr/lib64/python3.4/json/encoder.py", line 250, in iterencode return _iterencode(o, 0) File "/usr/lib64/python3.4/json/encoder.py", line 173, in default raise TypeError(repr(o) + " is not JSON serializable") TypeError: b'ab37f9dc28822383e290c6fc1188c39f2ab7ff97' is not JSON serializable I installed the Flask application on another Linux server (Debian 8) using the same settings etc. and it works perfectly. Can anyone help me with that?
flask-wtf: TypeError: b'ab37f9dc28822383e290c6fc1188c39f2ab7ff97' is not JSON serializable I am trying to run my Flask application on a Red Hat Linux machine (Python 3.4) using Apache and CGI. However I am getting this error: Traceback (most recent call last): File "/usr/lib64/python3.4/site-packages/flask/app.py", line 1982, in wsgi_app response = self.full_dispatch_request() File "/usr/lib64/python3.4/site-packages/flask/app.py", line 1614, in full_dispatch_request rv = self.handle_user_exception(e) File "/usr/lib64/python3.4/site-packages/flask/app.py", line 1517, in handle_user_exception reraise(exc_type, exc_value, tb) File "/usr/lib64/python3.4/site-packages/flask/_compat.py", line 33, in reraise raise value File "/usr/lib64/python3.4/site-packages/flask/app.py", line 1612, in full_dispatch_request rv = self.dispatch_request() File "/usr/lib64/python3.4/site-packages/flask/app.py", line 1598, in dispatch_request return self.view_functions[rule.endpoint](**req.view_args) File "/usr/lib64/python3.4/site-packages/flask_login/utils.py", line 228, in decorated_view return func(*args, **kwargs) File "/var/www/html/efa-mobile2/mct/app/views.py", line 55, in content_home filter_form = Filter_Form() File "/usr/lib/python3.4/site-packages/wtforms/form.py", line 212, in __call__ return type.__call__(cls, *args, **kwargs) File "/usr/lib64/python3.4/site-packages/flask_wtf/form.py", line 88, in __init__ super(FlaskForm, self).__init__(formdata=formdata, **kwargs) File "/usr/lib/python3.4/site-packages/wtforms/form.py", line 278, in __init__ self.process(formdata, obj, data=data, **kwargs) File "/usr/lib/python3.4/site-packages/wtforms/form.py", line 132, in process field.process(formdata) File "/usr/lib/python3.4/site-packages/wtforms/csrf/core.py", line 43, in process self.current_token = self.csrf_impl.generate_csrf_token(self) File "/usr/lib64/python3.4/site-packages/flask_wtf/csrf.py", line 134, in generate_csrf_token token_key=self.meta.csrf_field_name File "/usr/lib64/python3.4/site-packages/flask_wtf/csrf.py", line 47, in generate_csrf setattr(g, field_name, s.dumps(session[field_name])) File "/usr/lib/python3.4/site-packages/itsdangerous.py", line 565, in dumps payload = want_bytes(self.dump_payload(obj)) File "/usr/lib/python3.4/site-packages/itsdangerous.py", line 847, in dump_payload json = super(URLSafeSerializerMixin, self).dump_payload(obj) File "/usr/lib/python3.4/site-packages/itsdangerous.py", line 550, in dump_payload return want_bytes(self.serializer.dumps(obj)) return json.dumps(obj, separators=(',', ':')) File "/usr/lib64/python3.4/json/__init__.py", line 237, in dumps **kw).encode(obj) File "/usr/lib64/python3.4/json/encoder.py", line 192, in encode chunks = self.iterencode(o, _one_shot=True) File "/usr/lib64/python3.4/json/encoder.py", line 250, in iterencode return _iterencode(o, 0) File "/usr/lib64/python3.4/json/encoder.py", line 173, in default raise TypeError(repr(o) + " is not JSON serializable") TypeError: b'ab37f9dc28822383e290c6fc1188c39f2ab7ff97' is not JSON serializable I installed the Flask application on another Linux server (Debian 8) using the same settings etc. and it works perfectly. Can anyone help me with that?
python, linux, typeerror, redhat, flask-wtforms
3
982
0
https://stackoverflow.com/questions/46036461/flask-wtf-typeerror-bab37f9dc28822383e290c6fc1188c39f2ab7ff97-is-not-json-se
45,459,319
Sum of RSS memory in ps less than memory actually used
We have two machines with identical configuration and use (we have two balanced Siebel application servers on them). Normally, we have very similar RAM usage on them (around 7 GB). Recently, we've had a sudden increase of RAM usage in only one of them, and now we have close to 14 GB of RAM utilization on that machine. So, for very similar boxes, we have one of them using 7 GB of RAM while the other one is consuming 14 GB. Now, using the ps aux command to determine which process is using all this additional memory, we see that memory consumption is very similar on both machines. Somehow, we don't see any process that's using those additional 7 GB of RAM. Let's see: Machine 1: total used free shared buffers cached Mem: 15943 15739 204 0 221 1267 -/+ buffers/cache: 14249 1693 Swap: 8191 0 8191 So, we have 14249 MB of RAM in use. Machine 2: total used free shared buffers cached Mem: 15943 15636 306 0 962 6409 -/+ buffers/cache: 8264 7678 Swap: 8191 0 8191 So, we have 8264 MB of RAM in use. I assume the sum of the Resident Set Size memory reported by ps should be equal to or bigger than this value. According to this answer , RSS is how much memory is allocated to the process and is in RAM (including memory from shared libraries). We don't have any memory in SWAP. However: Machine 1: ps aux | awk 'BEGIN {sum=0} {sum +=$6} END {print sum/1024}' 8357.08 8357.08 < 14249 -> NOK! Machine 2: ps aux | awk 'BEGIN {sum=0} {sum +=$6} END {print sum/1024}' 8468.63 8468.63 > 8264 -> OK What am I getting wrong? How can I find where this "missing" memory is? Thank you in advance
Sum of RSS memory in ps less than memory actually used We have two machines with identical configuration and use (we have two balanced Siebel application servers on them). Normally, we have very similar RAM usage on them (around 7 GB). Recently, we've had a sudden increase of RAM usage in only one of them, and now we have close to 14 GB of RAM utilization on that machine. So, for very similar boxes, we have one of them using 7 GB of RAM while the other one is consuming 14 GB. Now, using the ps aux command to determine which process is using all this additional memory, we see that memory consumption is very similar on both machines. Somehow, we don't see any process that's using those additional 7 GB of RAM. Let's see: Machine 1: total used free shared buffers cached Mem: 15943 15739 204 0 221 1267 -/+ buffers/cache: 14249 1693 Swap: 8191 0 8191 So, we have 14249 MB of RAM in use. Machine 2: total used free shared buffers cached Mem: 15943 15636 306 0 962 6409 -/+ buffers/cache: 8264 7678 Swap: 8191 0 8191 So, we have 8264 MB of RAM in use. I assume the sum of the Resident Set Size memory reported by ps should be equal to or bigger than this value. According to this answer , RSS is how much memory is allocated to the process and is in RAM (including memory from shared libraries). We don't have any memory in SWAP. However: Machine 1: ps aux | awk 'BEGIN {sum=0} {sum +=$6} END {print sum/1024}' 8357.08 8357.08 < 14249 -> NOK! Machine 2: ps aux | awk 'BEGIN {sum=0} {sum +=$6} END {print sum/1024}' 8468.63 8468.63 > 8264 -> OK What am I getting wrong? How can I find where this "missing" memory is? Thank you in advance
linux, memory, memory-management, redhat, ps
3
3,725
1
https://stackoverflow.com/questions/45459319/sum-of-rss-memory-in-ps-less-than-memory-actually-used
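To see where the ps-based sum comes from - and why it can legitimately be smaller than what free reports - here is a sketch that reproduces the awk pipeline by summing VmRSS over /proc. Memory the kernel holds directly (slab caches, Shmem, huge pages, socket buffers; see /proc/meminfo) belongs to no process RSS, so a large Slab or SUnreclaim value on "Machine 1" would be the first thing to check.

import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class RssSum {
    public static void main(String[] args) throws IOException {
        // Sums VmRSS over all /proc/<pid>/status files, i.e. what `ps` adds up.
        long totalKb = 0;
        try (DirectoryStream<Path> procs = Files.newDirectoryStream(Paths.get("/proc"), "[0-9]*")) {
            for (Path proc : procs) {
                try {
                    for (String line : Files.readAllLines(proc.resolve("status"))) {
                        if (line.startsWith("VmRSS:")) {
                            // Line looks like "VmRSS:     1234 kB"
                            totalKb += Long.parseLong(line.replaceAll("\\D+", ""));
                        }
                    }
                } catch (IOException e) {
                    // process exited while we were reading; skip it
                }
            }
        }
        System.out.println("Sum of RSS: " + totalKb / 1024 + " MB");
    }
}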
45,087,584
Java Not able to create a file in windows shared folder from Linux
I am trying to create a file in a shared folder using the code below. I am able to do it when I run the code on Windows, but when I run the same code on Linux it does not work. On Linux it creates a file named "\192.168.1.102\share\1.pdf" in the folder where I run the Java code, instead of creating the file 1.pdf in the shared folder "\192.168.1.102\share\". It seems that while running on Linux the path is not identified as a shared location; instead it is read as a local path. Are there any other ways to create a file in the shared folder? Could anyone please help me in resolving this? public class Test { public static void main(String args[]) { String s1 ="\\\\192.168.1.102\\share"; try{ FileOutputStream fos = new FileOutputStream(s1+"\\1.pdf"); fos.write(("Testing Success").getBytes()); fos.close(); } catch(Exception e){ e.printStackTrace(); System.out.println(e.toString()); } File file = new File(s1); System.out.println(file.exists()); } }
Java Not able to create a file in windows shared folder from Linux I am trying to create a file in a shared folder using the code below. I am able to do it when I run the code on Windows, but when I run the same code on Linux it does not work. On Linux it creates a file named "\192.168.1.102\share\1.pdf" in the folder where I run the Java code, instead of creating the file 1.pdf in the shared folder "\192.168.1.102\share\". It seems that while running on Linux the path is not identified as a shared location; instead it is read as a local path. Are there any other ways to create a file in the shared folder? Could anyone please help me in resolving this? public class Test { public static void main(String args[]) { String s1 ="\\\\192.168.1.102\\share"; try{ FileOutputStream fos = new FileOutputStream(s1+"\\1.pdf"); fos.write(("Testing Success").getBytes()); fos.close(); } catch(Exception e){ e.printStackTrace(); System.out.println(e.toString()); } File file = new File(s1); System.out.println(file.exists()); } }
java, linux, redhat
3
1,635
4
https://stackoverflow.com/questions/45087584/java-not-able-to-create-a-file-in-windows-shared-folder-from-linux
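UNC paths like \\host\share are resolved by the Windows file API, so java.io.File cannot open them on Linux. One portable approach - an assumption here, since mounting the share with mount -t cifs and writing to the mount point works just as well - is to speak SMB directly via the jcifs library. Host, share and credentials below are placeholders.

import jcifs.smb.NtlmPasswordAuthentication;
import jcifs.smb.SmbFile;
import jcifs.smb.SmbFileOutputStream;

public class SmbWrite {
    public static void main(String[] args) throws Exception {
        // jcifs talks the SMB protocol itself, so the same code runs on
        // Windows and Linux without any OS-level share mounting.
        NtlmPasswordAuthentication auth =
                new NtlmPasswordAuthentication("WORKGROUP", "user", "password");
        SmbFile file = new SmbFile("smb://192.168.1.102/share/1.pdf", auth);
        try (SmbFileOutputStream out = new SmbFileOutputStream(file)) {
            out.write("Testing Success".getBytes());
        }
        System.out.println(file.exists());
    }
}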
45,051,952
Setting timeout for custom service (chkconfig)
I wrote a custom script that I want to be executed each time my server starts and stops. I added the script with chkconfig: chkconfig --add myservice This is working fine, but from time to time I run into a timeout. I checked the systemctl settings and I can see the timeout is set to 5 min: systemctl show myservice.service | grep Timeout TimeoutStartUSec=5min TimeoutStopUSec=5min JobTimeoutUSec=0 JobTimeoutAction=none So I assumed I needed to create a file with service settings under /etc/systemd/system, but each time I do that, my service does not execute at all and disappears from chkconfig --list. In addition, when I run systemctl list-unit-files, I can see my service's state is set to masked. But when I delete the file from /etc/systemd/system, everything is back to normal. Could anyone explain how I can customize the startup for my service? Best regards Fr.
Setting timeout for custom service (chkconfig) I wrote a custom script that I want to be executed each time my server starts and stops. I added the script with chkconfig: chkconfig --add myservice This is working fine, but from time to time I run into a timeout. I checked the systemctl settings and I can see the timeout is set to 5 min: systemctl show myservice.service | grep Timeout TimeoutStartUSec=5min TimeoutStopUSec=5min JobTimeoutUSec=0 JobTimeoutAction=none So I assumed I needed to create a file with service settings under /etc/systemd/system, but each time I do that, my service does not execute at all and disappears from chkconfig --list. In addition, when I run systemctl list-unit-files, I can see my service's state is set to masked. But when I delete the file from /etc/systemd/system, everything is back to normal. Could anyone explain how I can customize the startup for my service? Best regards Fr.
linux, redhat, systemctl
3
1,268
0
https://stackoverflow.com/questions/45051952/setting-timeout-for-custom-service-chkconfig
44,684,848
error: zlib library and headers are required R on HPC
System: Red Hat Enterprise Linux Server release 6.5 (Santiago) I've installed zlib 1.2.11 in the home folder of a Red Hat HPC as part of the process of installing R base 3.4.0. I get this error even after a successful install of zlib: checking for inflateInit2_ in -lz... no checking whether zlib support suffices... configure: error: zlib library and headers are required I've checked the R documentation and the configure file for the issue of R requiring versions newer than 1.2.6 but not lexicographically recognizing 1.2.11 as >1.2.6; that particular bug was patched in R 3.4. I've reviewed this question posted previously and the response is not relevant because R 3.4 resolved that issue. Any suggestion and/or input would be much appreciated.
error: zlib library and headers are required R on HPC System: Red Hat Enterprise Linux Server release 6.5 (Santiago) I've installed zlib 1.2.11 in the home folder of a Red Hat HPC as part of the process of installing R base 3.4.0. I get this error even after a successful install of zlib: checking for inflateInit2_ in -lz... no checking whether zlib support suffices... configure: error: zlib library and headers are required I've checked the R documentation and the configure file for the issue of R requiring versions newer than 1.2.6 but not lexicographically recognizing 1.2.11 as >1.2.6; that particular bug was patched in R 3.4. I've reviewed this question posted previously and the response is not relevant because R 3.4 resolved that issue. Any suggestion and/or input would be much appreciated.
r, compilation, redhat, zlib, configure
3
2,762
0
https://stackoverflow.com/questions/44684848/error-zlib-library-and-headers-are-required-r-on-hpc
43,719,304
Yum fails with - There are no enabled repos
I was trying to use yum on my Red Hat 7 system to install the required packages. The error message is 'There are no enabled repos. Run "yum repolist all" to see the repos you have. You can enable repos with yum-config-manager --enable ' How can I fix this?
Yum fails with - There are no enabled repos I was trying to use yum on my Red Hat 7 system to install the required packages. The error message is 'There are no enabled repos. Run "yum repolist all" to see the repos you have. You can enable repos with yum-config-manager --enable ' How can I fix this?
linux, redhat
3
11,109
0
https://stackoverflow.com/questions/43719304/yum-fails-with-there-are-no-enabled-repos
43,595,737
Terraform vsphere network interface bond0 configuration
I have a Terraform config that I want to use to create VMs from a vSphere template (Red Hat 7), but I need to be able to specify the network interface to apply the customizations (static IP, Subnet, Gateway, DNS). provider "vsphere" { user = "${var.vsphere_user}" password = "${var.vsphere_password}" vsphere_server = "${var.vsphere_server}" allow_unverified_ssl = true } resource "vsphere_virtual_machine" "vm1" { name = "vm1" folder = "${var.vsphere_folder}" vcpu = 2 memory = 32768 datacenter = "dc1" cluster = "cluster1" skip_customization = false disk { template = "${var.vsphere_folder}/${var.template_redhat}" datastore = "${var.template_datastore}" type = "thin" } network_interface { label = "${var.vlan}" ipv4_address = "10.1.1.1" ipv4_prefix_length = 16 ipv4_gateway = "10.1.1.254" } dns_servers = ["10.1.1.254"] time_zone = "004" } I want to apply the static IP to bond0 instead of eth0; is this possible to do in Terraform? Thanks.
Terraform vsphere network interface bond0 configuration I have a Terraform config that I want to use to create VMs from a vSphere template (Red Hat 7), but I need to be able to specify the network interface to apply the customizations (static IP, Subnet, Gateway, DNS). provider "vsphere" { user = "${var.vsphere_user}" password = "${var.vsphere_password}" vsphere_server = "${var.vsphere_server}" allow_unverified_ssl = true } resource "vsphere_virtual_machine" "vm1" { name = "vm1" folder = "${var.vsphere_folder}" vcpu = 2 memory = 32768 datacenter = "dc1" cluster = "cluster1" skip_customization = false disk { template = "${var.vsphere_folder}/${var.template_redhat}" datastore = "${var.template_datastore}" type = "thin" } network_interface { label = "${var.vlan}" ipv4_address = "10.1.1.1" ipv4_prefix_length = 16 ipv4_gateway = "10.1.1.254" } dns_servers = ["10.1.1.254"] time_zone = "004" } I want to apply the static IP to bond0 instead of eth0; is this possible to do in Terraform? Thanks.
redhat, vsphere, terraform
3
285
0
https://stackoverflow.com/questions/43595737/terraform-vsphere-network-interface-bond0-configuration
43,332,684
OpenShift/Origin API call to initiate a deployment
Hi :) I'm trying to mimic the oc CLI API call to the master node that initiates a deployment, so that eventually I can have a chatbot that can initiate a deployment without needing to install the oc CLI. What is the API call to initiate a deployment? When I look at what the oc CLI is doing with oc deploy <app> --latest --loglevel=9 , I see it fetching information only: curl -k -v -XGET -H "Authorization: Bearer <token>" -H "User-Agent: oc/v1.3.0 (darwin/amd64) openshift/d451518" -H "Accept: application/json, */*" [URL] curl -k -v -XGET -H "User-Agent: oc/v1.3.0+52492b4 (darwin/amd64) kubernetes/52492b4" -H "Authorization: Bearer <token>" -H "Accept: application/json, */*" [URL] Where does it make the call to initiate the deployment? And how do I mimic it? I wasn't able to find anything in these docs: [URL] [URL] Thank you for your time!
OpenShift/Origin API call to initiate a deployment Hi :) I'm trying to mimic the oc CLI API call to the master node that initiates a deployment, so that eventually I can have a chatbot that can initiate a deployment without needing to install the oc CLI. What is the API call to initiate a deployment? When I look at what the oc CLI is doing with oc deploy <app> --latest --loglevel=9 , I see it fetching information only: curl -k -v -XGET -H "Authorization: Bearer <token>" -H "User-Agent: oc/v1.3.0 (darwin/amd64) openshift/d451518" -H "Accept: application/json, */*" [URL] curl -k -v -XGET -H "User-Agent: oc/v1.3.0+52492b4 (darwin/amd64) kubernetes/52492b4" -H "Authorization: Bearer <token>" -H "Accept: application/json, */*" [URL] Where does it make the call to initiate the deployment? And how do I mimic it? I wasn't able to find anything in these docs: [URL] [URL] Thank you for your time!
openshift, redhat, openshift-origin, redhat-containers
3
226
1
https://stackoverflow.com/questions/43332684/openshift-origin-api-call-to-initiate-a-deployment
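For the OpenShift question above: to the best of my knowledge, oc deploy --latest on OpenShift 3.x POSTs a DeploymentRequest to the deploymentconfig's instantiate subresource. The endpoint and payload below reflect that understanding and should be verified against your cluster's API docs; a cluster with self-signed certificates will additionally need TLS trust configured on the client. Host, namespace, deployment config name and token are placeholders.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class InstantiateDeployment {
    public static void main(String[] args) throws Exception {
        // Assumed OpenShift 3.x endpoint: POST .../deploymentconfigs/<name>/instantiate
        String api = "https://openshift-master:8443/oapi/v1/namespaces/myproject"
                   + "/deploymentconfigs/myapp/instantiate";
        String body = "{\"kind\":\"DeploymentRequest\",\"apiVersion\":\"v1\","
                    + "\"name\":\"myapp\",\"latest\":true,\"force\":true}";
        HttpURLConnection conn = (HttpURLConnection) new URL(api).openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Authorization", "Bearer <token>");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes("UTF-8"));
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }
}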
37,579,834
Max thread ID on Linux?
Is there a maximum C++ thread ID on Linux? I'm using std::thread and I call get_id(); is there an upper bound on what the number could be? I thought I had found 2^16, but that seemed to be the maximum number of Linux processes. This would be a Red Hat distro.
Max thread ID on Linux? Is there a maximum C++ thread ID on Linux? I'm using std::thread and I call get_id(); is there an upper bound on what the number could be? I thought I had found 2^16, but that seemed to be the maximum number of Linux processes. This would be a Red Hat distro.
linux, unix, c++11, redhat, rhel
3
884
0
https://stackoverflow.com/questions/37579834/max-thread-id-on-linux
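Two different numbers are easy to conflate here. The kernel task ID (TID) shares the PID number space, so it is bounded by /proc/sys/kernel/pid_max (default 32768, raisable to roughly 4 million on 64-bit); std::thread::id, on the other hand, is an opaque handle - with glibc it is typically a pthread_t, effectively a pointer value - so it need not be a small integer at all. A sketch for reading the kernel-side limits:

import java.nio.file.Files;
import java.nio.file.Paths;

public class ThreadLimits {
    public static void main(String[] args) throws Exception {
        // pid_max bounds TID values; threads-max caps how many tasks can
        // exist at once system-wide.
        System.out.println("pid_max:     " + read("/proc/sys/kernel/pid_max"));
        System.out.println("threads-max: " + read("/proc/sys/kernel/threads-max"));
    }

    private static String read(String path) throws Exception {
        return new String(Files.readAllBytes(Paths.get(path))).trim();
    }
}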
37,296,228
WLP: Use both private truststore and server provided truststore
Platform RedHat Enterprise Linux 7 WebSphere Liberty Profile 8.5.5.8 Issue I have several Liberty instances / applications connected to a Liberty Collective Controller, and therefore have ssl and keystores specific to each instance. At the same time, many of the applications connect externally / outbound to different https:// endpoints and need to store root certificates from Comodo, Buypass, Thawte, etc. to avoid The signer might need to be added to local trust store and could not build a valid CertPath , etc. Goal Use server (Java / RedHat) provided CA root certificate stores unchanged, and use a per-instance truststore where private certificates are imported - in combination. Question Is it possible to combine a "personal" truststore with a server provided truststore (or two), i.e. from the Java installed /opt/Liberty/java/java_1.8_64/jre/lib/security/cacerts file or the RPM package ca-certificates And if so - how? My current ssl configuration looks like this: <!-- Connection to the collective controller --> <collectiveMember controllerHost="<server>" controllerPort="<port>" /> <!-- clientAuthenticationSupported set to enable bidirectional trust --> <ssl id="defaultSSLConfig" keyStoreRef="defaultKeyStore" trustStoreRef="defaultTrustStore" clientAuthenticationSupported="true" /> <!-- inbound (HTTPS) keystore --> <keyStore id="defaultKeyStore" password="******" location="${server.config.dir}/resources/security/key.jks" /> <!-- inbound (HTTPS) truststore --> <keyStore id="defaultTrustStore" password="*****" location="${server.config.dir}/resources/security/trust.jks" /> <!-- server identity keystore --> <keyStore id="serverIdentity" password="******" location="${server.config.dir}/resources/collective/serverIdentity.jks" /> <!-- collective truststore --> <keyStore id="collectiveTrust" password="*******" location="${server.config.dir}/resources/collective/collectiveTrust.jks" />
WLP: Use both private truststore and server provided truststore Platform RedHat Enterprise Linux 7 WebSphere Liberty Profile 8.5.5.8 Issue I have several Liberty instances / applications connected to a Liberty Collective Controller, and therefore have ssl and keystores specific to each instance. At the same time, many of the applications connect externally / outbound to different https:// endpoints and need to store root certificates from Comodo, Buypass, Thawte, etc. to avoid The signer might need to be added to local trust store and could not build a valid CertPath , etc. Goal Use server (Java / RedHat) provided CA root certificate stores unchanged, and use a per-instance truststore where private certificates are imported - in combination. Question Is it possible to combine a "personal" truststore with a server provided truststore (or two), i.e. from the Java installed /opt/Liberty/java/java_1.8_64/jre/lib/security/cacerts file or the RPM package ca-certificates And if so - how? My current ssl configuration looks like this: <!-- Connection to the collective controller --> <collectiveMember controllerHost="<server>" controllerPort="<port>" /> <!-- clientAuthenticationSupported set to enable bidirectional trust --> <ssl id="defaultSSLConfig" keyStoreRef="defaultKeyStore" trustStoreRef="defaultTrustStore" clientAuthenticationSupported="true" /> <!-- inbound (HTTPS) keystore --> <keyStore id="defaultKeyStore" password="******" location="${server.config.dir}/resources/security/key.jks" /> <!-- inbound (HTTPS) truststore --> <keyStore id="defaultTrustStore" password="*****" location="${server.config.dir}/resources/security/trust.jks" /> <!-- server identity keystore --> <keyStore id="serverIdentity" password="******" location="${server.config.dir}/resources/collective/serverIdentity.jks" /> <!-- collective truststore --> <keyStore id="collectiveTrust" password="*******" location="${server.config.dir}/resources/collective/collectiveTrust.jks" />
java, redhat, websphere-liberty, rhel7
3
687
0
https://stackoverflow.com/questions/37296228/wlp-use-both-private-truststore-and-server-provided-truststore
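If Liberty's keyStore element only takes one file per reference (treating that as an assumption - check whether your Liberty version supports multiple trust references per ssl element), a pragmatic workaround is to merge the JRE cacerts and the private store into one combined JKS during provisioning and point trustStoreRef at the result. Paths and passwords below are placeholders; cacerts usually uses the default password changeit.

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.security.KeyStore;
import java.util.Enumeration;

public class MergeTrustStores {
    public static void main(String[] args) throws Exception {
        // Builds one combined truststore that server.xml can reference,
        // leaving both source stores untouched.
        KeyStore merged = KeyStore.getInstance("JKS");
        merged.load(null, null);
        copy(merged, "/opt/Liberty/java/java_1.8_64/jre/lib/security/cacerts", "changeit");
        copy(merged, "/path/to/resources/security/trust.jks", "password");
        try (FileOutputStream out = new FileOutputStream("combinedTrust.jks")) {
            merged.store(out, "password".toCharArray());
        }
    }

    private static void copy(KeyStore target, String path, String pw) throws Exception {
        KeyStore src = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream(path)) {
            src.load(in, pw.toCharArray());
        }
        Enumeration<String> aliases = src.aliases();
        while (aliases.hasMoreElements()) {
            String alias = aliases.nextElement();
            if (src.isCertificateEntry(alias)) {
                target.setCertificateEntry(alias, src.getCertificate(alias));
            }
        }
    }
}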
34,850,312
Use of PTPd on RedHat/CentOS
I need to create a reliable and accurate synchronization between two CentOS 6 machines connected through a direct Ethernet connection. I've seen that on Linux several implementations of the IEEE 1588 Precision Time Protocol (PTP) exist: PTPd : Apparently, this is the original implementation Source code available on GitHub (apparently almost unmaintained) PTPd2 : A new version meant to supersede the previous implementation Apparently unmaintained For CentOS 6, available only in the EPEL repositories PTPv2d : A further implementation Unmaintained as well linuxptp : A specific implementation for Linux Maintained Available in the CentOS repositories Suggested by the RedHat documentation for both RedHat 6 and RedHat 7 My questions follow: Why does the RedHat documentation suggest the use of linuxptp for RedHat 6 (based on Linux kernel 2.6) despite the linuxptp documentation saying that a Linux kernel version 3.0 or newer is needed? What are the differences between PTPd2 and linuxptp in terms of reliability and timing accuracy? Which one should I prefer on CentOS 6 and on CentOS 7, respectively? Why do both PTPd2 and linuxptp fail to synchronize immediately, often requiring me to start/stop the service several times or manually change the system time through date to make the machine synchronize?
Use of PTPd on RedHat/CentOS I need to create a reliable and accurate synchronization between two CentOS 6 machines connected through a direct Ethernet connection. I've seen that on Linux several implementations of the IEEE 1588 Precision Time Protocol (PTP) exist: PTPd : Apparently, this is the original implementation Source code available on GitHub (apparently almost unmaintained) PTPd2 : A new version meant to supersede the previous implementation Apparently unmaintained For CentOS 6, available only in the EPEL repositories PTPv2d : A further implementation Unmaintained as well linuxptp : A specific implementation for Linux Maintained Available in the CentOS repositories Suggested by the RedHat documentation for both RedHat 6 and RedHat 7 My questions follow: Why does the RedHat documentation suggest the use of linuxptp for RedHat 6 (based on Linux kernel 2.6) despite the linuxptp documentation saying that a Linux kernel version 3.0 or newer is needed? What are the differences between PTPd2 and linuxptp in terms of reliability and timing accuracy? Which one should I prefer on CentOS 6 and on CentOS 7, respectively? Why do both PTPd2 and linuxptp fail to synchronize immediately, often requiring me to start/stop the service several times or manually change the system time through date to make the machine synchronize?
linux, timestamp, redhat, centos6, ptpd
3
1,905
1
https://stackoverflow.com/questions/34850312/use-of-ptpd-on-redhat-centos
33,659,965
Character Encoding in Visual Studio Output Window
I'm trying to properly format the text inside the Visual Studio 2010 output window. I've researched a few options and none of them worked, either because I couldn't figure out how to implement a solution or because the attempted solution didn't work. The problem Text encoding in the Visual Studio Output Window seems to 'break' versus output in a DOS prompt or Cygwin prompt. How can I correct this for Visual Studio 2010? Example of problem in Output window: 1> ./singleCDL.h: In function ΓÇÿstatus checks::IsFileEDL(char*)ΓÇÖ: 1> ./singleCDL.h:360: warning: deprecated conversion from string constant to ΓÇÿchar*ΓÇÖ 1> ./singleCDL.h: In function ΓÇÿvoid checks::LogSparksUsage()ΓÇÖ: 1> ./singleCDL.h:400: warning: deprecated conversion from string constant to ΓÇÿchar*ΓÇÖ 1> ./CBSD_EDL_to_CDL.C: At global scope: 1> ./CBSD_EDL_to_CDL.C:135: warning: deprecated conversion from string constant to ΓÇÿchar*ΓÇÖ 1> ./CBSD_EDL_to_CDL.C:137: warning: deprecated conversion from string constant to ΓÇÿchar*ΓÇÖ The exact same output in DOS & Cygwin: ./singleCDL.h: In function ‘status checks::IsFileEDL(char*)’: ./singleCDL.h:360: warning: deprecated conversion from string constant to ‘char*’ ./singleCDL.h: In function ‘void checks::LogSparksUsage()’: ./singleCDL.h:400: warning: deprecated conversion from string constant to ‘char*’ ./CBSD_EDL_to_CDL.C: At global scope: ./CBSD_EDL_to_CDL.C:134: warning: deprecated conversion from string constant to ‘char*’ Attempted Solutions There are two methods people have discussed to correct this (in as much as I could find), and one I tried through additional research: Change the font in the window - but no font selection changed the incorrect display of the single quote character Change Encoding.GetEncoding() - but I had no idea how to enact this change Adding chcp to my build.bat file as its first line - but no change in the Output window display Possible - and as yet Untested - Solutions Upgrade to VStudio 2013 or 2015. Problems with this? Possible time to re-implement the build solution (if even necessary). Also... I'd rather not have to change software. Pipe remote host compilation output to an additional build step, on the local or remote host, using Python, to attempt more controlled character encoding translations. Problems? Lengthy solution that adds more machinery to the build process. Background I'm writing code in my preferred IDE, Visual Studio (current version 2010). The code is being compiled on a remote Linux host: RedHat Enterprise Linux Workstation Release 6.2 (Santiago) . I've set up a custom build tool for my project such that it copies the files to the remote host and then compiles on that computer. I'm using command line ssh to perform the file copies and remote compilation. VS2010 performs these actions through a simple build.bat file; this is the custom build tool. In order to get ssh to run as command line, I've added cygwin/bin to the 'Executable Directories' environment variable list in the project property page, in VS2010.
Character Encoding in Visual Studio Output Window I'm trying to properly format the text inside the Visual Studio 2010 output window. I've researched a few options and none of them worked, either because I couldn't figure out how to implement a solution or because the attempted solution didn't work. The problem Text encoding in the Visual Studio Output Window seems to 'break' versus output in a DOS prompt or Cygwin prompt. How can I correct this for Visual Studio 2010? Example of problem in Output window: 1> ./singleCDL.h: In function ΓÇÿstatus checks::IsFileEDL(char*)ΓÇÖ: 1> ./singleCDL.h:360: warning: deprecated conversion from string constant to ΓÇÿchar*ΓÇÖ 1> ./singleCDL.h: In function ΓÇÿvoid checks::LogSparksUsage()ΓÇÖ: 1> ./singleCDL.h:400: warning: deprecated conversion from string constant to ΓÇÿchar*ΓÇÖ 1> ./CBSD_EDL_to_CDL.C: At global scope: 1> ./CBSD_EDL_to_CDL.C:135: warning: deprecated conversion from string constant to ΓÇÿchar*ΓÇÖ 1> ./CBSD_EDL_to_CDL.C:137: warning: deprecated conversion from string constant to ΓÇÿchar*ΓÇÖ The exact same output in DOS & Cygwin: ./singleCDL.h: In function ‘status checks::IsFileEDL(char*)’: ./singleCDL.h:360: warning: deprecated conversion from string constant to ‘char*’ ./singleCDL.h: In function ‘void checks::LogSparksUsage()’: ./singleCDL.h:400: warning: deprecated conversion from string constant to ‘char*’ ./CBSD_EDL_to_CDL.C: At global scope: ./CBSD_EDL_to_CDL.C:134: warning: deprecated conversion from string constant to ‘char*’ Attempted Solutions There are two methods people have discussed to correct this (in as much as I could find), and one I tried through additional research: Change the font in the window - but no font selection changed the incorrect display of the single quote character Change Encoding.GetEncoding() - but I had no idea how to enact this change Adding chcp to my build.bat file as its first line - but no change in the Output window display Possible - and as yet Untested - Solutions Upgrade to VStudio 2013 or 2015. Problems with this? Possible time to re-implement the build solution (if even necessary). Also... I'd rather not have to change software. Pipe remote host compilation output to an additional build step, on the local or remote host, using Python, to attempt more controlled character encoding translations. Problems? Lengthy solution that adds more machinery to the build process. Background I'm writing code in my preferred IDE, Visual Studio (current version 2010). The code is being compiled on a remote Linux host: RedHat Enterprise Linux Workstation Release 6.2 (Santiago) . I've set up a custom build tool for my project such that it copies the files to the remote host and then compiles on that computer. I'm using command line ssh to perform the file copies and remote compilation. VS2010 performs these actions through a simple build.bat file; this is the custom build tool. In order to get ssh to run as command line, I've added cygwin/bin to the 'Executable Directories' environment variable list in the project property page, in VS2010.
visual-studio-2010, ssh, character-encoding, cygwin, redhat
3
2,195
0
https://stackoverflow.com/questions/33659965/character-encoding-in-visual-studio-output-window
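The mangled output above has a recognizable signature: ΓÇÿ and ΓÇÖ are the UTF-8 byte sequences for the curly quotes U+2018/U+2019 (E2 80 98 / E2 80 99) displayed through the DOS OEM code page 437 - which suggests the Output window is showing g++'s UTF-8 output decoded with the console code page. A round-trip demo of that diagnosis:

import java.nio.charset.Charset;

public class MojibakeDemo {
    public static void main(String[] args) {
        // Re-encode the garbled text with code page 437, then decode the
        // resulting bytes as UTF-8 to recover the original characters.
        String garbled = "ΓÇÿchar*ΓÇÖ";
        String repaired = new String(garbled.getBytes(Charset.forName("Cp437")),
                Charset.forName("UTF-8"));
        System.out.println(repaired); // prints 'char*' with curly quotes
    }
}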
25,904,656
How to set Neo4j auto start when booting?
I want to start my Neo4j service when booting; my system environment is Red Hat. I added the line below to /etc/rc.d/rc.local , but it is not working: /opt/neo4j/bin/neo4j start But it works for MongoDB... /opt/mongodb/bin/mongod
How to set Neo4j auto start when booting? I want to start my Neo4j service when booting; my system environment is Red Hat. I added the line below to /etc/rc.d/rc.local , but it is not working: /opt/neo4j/bin/neo4j start But it works for MongoDB... /opt/mongodb/bin/mongod
neo4j, operating-system, redhat
3
2,974
1
https://stackoverflow.com/questions/25904656/how-to-set-neo4j-auto-start-when-booting
24,444,665
hdfs group permission doesn't work
I'm using Hadoop 2.2.0 and found that hdfs group permission configs don't work like the Linux filesystem: $hadoop fs -ls /user drwxrwx--- - data data 0 2014-06-27 11:18 /user/data $whoami raw $groups raw data This directory belongs to the user data and the group data . Then when another user, raw , who is a member of the group data , tries to list the directory /user/data on hdfs, the following exception is raised: ls: Permission denied: user=langxian.chen, access=READ_EXECUTE, inode="/user/data":data:data:drwxrwx--- Any idea why?
hdfs group permission doesn't work I'm using Hadoop 2.2.0 and found that hdfs group permission configs don't work like the Linux filesystem: $hadoop fs -ls /user drwxrwx--- - data data 0 2014-06-27 11:18 /user/data $whoami raw $groups raw data This directory belongs to the user data and the group data . Then when another user, raw , who is a member of the group data , tries to list the directory /user/data on hdfs, the following exception is raised: ls: Permission denied: user=langxian.chen, access=READ_EXECUTE, inode="/user/data":data:data:drwxrwx--- Any idea why?
hadoop, permissions, hdfs, redhat, cloudera-cdh
3
488
0
https://stackoverflow.com/questions/24444665/hdfs-group-permission-doesnt-work
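Worth knowing for the HDFS question above: group membership is resolved on the NameNode host via hadoop.security.group.mapping (by default, a shell-based lookup of the OS groups there), not on the machine where whoami/groups were run. So a likely culprit is that the NameNode's OS does not list the requesting user in group data - note the error even shows a different username (langxian.chen) than the raw seen locally. A sketch to print what the current Hadoop configuration resolves:

import org.apache.hadoop.security.UserGroupInformation;

public class WhoAmI {
    public static void main(String[] args) throws Exception {
        // Run this on the client and compare with `hdfs groups <user>` on the
        // cluster; a mismatch points at the NameNode-side group mapping.
        UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
        System.out.println("user:   " + ugi.getShortUserName());
        System.out.println("groups: " + String.join(", ", ugi.getGroupNames()));
    }
}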
24,420,565
Is Redhat moving away from HornetQ to ActiveMQ?
HornetQ, the open source messaging implementation created by Red Hat, is promoted as part of Red Hat's JBoss Application Server. HornetQ was a strong contender against Apache ActiveMQ when it was initially launched. I was going through the JBoss Fuse documentation and realized that ActiveMQ is used as the messaging technology. (I know that JBoss Fuse is built on Apache Camel.) My question is: "Is Red Hat moving away from HornetQ, or will it use both implementations in its products?" I have not seen any official announcement about replacing HornetQ with ActiveMQ, but wanted to find out their direction. I have been using HornetQ as it is native to JBoss; with the arrival of many new components in my project, I am looking to adopt JBoss Fuse as an ESB, and observed the major difference in messaging systems. As we use JMS, I hope there will not be major changes required for my Queue/Factory/Messaging related config. For any new applications developed using JBoss, is it recommended to use ActiveMQ rather than HornetQ?
Is Redhat moving away from HornetQ to ActiveMQ? HornetQ, the open source messaging implementation created by Red Hat, is promoted as part of Red Hat's JBoss Application Server. HornetQ was a strong contender against Apache ActiveMQ when it was initially launched. I was going through the JBoss Fuse documentation and realized that ActiveMQ is used as the messaging technology. (I know that JBoss Fuse is built on Apache Camel.) My question is: "Is Red Hat moving away from HornetQ, or will it use both implementations in its products?" I have not seen any official announcement about replacing HornetQ with ActiveMQ, but wanted to find out their direction. I have been using HornetQ as it is native to JBoss; with the arrival of many new components in my project, I am looking to adopt JBoss Fuse as an ESB, and observed the major difference in messaging systems. As we use JMS, I hope there will not be major changes required for my Queue/Factory/Messaging related config. For any new applications developed using JBoss, is it recommended to use ActiveMQ rather than HornetQ?
jboss, activemq-classic, redhat, hornetq
3
611
0
https://stackoverflow.com/questions/24420565/is-redhat-moving-away-from-hornetq-to-activemq
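On the porting-cost worry: as long as the application stays on the plain JMS API, only the ConnectionFactory wiring is broker-specific, which is why a HornetQ-to-ActiveMQ move is usually a configuration change rather than a code change. A minimal send sketch against ActiveMQ; the broker URL and queue name are placeholders.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;

public class JmsSend {
    public static void main(String[] args) throws Exception {
        // Only this line names the broker implementation; everything below
        // is standard JMS and works unchanged against HornetQ's JMS API.
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection conn = factory.createConnection();
        try {
            conn.start();
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("example.queue");
            MessageProducer producer = session.createProducer(queue);
            producer.send(session.createTextMessage("hello"));
        } finally {
            conn.close();
        }
    }
}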
24,018,697
Bug? Cannot set persistent booleans without managed policy
I have an installation of Magento, and it couldn't send any emails. Upon investigation, httpd_can_sendmail was turned off. This can be shown by getsebool -a | grep mail . First I tried setsebool -P httpd_can_sendmail on , which gave me the error Cannot set persistent booleans without managed policy . Then I read this article , which says this is a bug and that it should really complain that you need root privileges. So sudo setsebool -P httpd_can_sendmail on turned it on. The bug report is 4 years old, and this site is on Red Hat Enterprise Linux Server release 6.5 (Santiago) hosted on AWS. Is this error message simply mis-worded? Should I have run that command as root ?
Bug? Cannot set persistent booleans without managed policy I have an installation of Magento, and it couldn't send any emails. Upon investigation, httpd_can_sendmail was turned off. This can be shown by getsebool -a | grep mail . First I tried setsebool -P httpd_can_sendmail on , which gave me the error Cannot set persistent booleans without managed policy . Then I read this article , which says this is a bug and that it should really complain that you need root privileges. So sudo setsebool -P httpd_can_sendmail on turned it on. The bug report is 4 years old, and this site is on Red Hat Enterprise Linux Server release 6.5 (Santiago) hosted on AWS. Is this error message simply mis-worded? Should I have run that command as root ?
apache, email, redhat
3
12,441
2
https://stackoverflow.com/questions/24018697/bug-cannot-set-persistent-booleans-without-managed-policy
20,685,975
How to specify dependency location in rpm?
While installing Mono using RPM, GLIBC_2.16 is listed as a dependency. Since I have an older version of glibc and didn't want to break my base system, I installed the newer glibc from source in my home folder. I now want the RPM to refer to this newer glibc lib directory in my home folder while installing Mono. What is the RPM option for specifying dependency locations for a package? I am currently using the following RPM command: sudo rpm -ivh mono-core-3.2.3-0.x86_64.rpm I get the following error messages: libc.so.6(GLIBC_2.14)(64bit) is needed by mono-core-3.2.3-0.x86_64 libc.so.6(GLIBC_2.15)(64bit) is needed by mono-core-3.2.3-0.x86_64 libc.so.6(GLIBC_2.16)(64bit) is needed by mono-core-3.2.3-0.x86_64 My newer glibc path is: ~/Desktop/glibc/glibc1/lib What option should I include in rpm to reference this path while installing Mono? Thanks
How to specify dependency location in rpm? While installing Mono using RPM, GLIBC_2.16 is listed as a dependency. Since I have an older version of glibc and didn't want to break my base system, I installed the newer glibc from source in my home folder. I now want the RPM to refer to this newer glibc lib directory in my home folder while installing Mono. What is the RPM option for specifying dependency locations for a package? I am currently using the following RPM command: sudo rpm -ivh mono-core-3.2.3-0.x86_64.rpm I get the following error messages: libc.so.6(GLIBC_2.14)(64bit) is needed by mono-core-3.2.3-0.x86_64 libc.so.6(GLIBC_2.15)(64bit) is needed by mono-core-3.2.3-0.x86_64 libc.so.6(GLIBC_2.16)(64bit) is needed by mono-core-3.2.3-0.x86_64 My newer glibc path is: ~/Desktop/glibc/glibc1/lib What option should I include in rpm to reference this path while installing Mono? Thanks
linux, mono, redhat, glibc, rpm
3
2,084
1
https://stackoverflow.com/questions/20685975/how-to-specify-dependency-location-in-rpm
12,993,772
Port a debian package to YUM for CentOS
I have a project that runs on Debian and uses many packages provided from the Debian repositories. Because of demand, I've looked into porting the project to CentOS, but found that many of the packages I require are completely missing - at least 10 dependencies would have to be compiled manually at install time on the user's machine. My question is: what is the best way to create an installer for the user's machine? Should I use the automake tools (with the standard ./configure, make, make install) to compile the required libraries, or is this a non-standard approach? Note that my app doesn't actually need to be compiled since it is written in Python, so is it weird to do a "make" when you're not compiling your own app? Should the configure script just warn the user that package X is missing, and let them handle the rest? Should I roll my own dependency checker by running pkg-config manually a few times for each library required, and exit if something is missing? I'm quite new to this, so any tips to get me moving in the right direction are appreciated. Edit: I am familiar with RPM and yum for Red Hat based distros, but CentOS is missing many multimedia packages that I require. An example of one of my package dependencies is "liquidsoap", which is a programmable audio engine: [URL] This is available on Debian, but not on Red Hat/CentOS.
Port a debian package to YUM for CentOS I have a project that runs on Debian and uses many packages provided from the Debian repositories. Because of demand, I've looked into porting the project to CentOS, but found that many of the packages I require are completely missing - at least 10 dependencies would have to be compiled manually at install time on the user's machine. My question is: what is the best way to create an installer for the user's machine? Should I use the automake tools (with the standard ./configure, make, make install) to compile the required libraries, or is this a non-standard approach? Note that my app doesn't actually need to be compiled since it is written in Python, so is it weird to do a "make" when you're not compiling your own app? Should the configure script just warn the user that package X is missing, and let them handle the rest? Should I roll my own dependency checker by running pkg-config manually a few times for each library required, and exit if something is missing? I'm quite new to this, so any tips to get me moving in the right direction are appreciated. Edit: I am familiar with RPM and yum for Red Hat based distros, but CentOS is missing many multimedia packages that I require. An example of one of my package dependencies is "liquidsoap", which is a programmable audio engine: [URL] This is available on Debian, but not on Red Hat/CentOS.
linux, centos, debian, redhat, dependency-management
3
7,322
2
https://stackoverflow.com/questions/12993772/port-a-debian-package-to-yum-for-centos
12,188,275
SSH connection closed after first password attempt
So, I'm having a rather weird problem. I have a server that, when I try to SSH into it, immediately closes the connection if I type in the correct password on the first attempt. However, if I purposefully enter a wrong password on the first attempt, and then enter the correct password at the second or third prompt, it successfully logs me into the computer. Similarly, when I try to use public key authentication, I get an immediately closed connection. If, however, I enter a wrong password for my key file, followed by another wrong password once it reverts to password authentication, I can successfully log in as long as I provide the correct password at the second or third prompt. The machine is running Red Hat Enterprise Linux Server release 6.2 (Santiago), and is using LDAP for authentication. Any ideas on where to start debugging this one?
SSH connection closed after first password attempt So, I'm having a rather weird problem. I have a server that, when I try to SSH into it, immediately closes the connection if I type in the correct password on the first attempt. However, if I purposefully enter a wrong password on the first attempt, and then enter the correct password at the second or third prompt, it successfully logs me into the computer. Similarly, when I try to use public key authentication, I get an immediately closed connection. If, however, I enter a wrong password for my key file, followed by another wrong password once it reverts to password authentication, I can successfully log in as long as I provide the correct password at the second or third prompt. The machine is running Red Hat Enterprise Linux Server release 6.2 (Santiago), and is using LDAP for authentication. Any ideas on where to start debugging this one?
ssh, ldap, redhat
3
881
0
https://stackoverflow.com/questions/12188275/ssh-connection-closed-after-first-password-attempt
11,959,347
How to modify PATH for non-interactive SSH call in RHEL 5?
I am trying to modify the PATH variable of my SSH server such that a non-interactive shell command ssh myserver.com 'echo $PATH' returns the desired path. I tried modifying the ~/.bashrc and ~/.profile files, but they only modify PATH when I log in to the server interactively, i.e. ssh myserver.com . Can I change this behavior in RHEL 5?
How to modify PATH for non-interactive SSH call in RHEL 5? I am trying to modify the PATH variable of my SSH server such that a non-interactive shell command ssh myserver.com 'echo $PATH' returns the desired path. I tried modifying the ~/.bashrc and ~/.profile files, but they only modify PATH when I log in to the server interactively, i.e. ssh myserver.com . Can I change this behavior in RHEL 5?
unix, ssh, redhat, rhel5
3
692
1
https://stackoverflow.com/questions/11959347/how-to-modify-path-for-non-interactive-ssh-call-in-rhel-5
11,364,823
leveldbjni IO error LOCK: No such file or directory
We're trying to use leveldbjni on Red Hat machines. It worked flawlessly on Ubuntu, but on Red Hat it gives the following error: org.fusesource.leveldbjni.internal.NativeDB$DBException: IO error: <file location>/LOCK: No such file or directory We tried to build the leveldb C++ library on Red Hat (using the instructions at [URL] ) and still got the same issue.
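That error usually means the database directory could not be created or opened, so some quick checks outside Java can narrow it down. A sketch, with a placeholder path:

    # Does the directory exist, and can this user create the LOCK file in it?
    ls -ld /path/to/leveldb-dir
    touch /path/to/leveldb-dir/LOCK && rm /path/to/leveldb-dir/LOCK
    # Red Hat enables SELinux by default, which can surface as odd "No such file" errors:
    getenforce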
java, redhat, leveldb
3
2,331
0
https://stackoverflow.com/questions/11364823/leveldbjni-io-error-lock-no-such-file-or-directory
9,528,869
Java EE install through ssh on a Linux AMI
I want to install Java EE 6 on a RedHat machine. The machine is actually an AWS AMI. I have installed the JDK successfully, but when I try to install Java EE, the console tells me I have to set the DISPLAY environment variable. I have googled for a while and found that Java EE can only be installed with an X server running (hence the DISPLAY variable). I have no idea how to install this, as a Linux AMI doesn't have an X Window environment (correct me and enlighten me if I'm wrong). How can I get through this? Thanks. P.S.: I set the DISPLAY variable just to see what happens, with no luck ... at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) at org.openinstaller.core.EngineBootstrap.main(EngineBootstrap.java:208) SEVERE INTERNAL ERROR: Can't connect to X11 window server using '10.98.135.210:0.0' as the value of the DISPLAY variable.
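A common workaround on a headless box is a virtual X server. A sketch, assuming yum access and with the installer filename as a placeholder:

    # Install and start a virtual framebuffer X server
    yum install -y xorg-x11-server-Xvfb
    Xvfb :1 -screen 0 1024x768x16 &
    export DISPLAY=:1
    # Run the installer against the virtual display (filename is a placeholder)
    sh java_ee_sdk-6-unix.sh

Some versions of the SDK installer also offer a text or silent mode; check the installer's own help output before resorting to Xvfb.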
linux, jakarta-ee, installation, redhat, amazon-ami
3
501
0
https://stackoverflow.com/questions/9528869/java-ee-install-through-ssh-on-a-linux-ami
8,086,610
Porting Motif, From AIX to RHEL 6.1
This is my question here, but it seemed a better place than motifzone -- their last post was over a year ago. I am tasked with porting a ~150k line application from AIX 5.3L to RHEL 6.1. I am running Motif 2.1 on AIX, and OpenMotif 2.1.32 (same build?) on Red Hat. I have managed to get the makefile going, and am able to build/link just fine. When I try to run it, I get the errors: Warning: Cannot find callback list in XtAddCallback Error: DialogShell widget supports only one RectObj child. I realize that these immediately point to wrong parameters in the calls, but I am unable to figure out where things might be going wrong. Nothing has been changed in any Motif code during the port, so I can assume that this is a Red Hat or Motif version problem. Can anyone here help me out on what this might be?
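Before digging into the widget code, it may be worth confirming which Motif the binary actually resolves at runtime, since compiling against one Motif build and linking a different libXm can produce exactly this kind of callback-list confusion. A sketch, with the binary name as a placeholder:

    # Which libXm does the runtime linker pick up?
    ldd ./myapp | grep -i libXm
    # Which Motif packages (and -devel headers) are installed?
    rpm -qa | grep -i motif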
makefile, redhat, aix, porting, motif
3
511
0
https://stackoverflow.com/questions/8086610/porting-motif-from-aix-to-rhel-6-1
4,032,443
How to install rails 3.0 on Redhat 4.0 with mysql2 support?
When trying to install rails 3.0 on a redhat 4.0 server, the 'bundle install' fails during the installation of mysql2. Is it possible to solve this? 'bundle install' command returns the following output: ~/rails/trial# bundle install Fetching source index for [URL] Using rake (0.8.7) Using abstract (1.0.0) Using activesupport (3.0.0) Using builder (2.1.2) Using i18n (0.4.1) Using activemodel (3.0.0) Using erubis (2.6.6) Using rack (1.2.1) Using rack-mount (0.6.13) Using rack-test (0.5.6) Using tzinfo (0.3.23) Using actionpack (3.0.0) Using mime-types (1.16) Using polyglot (0.3.1) Using treetop (1.4.8) Using mail (2.2.6.1) Using actionmailer (3.0.0) Using arel (1.0.1) Using activerecord (3.0.0) Using activeresource (3.0.0) Using bundler (1.0.3) Installing mysql2 (0.2.4) with native extensions /usr/local/lib/ruby/1.9.1/rubygems/installer.rb:483:in rescue in block in build_extensions': ERROR: Failed to build gem native extension. (Gem::Installer::ExtensionBuildError) /usr/local/bin/ruby extconf.rb checking for rb_thread_blocking_region()... yes checking for mysql.h... yes checking for errmsg.h... yes checking for mysqld_error.h... yes creating Makefile make gcc -I. -I/usr/local/include/ruby-1.9.1/x86_64-linux -I/usr/local/include/ruby-1.9.1/ruby/backward -I/usr/local/include/ruby-1.9.1 -I. -DHAVE_RB_THREAD_BLOCKING_REGION -DHAVE_MYSQL_H -DHAVE_ERRMSG_H -DHAVE_MYSQLD_ERROR_H -I/usr/include/mysql -g -pipe -m64 -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -fno-strict-aliasing -fPIC -O3 -ggdb -Wall -Wno-unused-parameter -Wno-parentheses -Wpointer-arith -Wwrite-strings -Wno-long-long -Wall -funroll-loops -o client.o -c client.c In file included from ./mysql2_ext.h:29, from client.c:1: ./client.h:41:7: warning: no newline at end of file client.c: In function set_reconnect': client.c:434: error: MYSQL_OPT_RECONNECT' undeclared (first use in this function) client.c:434: error: (Each undeclared identifier is reported only once client.c:434: error: for each function it appears in.) client.c: In function set_connect_timeout': client.c:451: warning: passing arg 3 of mysql_options' from incompatible pointer type make: *** [client.o] Error 1 Gem files will remain installed in /usr/local/lib/ruby/gems/1.9.1/gems/mysql2-0.2.4 for inspection. 
Results logged to /usr/local/lib/ruby/gems/1.9.1/gems/mysql2-0.2.4/ext/mysql2/gem_make.out from /usr/local/lib/ruby/1.9.1/rubygems/installer.rb:486:in block in build_extensions' from /usr/local/lib/ruby/1.9.1/rubygems/installer.rb:446:in each' from /usr/local/lib/ruby/1.9.1/rubygems/installer.rb:446:in build_extensions' from /usr/local/lib/ruby/1.9.1/rubygems/installer.rb:198:in install' from /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.0.3/lib/bundler/source.rb:100:in install' from /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.0.3/lib/bundler/installer.rb:55:in block in run' from /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.0.3/lib/bundler/spec_set.rb:12:in block in each' from /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.0.3/lib/bundler/spec_set.rb:12:in each' from /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.0.3/lib/bundler/spec_set.rb:12:in each' from /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.0.3/lib/bundler/installer.rb:44:in run' from /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.0.3/lib/bundler/installer.rb:8:in install' from /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.0.3/lib/bundler/cli.rb:221:in install' from /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.0.3/lib/bundler/vendor/thor/task.rb:22:in run' from /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.0.3/lib/bundler/vendor/thor/invocation.rb:118:in invoke_task' from /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.0.3/lib/bundler/vendor/thor.rb:246:in dispatch' from /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.0.3/lib/bundler/vendor/thor/base.rb:389:in start' from /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.0.3/bin/bundle:13:in <top (required)>' from /usr/local/bin/bundle:19:in load' from /usr/local/bin/bundle:19:in <main>'
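The failing symbol, MYSQL_OPT_RECONNECT, comes from the MySQL 5.x client headers, and Red Hat 4's bundled MySQL client predates it, so the gem is most likely compiling against headers in /usr/include/mysql that are too old. A sketch of checking this and of pointing the gem at a newer client, with the alternate install path as a placeholder:

    # What client is the gem compiling against?
    mysql_config --version
    rpm -q mysql-devel
    # If a newer MySQL client is installed elsewhere, build the gem against it:
    gem install mysql2 -- --with-mysql-config=/opt/mysql55/bin/mysql_config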
mysql, ruby-on-rails, redhat
3
903
2
https://stackoverflow.com/questions/4032443/how-to-install-rails-3-0-on-redhat-4-0-with-mysql2-support
1,975,026
Where can I find the source code for Redhat's nash utility?
Where can I find the source code for Redhat's nash utility? Thanks, Chenz
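nash ships as part of Red Hat's mkinitrd package, so one way to get the code is to pull the corresponding source RPM. A sketch, assuming yum-utils is installed:

    # Download and unpack the mkinitrd source RPM (exact version will vary)
    yumdownloader --source mkinitrd
    rpm2cpio mkinitrd-*.src.rpm | cpio -idmv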
redhat, boot, initrd
3
912
2
https://stackoverflow.com/questions/1975026/where-can-i-find-the-source-code-for-redhats-nash-utility
72,520,680
Error: error:060800C8:digital envelope routines:EVP_DigestInit_ex:disabled for FIPS
Facing this error while deploying a React app on OpenShift using the Red Hat ubi8-minimal base image.
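This error typically comes from webpack's default md4 hashing being rejected when OpenSSL runs in FIPS mode. Two workarounds commonly suggested, to be verified against the image's Node version:

    # 1) Allow the legacy provider for the build (Node 17+):
    export NODE_OPTIONS=--openssl-legacy-provider
    npm run build
    # 2) Or switch webpack to a FIPS-permitted hash in webpack.config.js:
    #    output: { hashFunction: 'sha256' }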
reactjs, webpack, openshift, redhat, cryptojs
3
4,229
1
https://stackoverflow.com/questions/72520680/error-error060800c8digital-envelope-routinesevp-digestinit-exdisabled-for-f
17,374,790
MongoDB service not starting on RedHat - missing 'service_uri'
I'm on a Red Hat Enterprise Linux Client release 5.4 (Tikanga) machine. I created my /etc/yum.repos.d/10gen.repo like this: [10gen] name=10gen Repository baseurl=[URL] gpgcheck=0 enabled=1 I installed MongoDB with: sudo yum install mongo-10gen mongo-10gen-server but when I run: sudo service mongod start I get: Missing SERVICE_URI environment variable Help! :)
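"Missing SERVICE_URI environment variable" is not a message from MongoDB's init script, so it is worth checking what service actually resolves to on this box and bypassing it. A sketch:

    # Is 'service' the real /sbin/service, or some local alias/wrapper?
    type service
    # Read the init script and invoke it directly, skipping the wrapper:
    less /etc/init.d/mongod
    /etc/init.d/mongod start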
mongodb, redhat
2
4,223
2
https://stackoverflow.com/questions/17374790/mongodb-service-not-starting-on-redhat-missing-service-uri
64,246,506
"policy" E667: Fsync failed when using vim to edit pcie_aspm/parameters/policy
When I try to edit the pcie_aspm/parameters/policy file as root, I receive the E667 Fsync failed error in my vim editor. How do I edit this file?
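Files under /sys are kernel parameters rather than regular files, so editors that fsync on write tend to fail on them. A sketch of the usual alternatives (the value shown is one of the documented ASPM policies):

    # Write the parameter directly:
    echo powersave > /sys/module/pcie_aspm/parameters/policy
    # Or tell vim to skip fsync for this edit:
    vim -c 'set nofsync swapsync=' /sys/module/pcie_aspm/parameters/policy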
vim, redhat
2
20,215
3
https://stackoverflow.com/questions/64246506/policy-e667-fsync-failed-when-using-vim-to-edit-pcie-aspm-parameters-policy
15,330,775
What does gdate mean in this shell script?
I need to maintain this shell script: export DAYDAY=`gdate --date "30 days ago" +"%Y%m%d"` if [ -d $TMP/AA/$DAYDAY ]; then rm -r $TMP/AA/$DAYDAY fi But I can't run it because it can't find gdate ; this code is meant to clear the log directory that is exactly 30 days old.
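gdate is simply GNU date as installed on non-GNU systems (the script probably came from Solaris, a BSD, or a Mac with coreutils). On Red Hat, plain date already is the GNU version, so a sketch of the portable fix:

    # Same logic using the system's GNU date
    DAYDAY=$(date --date "30 days ago" +"%Y%m%d")
    export DAYDAY
    if [ -d "$TMP/AA/$DAYDAY" ]; then
        rm -r "$TMP/AA/$DAYDAY"
    fi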
linux, bash, redhat
2
11,940
2
https://stackoverflow.com/questions/15330775/what-does-gdate-mean-in-this-shell-script
754,139
Is Red Hat's JBoss EAP a fork of the JBoss AS code you get from JBOSS.org?
Does anyone know if Red Hat has forked the code you download from JBOSS.org? I'm guessing that the answer is "yes", but I'd like to confirm it. I can't pin it down at the Red Hat site, and jboss.org is giving me an HTTP 502 right now for some reason. I know that Red Hat owns JBoss. Does that mean that the code they sell in JBoss Developer Studio for $99 a pop is identical to what I can download from JBOSS.org without paying a fee? Or have they forked the for-fee version in some way?
java, jakarta-ee, jboss, redhat
2
1,200
2
https://stackoverflow.com/questions/754139/is-red-hats-jboss-eap-a-fork-of-the-jboss-as-code-you-get-from-jboss-org
59,363,640
Docker Error: Transaction check error in RED HAT
Good afternoon. I am trying to install Docker on Red Hat 8, following the tutorial on the page: [URL] I am getting the error below, I can't find a solution for it, and it doesn't let me move forward. [root@srvdevrma1 ~]# dnf -y install docker-ce --nobest Updating Subscription Management repositories. Last metadata expiration check: 0:29:57 ago on Mon 16 Dec 2019 03:38:50 PM -04. Dependencies resolved. Problem: package docker-ce-3:19.03.5-3.el7.x86_64 requires containerd.io >= 1.2.2-3, but none of the providers can be installed - cannot install the best candidate for the job - package containerd.io-1.2.10-3.2.el7.x86_64 is excluded - package containerd.io-1.2.2-3.3.el7.x86_64 is excluded - package containerd.io-1.2.2-3.el7.x86_64 is excluded - package containerd.io-1.2.4-3.1.el7.x86_64 is excluded - package containerd.io-1.2.5-3.1.el7.x86_64 is excluded - package containerd.io-1.2.6-3.3.el7.x86_64 is excluded ================================================================================================================================================= Package Architecture Version Repository Size ================================================================================================================================================= Installing: docker-ce x86_64 3:18.09.1-3.el7 docker-ce-stable 19 M Installing dependencies: containerd.io x86_64 1.2.0-3.el7 docker-ce-stable 22 M docker-ce-cli x86_64 1:19.03.5-3.el7 docker-ce-stable 39 M libcgroup x86_64 0.41-19.el8 rhel-8-for-x86_64-baseos-rpms 70 k Skipping packages with broken dependencies: docker-ce x86_64 3:19.03.5-3.el7 docker-ce-stable 24 M Transaction Summary ================================================================================================================================================= Install 4 Packages Skip 1 Package Total size: 80 M Installed size: 338 M Downloading Packages: [SKIPPED] containerd.io-1.2.0-3.el7.x86_64.rpm: Already downloaded [SKIPPED] docker-ce-18.09.1-3.el7.x86_64.rpm: Already downloaded [SKIPPED] docker-ce-cli-19.03.5-3.el7.x86_64.rpm: Already downloaded [SKIPPED] libcgroup-0.41-19.el8.x86_64.rpm: Already downloaded Running transaction check Transaction check succeeded. Running transaction test The downloaded packages were saved in cache until the next successful transaction. You can remove cached packages by executing 'dnf clean packages'.
**Error: Transaction check error: file /usr/share/man/man1/docker-attach.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-build.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-commit.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-container-prune.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-container.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-cp.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-create.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-diff.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-events.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-exec.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-export.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-history.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-image-prune.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-image.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-images.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-import.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-info.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-inspect.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-kill.1.gz from install of 
docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-load.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-login.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-logout.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-logs.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-pause.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-port.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-ps.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-pull.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-push.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-restart.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-rm.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-rmi.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-run.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-save.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-search.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-start.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-stats.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-stop.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file 
/usr/share/man/man1/docker-system-df.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-system-prune.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-system.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-tag.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-top.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-unpause.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-version.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-volume-create.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-volume-inspect.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-volume-ls.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-volume-prune.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-volume-rm.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-volume.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker-wait.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch file /usr/share/man/man1/docker.1.gz from install of docker-ce-cli-1:19.03.5-3.el7.x86_64 conflicts with file from package podman-manpages-1.4.2-5.module+el8.1.0+4240+893c1ab8.noarch** My problem is in the last lines. How can this conflict be resolved? I cannot work out which specific package is causing the conflict.
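The conflict is between the Docker CLI's man pages and the podman-manpages package that RHEL 8 preinstalls (it is named in every conflict line), so the usual way forward is to remove the podman packages before retrying. A sketch:

    # Remove the conflicting man pages (and, if they get pulled back in,
    # podman and buildah themselves)
    dnf remove -y podman-manpages
    dnf remove -y podman buildah
    # Then retry the Docker install
    dnf install -y docker-ce --nobest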
docker, redhat
2
3,913
2
https://stackoverflow.com/questions/59363640/docker-error-transaction-check-error-in-red-hat
10,078,205
Can't get LDAP functions to load in PHP
When attempting to use ldap_connect() , I get this error: Fatal error: Call to undefined function ldap_connect() I've recompiled PHP with the LDAP Apache module enabled, and I've edited my php.ini file too, uncommenting: extension=php_ldap.dll I'm on Red Hat Linux, PHP 5.3.10, Apache 2.2. Any ideas? Loaded Apache Modules: (contains *util_ldap*) core mod_authn_file mod_authn_default mod_authz_host mod_authz_groupfile mod_authz_user mod_authz_default mod_auth_basic mod_include mod_filter util_ldap mod_log_config mod_logio mod_env mod_expires mod_headers mod_setenvif mod_version mod_proxy mod_proxy_connect mod_proxy_ftp mod_proxy_http mod_proxy_scgi mod_proxy_ajp mod_proxy_balancer mod_ssl prefork http_core mod_mime mod_status mod_autoindex mod_asis mod_info mod_suexec mod_cgi mod_negotiation mod_dir mod_actions mod_userdir mod_alias mod_rewrite mod_so mod_auth_passthrough mod_bwlimited mod_fpcgid mod_php5 mod_security Apache Protocols: (contains: ldap ) dict, file, ftp, ftps, gopher, http, https, imap, imaps, ldap, ldaps, pop3, pop3s, rtsp, smtp, smtps, telnet, tftp
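One detail stands out: extension=php_ldap.dll is the Windows module name, while Linux builds load a .so, and for a self-compiled PHP the LDAP extension is normally enabled at configure time rather than via php.ini (util_ldap is an Apache module, not PHP's). A sketch of checking and of the two usual fixes:

    # Is the extension actually loaded into PHP?
    php -m | grep -i ldap
    # Self-compiled PHP: rebuild with LDAP support (other configure options elided)
    ./configure --with-ldap ...
    make && make install
    # Packaged PHP: the module usually comes from a separate package
    yum install php-ldap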
php, apache, redhat, openldap
2
47,094
4
https://stackoverflow.com/questions/10078205/cant-get-ldap-functions-to-load-in-php
28,449,573
Cannot install python-ldap with pip2.7 on RHEL 6.5 due to many install errors
I am installing python-ldap on a RHEL 6.5 server. I am on Python 2.7.9. I am using the following command to install it: pip2.7 install python-ldap The compilation process fails with lots of errors. Could someone please guide me? The session transcript is at [URL]
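python-ldap compiles C extensions, so on RHEL the failures are usually missing headers. A sketch of the typical prerequisites (if your Python 2.7.9 was built from source, its own headers come from that build rather than from python-devel):

    # The usual build dependencies on RHEL 6
    yum install -y gcc openldap-devel cyrus-sasl-devel openssl-devel
    pip2.7 install python-ldap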
python, python-2.7, redhat, python-ldap
2
2,306
2
https://stackoverflow.com/questions/28449573/cannot-install-python-ldap-with-pip2-7-on-rhel-6-5-due-to-many-install-errors
32,630,485
How to change TeamCity data directory?
How to change data directory path for existing TeamCity server?
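TeamCity locates its data directory through the TEAMCITY_DATA_PATH environment variable, so a move is essentially a stop/move/point/start sequence. A sketch with placeholder paths:

    # Stop the server, move the data, point TeamCity at the new location
    ./teamcity-server.sh stop
    mv ~/.BuildServer /data/teamcity
    export TEAMCITY_DATA_PATH=/data/teamcity
    ./teamcity-server.sh start

For a service-managed install, the variable would go in the service's environment rather than an interactive shell.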
linux, teamcity, redhat
2
7,565
2
https://stackoverflow.com/questions/32630485/how-to-change-teamcity-data-directory
25,318,766
gcc failed when pip upgrading pyzmq
I work under CentOS 5.6. And I have both gcc(gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-52)) and gcc44(gcc44 (GCC) 4.4.6 20110731 (Red Hat 4.4.6-3)) on /usr/bin/. When I did pip install -U pyzmq, I received the following error message: Downloading/unpacking pyzmq from [URL] Running setup.py egg_info for package pyzmq no previously-included directories found matching 'docs/build' no previously-included directories found matching 'docs/gh-pages' warning: no previously-included files found matching 'bundled/zeromq/src/Makefile*' warning: no previously-included files found matching 'setup.cfg' warning: no previously-included files found matching 'zmq/libzmq*' warning: no previously-included files matching '__pycache__/*' found anywhere in distribution warning: no previously-included files matching '.deps/*' found anywhere in distribution warning: no previously-included files matching '*.so' found anywhere in distribution warning: no previously-included files matching '*.pyd' found anywhere in distribution warning: no previously-included files matching '.git*' found anywhere in distribution warning: no previously-included files matching '.DS_Store' found anywhere in distribution warning: no previously-included files matching '.mailmap' found anywhere in distribution warning: no previously-included files matching 'Makefile.am' found anywhere in distribution warning: no previously-included files matching 'Makefile.in' found anywhere in distribution Installing collected packages: pyzmq Found existing installation: pyzmq 2.1.11 Uninstalling pyzmq: Successfully uninstalled pyzmq Running setup.py install for pyzmq Using bundled libzmq already have bundled/zeromq already have platform.hpp checking for timer_create ************************************************ ************************************************ cc -c /tmp/timer_createbuFGwC.c -o build/temp.linux-x86_64-2.7/tmp/timer_createbuFGwC.o unable to execute cc: No such file or directory no timer_create, linking librt Using bundled libsodium already have bundled/libsodium already have version.h already have crypto_stream_salsa20.h already have crypto_scalarmult_curve25519.h ************************************************ ************************************************ building 'zmq.libsodium' extension /usr/bin/gcc44 -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DNATIVE_LITTLE_ENDIAN=1 -Ibundled/libsodium/src/libsodium/include -Ibundled/libsodium/src/libsodium/include/sodium -I/opt/python27/include/python2.7 -c buildutils/initlibsodium.c -o build/temp.linux-x86_64-2.7/buildutils/initlibsodium.o buildutils/initlibsodium.c:10:20: error: Python.h: No such file or directory buildutils/initlibsodium.c:12: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘Methods’ buildutils/initlibsodium.c:40: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘initlibzmq’ error: command '/usr/bin/gcc44' failed with exit status 1 Complete output from command /opt/python27/bin/python2.7 -c "import setuptools;__file__='/home/fzeng/build/pyzmq/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --single-version-externally-managed --record /tmp/pip-90NjCM-record/install-record.txt: running install running build running build_py running build_ext running configure Using bundled libzmq already have bundled/zeromq already have platform.hpp checking for timer_create ************************************************ 
************************************************ cc -c /tmp/timer_createbuFGwC.c -o build/temp.linux-x86_64-2.7/tmp/timer_createbuFGwC.o unable to execute cc: No such file or directory no timer_create, linking librt Using bundled libsodium already have bundled/libsodium already have version.h already have crypto_stream_salsa20.h already have crypto_scalarmult_curve25519.h ************************************************ ************************************************ building 'zmq.libsodium' extension /usr/bin/gcc44 -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DNATIVE_LITTLE_ENDIAN=1 -Ibundled/libsodium/src/libsodium/include -Ibundled/libsodium/src/libsodium/include/sodium -I/opt/python27/include/python2.7 -c buildutils/initlibsodium.c -o build/temp.linux-x86_64-2.7/buildutils/initlibsodium.o buildutils/initlibsodium.c:10:20: error: Python.h: No such file or directory buildutils/initlibsodium.c:12: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘Methods’ buildutils/initlibsodium.c:40: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘initlibzmq’ error: command '/usr/bin/gcc44' failed with exit status 1 ---------------------------------------- Rolling back uninstall of pyzmq Command /opt/python27/bin/python2.7 -c "import setuptools;__file__='/home/fzeng/build/pyzmq/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --single-version-externally-managed --record /tmp/pip-90NjCM-record/install-record.txt failed with error code 1 in /home/fzeng/build/pyzmq Storing complete log in /root/.pip/pip.log Can anyone help me with this?
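Two distinct problems appear in that log: the build cannot execute cc at all, and Python.h is missing for the custom Python in /opt/python27. A sketch of the usual fixes (the symlink location is an assumption):

    # Give the build a working 'cc', or point it at gcc44 explicitly
    ln -s /usr/bin/gcc44 /usr/local/bin/cc
    export CC=gcc44
    # Confirm the headers for the custom Python actually exist
    ls /opt/python27/include/python2.7/Python.h
    pip install -U pyzmq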
python-2.7, redhat
2
5,303
1
https://stackoverflow.com/questions/25318766/gcc-failed-when-pip-upgrading-pyzmq
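The log above actually shows two independent problems: the configure probe cannot execute a plain cc (only gcc and gcc44 exist), and the compile step cannot find Python.h for the custom interpreter in /opt/python27. A minimal sketch of the usual fix, assuming that interpreter was built from source and its headers were simply never installed (paths are illustrative):

    # give pip's configure step a working `cc`
    sudo ln -s /usr/bin/gcc44 /usr/local/bin/cc
    # re-run `make install` in the Python 2.7 source tree so the headers land in
    # /opt/python27/include/python2.7, or install the matching python-devel
    # package if the interpreter came from an RPM; then retry:
    CC=gcc44 pip install -U pyzmq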
11,798,810
Which process is using my memory?
I have basically shut down all the processes, but I still see 18GB used when running the "top" command: top - 11:23:34 up 2 days, 19:20, 2 users, load average: 0.00, 0.00, 0.00 Tasks: 202 total, 1 running, 201 sleeping, 0 stopped, 0 zombie Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 32940056k total, 19210460k used, 13729596k free, 182428k buffers Swap: 2031608k total, 0k used, 2031608k free, 18688628k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 32326 csxbot 15 0 12760 1168 812 R 0.3 0.0 0:00.02 top 1 root 15 0 10368 700 584 S 0.0 0.0 0:02.17 init 2 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/0 3 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/0 4 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/0 5 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/1 6 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/1 7 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/1 What process is using my 19GB of memory? My OS is RHEL 6. How can I check that? ----------------------------- UPDATED ------------------------- The "free" command gives basically the same results. Since this update is a few hours after my original post, the exact numbers could differ, but the large-cache phenomenon still exists: 15GB of space is cached. total used free shared buffers cached Mem: 32168 15592 16575 0 76 14813 -/+ buffers/cache: 702 31465 Swap: 1983 0 1983
Which process is using my memory? I have basically shut down all the processes, but I still see 18GB used when running the "top" command: top - 11:23:34 up 2 days, 19:20, 2 users, load average: 0.00, 0.00, 0.00 Tasks: 202 total, 1 running, 201 sleeping, 0 stopped, 0 zombie Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 32940056k total, 19210460k used, 13729596k free, 182428k buffers Swap: 2031608k total, 0k used, 2031608k free, 18688628k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 32326 csxbot 15 0 12760 1168 812 R 0.3 0.0 0:00.02 top 1 root 15 0 10368 700 584 S 0.0 0.0 0:02.17 init 2 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/0 3 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/0 4 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/0 5 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/1 6 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/1 7 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/1 What process is using my 19GB of memory? My OS is RHEL 6. How can I check that? ----------------------------- UPDATED ------------------------- The "free" command gives basically the same results. Since this update is a few hours after my original post, the exact numbers could differ, but the large-cache phenomenon still exists: 15GB of space is cached. total used free shared buffers cached Mem: 32168 15592 16575 0 76 14813 -/+ buffers/cache: 702 31465 Swap: 1983 0 1983
linux, memory, redhat
2
4,818
2
https://stackoverflow.com/questions/11798810/which-process-is-using-my-memory
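Nothing is leaking here: of the 19210460k "used", 18688628k is the kernel page cache (the "cached" column), which is not owned by any process and is reclaimed automatically under memory pressure. The "-/+ buffers/cache" row of free shows the real application footprint (~702MB). A quick sketch for verifying this with standard tools:

    free -m                                     # 'used' in the -/+ row is the real figure
    ps aux --sort=-rss | head                   # largest resident-memory processes
    sync && echo 3 > /proc/sys/vm/drop_caches   # (as root) force the page cache to be dropped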
15,439,529
How to take an encrypted database backup in MySQL
I am using MySQL 5.5 on RHEL 5, and I want to use mysqldump to take a backup that is both compressed and encrypted. I am currently running mysqldump as below: mysqldump -u root -p db_name | gzip >file_name.sql.gz This gives a compressed backup, but not an encrypted one.
How to take an encrypted database backup in MySQL I am using MySQL 5.5 on RHEL 5, and I want to use mysqldump to take a backup that is both compressed and encrypted. I am currently running mysqldump as below: mysqldump -u root -p db_name | gzip >file_name.sql.gz This gives a compressed backup, but not an encrypted one.
encryption, mysql, redhat
2
15,441
5
https://stackoverflow.com/questions/15439529/how-to-take-encrypted-database-backup-in-mysql
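mysqldump itself does not encrypt, but the dump stream can be piped through a symmetric cipher after compression. A sketch using openssl (the cipher choice and passphrase handling here are illustrative, not a recommendation):

    # encrypt (prompts for a passphrase)
    mysqldump -u root -p db_name | gzip | openssl enc -aes-256-cbc -salt -out file_name.sql.gz.enc
    # decrypt and restore
    openssl enc -d -aes-256-cbc -in file_name.sql.gz.enc | gunzip | mysql -u root -p db_name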
7,970,544
CMake "make install" output can't find shared Qt libraries under Redhat
I have a Qt project that I am configuring and building with CMake. When I just type "make" to build the app, it creates an app in my build directory and all works fine. However, when I type "make install" to install into a release directory, the resulting executable won't run because it can't find shared libraries. I get an error saying: release/testapp: error while loading shared libraries: libQtGui.so.4: cannot open shared object file: No such file or directory What is "make install" doing to the executable? I thought it would just copy the file, so it must be doing something to it. I am executing both files from the same terminal, so my environment is the same. Here is the output from ldd on the executable in the release directory (generated by "make install"): libQtGui.so.4 => not found libQtCore.so.4 => not found libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0x00906000) libm.so.6 => /lib/tls/libm.so.6 (0x00695000) libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x008fa000) libc.so.6 => /lib/tls/libc.so.6 (0x00567000) /lib/ld-linux.so.2 (0x00548000) Whereas if I run ldd on the executable in the build directory (created by "make") it outputs the following: libQtGui.so.4 => /usr/local/Trolltech/Qt-4.7.3/lib/libQtGui.so.4 (0x00560000) libQtCore.so.4 => /usr/local/Trolltech/Qt-4.7.3/lib/libQtCore.so.4 (0x00111000) libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0x003ec000) libm.so.6 => /lib/tls/libm.so.6 (0x004b7000) libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x004da000) libc.so.6 => /lib/tls/libc.so.6 (0x033fe000) libpthread.so.0 => /lib/tls/libpthread.so.0 (0x004e4000) libgthread-2.0.so.0 => /usr/lib/libgthread-2.0.so.0 (0x004f6000) libglib-2.0.so.0 => /usr/lib/libglib-2.0.so.0 (0x02e76000) libpng12.so.0 => /usr/lib/libpng12.so.0 (0x004fa000) libz.so.1 => /usr/lib/libz.so.1 (0x0051e000) libfreetype.so.6 => /usr/lib/libfreetype.so.6 (0x02df6000) libSM.so.6 => /usr/X11R6/lib/libSM.so.6 (0x0052e000) libICE.so.6 => /usr/X11R6/lib/libICE.so.6 (0x022ac000) libXrender.so.1 => /usr/X11R6/lib/libXrender.so.1 (0x00537000) libfontconfig.so.1 => /usr/lib/libfontconfig.so.1 (0x02442000) libXext.so.6 => /usr/X11R6/lib/libXext.so.6 (0x0218a000) libX11.so.6 => /usr/X11R6/lib/libX11.so.6 (0x03937000) libdl.so.2 => /lib/libdl.so.2 (0x0053f000) librt.so.1 => /lib/tls/librt.so.1 (0x02238000) /lib/ld-linux.so.2 (0x00548000) libexpat.so.0 => /usr/lib/libexpat.so.0 (0x02377000) Here is the CMakeLists.txt file used to create these files: # CMakeLists.txt cmake_minimum_required(VERSION 2.8) project(testapp) set(CMAKE_VERBOSE_MAKEFILE OFF) find_package(Qt4 REQUIRED) set (CMAKE_C_FLAGS "-m32 -g") set (CMAKE_CXX_FLAGS "-m32 -g") set (CMAKE_INSTALL_PREFIX release) set(PROGNAME testapp) add_definitions(-Wall) set(testapp_SRCS main.cpp testapp.cpp ) set(testapp_MOC_HDRS testapp.h ) set(QT_USE_QTGUI TRUE) include(${QT_USE_FILE}) include_directories( ${CMAKE_CURRENT_BINARY_DIR} ${CMAKE_CURRENT_SOURCE_DIR} ) qt4_wrap_cpp(testapp_MOC_SRCS ${testapp_MOC_HDRS}) add_executable(${PROGNAME} ${testapp_SRCS} ${testapp_MOC_SRCS} ) target_link_libraries(${PROGNAME} ${QT_LIBRARIES} ) install(TARGETS ${PROGNAME} DESTINATION .) It's probably something silly, but why does the executable from "make" work while the one from "make install" gives an error? The files are both the same size. Thanks
CMake "make install" output can't find shared Qt libraries under Redhat I have a Qt project that I am configuring and building with CMake. When I just type "make" to build the app, it creates an app in my build directory and all works fine. However, when I type "make install" to install into a release directory, the resulting executable won't run because it can't find shared libraries. I get an error saying: release/testapp: error while loading shared libraries: libQtGui.so.4: cannot open shared object file: No such file or directory What is "make install" doing to the executable? I thought it would just copy the file, so it must be doing something to it. I am executing both files from the same terminal, so my environment is the same. Here is the output from ldd on the executable in the release directory (generated by "make install"): libQtGui.so.4 => not found libQtCore.so.4 => not found libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0x00906000) libm.so.6 => /lib/tls/libm.so.6 (0x00695000) libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x008fa000) libc.so.6 => /lib/tls/libc.so.6 (0x00567000) /lib/ld-linux.so.2 (0x00548000) Whereas if I run ldd on the executable in the build directory (created by "make") it outputs the following: libQtGui.so.4 => /usr/local/Trolltech/Qt-4.7.3/lib/libQtGui.so.4 (0x00560000) libQtCore.so.4 => /usr/local/Trolltech/Qt-4.7.3/lib/libQtCore.so.4 (0x00111000) libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0x003ec000) libm.so.6 => /lib/tls/libm.so.6 (0x004b7000) libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x004da000) libc.so.6 => /lib/tls/libc.so.6 (0x033fe000) libpthread.so.0 => /lib/tls/libpthread.so.0 (0x004e4000) libgthread-2.0.so.0 => /usr/lib/libgthread-2.0.so.0 (0x004f6000) libglib-2.0.so.0 => /usr/lib/libglib-2.0.so.0 (0x02e76000) libpng12.so.0 => /usr/lib/libpng12.so.0 (0x004fa000) libz.so.1 => /usr/lib/libz.so.1 (0x0051e000) libfreetype.so.6 => /usr/lib/libfreetype.so.6 (0x02df6000) libSM.so.6 => /usr/X11R6/lib/libSM.so.6 (0x0052e000) libICE.so.6 => /usr/X11R6/lib/libICE.so.6 (0x022ac000) libXrender.so.1 => /usr/X11R6/lib/libXrender.so.1 (0x00537000) libfontconfig.so.1 => /usr/lib/libfontconfig.so.1 (0x02442000) libXext.so.6 => /usr/X11R6/lib/libXext.so.6 (0x0218a000) libX11.so.6 => /usr/X11R6/lib/libX11.so.6 (0x03937000) libdl.so.2 => /lib/libdl.so.2 (0x0053f000) librt.so.1 => /lib/tls/librt.so.1 (0x02238000) /lib/ld-linux.so.2 (0x00548000) libexpat.so.0 => /usr/lib/libexpat.so.0 (0x02377000) Here is the CMakeLists.txt file used to create these files: # CMakeLists.txt cmake_minimum_required(VERSION 2.8) project(testapp) set(CMAKE_VERBOSE_MAKEFILE OFF) find_package(Qt4 REQUIRED) set (CMAKE_C_FLAGS "-m32 -g") set (CMAKE_CXX_FLAGS "-m32 -g") set (CMAKE_INSTALL_PREFIX release) set(PROGNAME testapp) add_definitions(-Wall) set(testapp_SRCS main.cpp testapp.cpp ) set(testapp_MOC_HDRS testapp.h ) set(QT_USE_QTGUI TRUE) include(${QT_USE_FILE}) include_directories( ${CMAKE_CURRENT_BINARY_DIR} ${CMAKE_CURRENT_SOURCE_DIR} ) qt4_wrap_cpp(testapp_MOC_SRCS ${testapp_MOC_HDRS}) add_executable(${PROGNAME} ${testapp_SRCS} ${testapp_MOC_SRCS} ) target_link_libraries(${PROGNAME} ${QT_LIBRARIES} ) install(TARGETS ${PROGNAME} DESTINATION .) It's probably something silly, but why does the executable from "make" work while the one from "make install" gives an error? The files are both the same size. Thanks
linux, qt, cmake, redhat
2
3,786
2
https://stackoverflow.com/questions/7970544/cmake-make-install-output-cant-find-shared-qt-libraries-under-redhat
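The likely culprit is the RPATH: CMake links the build-tree binary with an RPATH pointing at /usr/local/Trolltech/Qt-4.7.3/lib and, by default, strips it when installing, so the installed copy can no longer locate the Qt libraries. A sketch of the usual fix, assuming the Qt path above (the equivalent set() calls can go in CMakeLists.txt instead):

    cmake -DCMAKE_INSTALL_RPATH=/usr/local/Trolltech/Qt-4.7.3/lib \
          -DCMAKE_INSTALL_RPATH_USE_LINK_PATH=ON .
    make && make install

Alternatively, exporting LD_LIBRARY_PATH=/usr/local/Trolltech/Qt-4.7.3/lib before running the installed binary works without relinking.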
58,295,508
mongosql not starting on Red Hat 8: "error while loading shared libraries: libssl.so.10: cannot open shared object file: No such file or directory"
I have installed MongoDB on Red Hat 8 and it's working fine. I also installed the MongoDB Connector for BI following this tutorial: [URL]. Now, when I try to run mongosqld with the command: mongosqld I get this error message: mongosqld: error while loading shared libraries: libssl.so.10: cannot open shared object file: No such file or directory This is the installed OpenSSL version: OpenSSL 1.1.1 FIPS 11 Sep 2018 I have already searched but couldn't find an answer that works for this issue. Does anyone know how to fix this? Thanks in advance!
mongosql not starting on Red Hat 8: "error while loading shared libraries: libssl.so.10: cannot open shared object file: No such file or directory" I have installed MongoDB on Red Hat 8 and it's working fine. I also installed the MongoDB Connector for BI following this tutorial: [URL]. Now, when I try to run mongosqld with the command: mongosqld I get this error message: mongosqld: error while loading shared libraries: libssl.so.10: cannot open shared object file: No such file or directory This is the installed OpenSSL version: OpenSSL 1.1.1 FIPS 11 Sep 2018 I have already searched but couldn't find an answer that works for this issue. Does anyone know how to fix this? Thanks in advance!
mongodb, redhat, connector
2
5,648
1
https://stackoverflow.com/questions/58295508/mongosql-not-starting-on-red-hat-8-error-while-loading-shared-libraries-libss
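libssl.so.10 comes from the OpenSSL 1.0 line, which RHEL 8 (OpenSSL 1.1.1) no longer ships by default; the BI connector binary was built against that older ABI. A sketch of the usual fix, assuming the compatibility package is available in your enabled repositories:

    sudo yum install compat-openssl10
    mongosqld    # libssl.so.10 / libcrypto.so.10 should now resolve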
18,370,906
Can I write the server-identities value with the CLI in Red Hat JBoss EAP 6?
I'd like to know how to use the CLI to add a new secret value attribute to the server-identities attribute for a Managed Domain instance. While adding a new user via the command line we are recommended to add the secret value to the server instance. But there's not a lot of information given on how to do that. We know that this occurs in the host-master.xml file for instance, and that I understand that I can edit this in the XML. An example is as follows: <management> <security-realms> <security-realm name="ManagementRealm"> <server-identities> <secret value="superdupersecret" /> </server-identities> <authentication> <local default-user="$local" /> <properties path="mgmt-users.properties" relative-to="jboss.domain.config.dir"/> </authentication> </security-realm> . . . </management> I can view the node by running the read-resource operation as follows from the root (the "shotgun approach" to piping all the parameters and variables passed at runtime out for a quick search). I could have easily grepped it. :read-resource(recursive=true, include-runtime=true) > nameoffile.txt This shows the path of the node I'm after. "host" => {"master" => { ...snip... "core-service" => { "management" => { "ldap-connection" => undefined, "management-interface" => { "native-interface" => { "interface" => "management", "port" => expression "${jboss.management.native.port:9999}", "security-realm" => "ManagementRealm" }, "http-interface" => { "console-enabled" => true, "interface" => "management", "port" => expression "${jboss.management.http.port:9990}", "secure-port" => undefined, "security-realm" => "ManagementRealm" } }, "security-realm" => { "ManagementRealm" => { "authorization" => undefined, "plug-in" => undefined, "server-identity" => undefined, "authentication" => { "local" => { "allowed-users" => undefined, "default-user" => "$local" I can then cd into the node, but I'm not sure what the operation composition is at this level. I'm able to write other values and attributes in the CLI, but at this level I'm unsure what the method is. Any suggestions appreciated. For example, these fail. Assumptions are that I don't need to add this attribute first before writing the value, and that this node is even able to be written in the CLI (any thoughts Alexey?). [domain@localhost:9999 security-realm=ManagementRealm] /host=master/core-service=management/security-realm=ManagementRealm/server-identity/:write(server-identity="new_value") And: [domain@localhost:9999 security-realm=ManagementRealm] /host=master/core-service=management/security-realm=ManagementRealm/:write(server-identity="new_value")
Can I write the server-identities value with the CLI in Red Hat JBoss EAP 6? I'd like to know how to use the CLI to add a new secret value attribute to the server-identities attribute for a Managed Domain instance. While adding a new user via the command line we are recommended to add the secret value to the server instance. But there's not a lot of information given on how to do that. We know that this occurs in the host-master.xml file for instance, and that I understand that I can edit this in the XML. An example is as follows: <management> <security-realms> <security-realm name="ManagementRealm"> <server-identities> <secret value="superdupersecret" /> </server-identities> <authentication> <local default-user="$local" /> <properties path="mgmt-users.properties" relative-to="jboss.domain.config.dir"/> </authentication> </security-realm> . . . </management> I can view the node by running the read-resource operation as follows from the root (the "shotgun approach" to piping all the parameters and variables passed at runtime out for a quick search). I could have easily grepped it. :read-resource(recursive=true, include-runtime=true) > nameoffile.txt This shows the path of the node I'm after. "host" => {"master" => { ...snip... "core-service" => { "management" => { "ldap-connection" => undefined, "management-interface" => { "native-interface" => { "interface" => "management", "port" => expression "${jboss.management.native.port:9999}", "security-realm" => "ManagementRealm" }, "http-interface" => { "console-enabled" => true, "interface" => "management", "port" => expression "${jboss.management.http.port:9990}", "secure-port" => undefined, "security-realm" => "ManagementRealm" } }, "security-realm" => { "ManagementRealm" => { "authorization" => undefined, "plug-in" => undefined, "server-identity" => undefined, "authentication" => { "local" => { "allowed-users" => undefined, "default-user" => "$local" I can then cd into the node, but I'm not sure what the operation composition is at this level. I'm able to write other values and attributes in the CLI, but at this level I'm unsure what the method is. Any suggestions appreciated. For example, these fail. Assumptions are that I don't need to add this attribute first before writing the value, and that this node is even able to be written in the CLI (any thoughts Alexey?). [domain@localhost:9999 security-realm=ManagementRealm] /host=master/core-service=management/security-realm=ManagementRealm/server-identity/:write(server-identity="new_value") And: [domain@localhost:9999 security-realm=ManagementRealm] /host=master/core-service=management/security-realm=ManagementRealm/:write(server-identity="new_value")
jboss7.x, redhat, wildfly
2
2,918
1
https://stackoverflow.com/questions/18370906/can-i-write-the-server-identities-value-with-the-cli-in-red-hat-jboss-eap-6
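In the EAP 6 management model the secret is a child resource named server-identity=secret, not a writable attribute on the realm, which is why both :write attempts fail. A hedged sketch of the usual CLI sequence (verify the path with :read-resource on your own host controller; the stored value must be Base64-encoded):

    # encode the plaintext secret first
    echo -n 'superdupersecret' | base64
    # then, from jboss-cli.sh --connect:
    /host=master/core-service=management/security-realm=ManagementRealm/server-identity=secret:add(value="<base64 output>")

If the secret resource already exists, :write-attribute(name=value, value="<base64 output>") on that same path updates it instead.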
13,856,242
Error during installation of Cairo package on Red Hat (RHEL)
I am working in R and need to install the Cairo package: install.packages("Cairo") Specification: R version 2.15.0 (2012-03-30); OS: Red Hat Enterprise Linux Server release 6.1 (Santiago). I'm getting the following error message: xlib-backend.c:34:74: fatal error: X11/Intrinsic.h: No such file or directory compilation terminated. make: *** [xlib-backend.o] Error 1 ERROR: compilation failed for package ‘Cairo’ * removing ‘/usr/local/lib64/R/library/Cairo’ The downloaded source packages are in ‘/tmp/RtmpqtvjPA/downloaded_packages’ Updating HTML index of packages in '.Library' Making packages.html ... done Warning message: In install.packages("Cairo") : installation of package ‘Cairo’ had non-zero exit status
Error during installation of Cairo package on Red Hat (RHEL) I am working in R and need to install the Cairo package: install.packages("Cairo") Specification: R version 2.15.0 (2012-03-30); OS: Red Hat Enterprise Linux Server release 6.1 (Santiago). I'm getting the following error message: xlib-backend.c:34:74: fatal error: X11/Intrinsic.h: No such file or directory compilation terminated. make: *** [xlib-backend.o] Error 1 ERROR: compilation failed for package ‘Cairo’ * removing ‘/usr/local/lib64/R/library/Cairo’ The downloaded source packages are in ‘/tmp/RtmpqtvjPA/downloaded_packages’ Updating HTML index of packages in '.Library' Making packages.html ... done Warning message: In install.packages("Cairo") : installation of package ‘Cairo’ had non-zero exit status
package, redhat, cairo, rhel
2
3,942
2
https://stackoverflow.com/questions/13856242/error-during-installation-of-cairo-package-on-red-hat-rhel
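X11/Intrinsic.h is shipped by the libXt development package on RHEL, so this is a missing system build dependency rather than an R problem. A sketch of the usual fix (cairo-devel is added on the assumption that the Cairo package will want those headers next):

    sudo yum install libXt-devel cairo-devel
    # then retry inside R:
    #   install.packages("Cairo")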
10,995,350
$PATH variable for every running process in Linux
How can I find the $PATH variable for every running process on my Linux system?
$PATH variable for every running process in Linux How can I find the $PATH variable for every running process on my Linux system?
linux, bash, environment-variables, redhat
2
329
2
https://stackoverflow.com/questions/10995350/path-variable-for-every-running-process-in-linux
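Each process exposes the environment it was started with in /proc/<pid>/environ as NUL-separated KEY=value pairs (reading other users' processes requires root). A sketch that prints PATH for every PID:

    for p in /proc/[0-9]*; do
      path=$(tr '\0' '\n' < "$p/environ" 2>/dev/null | grep '^PATH=')
      [ -n "$path" ] && echo "${p#/proc/}: $path"
    done

Note this is the environment at exec time; changes a process makes to its own PATH afterwards are not visible here.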
74,008,773
How to update git version on RHEL?
I just created a fresh RHEL VM on GCP to play with Kubernetes. It did not have git installed, so I used the yum package manager to install it, but yum did not install the latest version of git. Current version: 2.38.0 / 3 October 2022. Version installed by yum: 1.8.3.1
How to update git version on RHEL? I just created a fresh RHEL VM on GCP to play with Kubernetes. It did not have git installed, so I used the yum package manager to install it, but yum did not install the latest version of git. Current version: 2.38.0 / 3 October 2022. Version installed by yum: 1.8.3.1
redhat
2
15,434
4
https://stackoverflow.com/questions/74008773/how-to-update-git-version-on-rhel
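git 1.8.3.1 is the stock package for RHEL 7, so yum is behaving as designed; a newer git has to come from a third-party repository or a source build. A sketch using the IUS repository, assuming this really is RHEL/CentOS 7 (the git236 package name is IUS's and changes as they track upstream releases):

    sudo yum install https://repo.ius.io/ius-release-el7.rpm
    sudo yum remove git
    sudo yum install git236
    git --version

On RHEL 8/9, checking AppStream for a newer build or compiling git from source are the analogous routes.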
69,703,353
How can I SSH into a Redhat EC2 instance?
I've read the AWS docs, which say I should use: ssh -i /path/my-key-pair.pem my-instance-user-name@my-instance-public-dns-name but I have no idea what my-instance-user-name should be. For Ubuntu I always do ssh -i /path/my-key-pair.pem ubuntu@my-instance-public-dns-name after changing the permissions of my key via chmod 400 /path/my-key-pair.pem
How can I SSH into a Redhat EC2 instance? I've read the AWS docs, which say I should use: ssh -i /path/my-key-pair.pem my-instance-user-name@my-instance-public-dns-name but I have no idea what my-instance-user-name should be. For Ubuntu I always do ssh -i /path/my-key-pair.pem ubuntu@my-instance-public-dns-name after changing the permissions of my key via chmod 400 /path/my-key-pair.pem
amazon-web-services, amazon-ec2, ssh, redhat
2
1,660
1
https://stackoverflow.com/questions/69703353/how-can-i-ssh-into-a-redhat-ec2-instance
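For the official Red Hat AMIs the default user is ec2-user; the name is per-distro (ubuntu for Ubuntu, centos for CentOS, admin for Debian, and so on). So, assuming a stock RHEL AMI:

    chmod 400 /path/my-key-pair.pem
    ssh -i /path/my-key-pair.pem ec2-user@my-instance-public-dns-name

For a custom AMI, the configured user has to come from the AMI's own documentation.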
64,919,789
How to install Xvfb on RH8?
I need to install Xvfb on Redhat 8; however, the usual way doesn't work: yum -y install xorg-x11-server-Xvfb No match for argument: xorg-x11-server-Xvfb Error: Unable to find a match: xorg-x11-server-Xvfb Following the suggestion from "How to install Xvfb (X virtual framebuffer) on Redhat 6.5?", I tried: wget [URL] yum localinstall xorg-x11-server-Xvfb-1.10.4-6.el6.x86_64.rpm But that gives: Error: Problem: conflicting requests nothing provides libXdmcp.so.6()(64bit) needed by xorg-x11-server-Xvfb-1.10.4-6.el6.x86_64 nothing provides libXfont.so.1()(64bit) needed by xorg-x11-server-Xvfb-1.10.4-6.el6.x86_64 nothing provides libcrypto.so.10()(64bit) needed by xorg-x11-server-Xvfb-1.10.4-6.el6.x86_64 nothing provides xorg-x11-server-common >= 1.10.4-6.el6 needed by xorg-x11-server-Xvfb-1.10.4-6.el6.x86_64 Is there any way to install Xvfb on RH8?
How to install Xvfb on RH8? I need to install Xvfb on Redhat 8; however, the usual way doesn't work: yum -y install xorg-x11-server-Xvfb No match for argument: xorg-x11-server-Xvfb Error: Unable to find a match: xorg-x11-server-Xvfb Following the suggestion from "How to install Xvfb (X virtual framebuffer) on Redhat 6.5?", I tried: wget [URL] yum localinstall xorg-x11-server-Xvfb-1.10.4-6.el6.x86_64.rpm But that gives: Error: Problem: conflicting requests nothing provides libXdmcp.so.6()(64bit) needed by xorg-x11-server-Xvfb-1.10.4-6.el6.x86_64 nothing provides libXfont.so.1()(64bit) needed by xorg-x11-server-Xvfb-1.10.4-6.el6.x86_64 nothing provides libcrypto.so.10()(64bit) needed by xorg-x11-server-Xvfb-1.10.4-6.el6.x86_64 nothing provides xorg-x11-server-common >= 1.10.4-6.el6 needed by xorg-x11-server-Xvfb-1.10.4-6.el6.x86_64 Is there any way to install Xvfb on RH8?
redhat, yum, xvfb
2
10,329
3
https://stackoverflow.com/questions/64919789/how-to-install-xvfb-on-rh8
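On RHEL 8 the package still exists under the same name but moved to the AppStream repository, so pulling the ancient el6 RPM (and its retired dependencies) is the wrong route. A sketch, assuming a subscription-registered RHEL 8 host:

    sudo subscription-manager repos --enable rhel-8-for-x86_64-appstream-rpms
    sudo dnf install xorg-x11-server-Xvfb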
61,194,166
Business Central call a DMN file from another DMN
I am using Red Hat Business Central and am trying to call one DMN file from another. Use case: if salary > 40000 then calculate Tax from firstdmn, else from seconddmn. I have added a context and a literal expression in the Tax DMN decision and included a model below, but I don't know how to proceed further. Please suggest what to do.
Business Central call a DMN file from another DMN I am using Red Hat Business Central and am trying to call one DMN file from another. Use case: if salary > 40000 then calculate Tax from firstdmn, else from seconddmn. I have added a context and a literal expression in the Tax DMN decision and included a model below, but I don't know how to proceed further. Please suggest what to do.
redhat, rules, dmn, decision-model-notation
2
972
1
https://stackoverflow.com/questions/61194166/business-central-call-a-dmn-file-from-another-dmn
44,389,322
MongoDB - Permission denied for socket: 127.0.0.1:27025
I am getting an error log after rebooting Red Hat 7: listen(): bind() failed errno:13 Permission denied for socket: 127.0.0.1:27025 systemd[1]: mongod.service: main process exited, code=exited, status=100/n/a mongod.service [Unit] Description=High-performance, schema-free document-oriented database After=network.target [Service] User=mongod Group=mongod Environment="OPTIONS=--quiet -f /etc/mongod1.conf" ExecStart=/usr/bin/mongod $OPTIONS run ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb ExecStartPre=/usr/bin/chown root:root /var/run/mongodb ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb PermissionsStartOnly=true PIDFile=/var/run/mongodb/mongod1.pid [Install] WantedBy=multi-user.target mongod1.conf #systemLog: destination: file logAppend: true path: /home/telenstanley/mongod1.log # Where and how to store data. storage: dbPath: /var/lib/mongo/db1 journal: enabled: true # engine: mmapv1: smallFiles: true # wiredTiger: # how the process runs processManagement: fork: false # fork and run in background pidFilePath: /var/run/mongodb/mongod1.pid # location of pidfile # network interfaces net: port: 27025 bindIp: 127.0.0.1 # Listen to local interface only, comment to listen on all interfaces. #security: # authorization: enabled #operationProfiling: replication: oplogSizeMB: 1024 replSetName: testrep #sharding: ## Enterprise-Only Options I have not been able to find any useful answer for my problem yet, but mongod starts successfully when run as root from the command line: sudo mongod -f mongod1.conf
MongoDB - Permission denied for socket: 127.0.0.1:27025 I am getting an error log after rebooting Red Hat 7: listen(): bind() failed errno:13 Permission denied for socket: 127.0.0.1:27025 systemd[1]: mongod.service: main process exited, code=exited, status=100/n/a mongod.service [Unit] Description=High-performance, schema-free document-oriented database After=network.target [Service] User=mongod Group=mongod Environment="OPTIONS=--quiet -f /etc/mongod1.conf" ExecStart=/usr/bin/mongod $OPTIONS run ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb ExecStartPre=/usr/bin/chown root:root /var/run/mongodb ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb PermissionsStartOnly=true PIDFile=/var/run/mongodb/mongod1.pid [Install] WantedBy=multi-user.target mongod1.conf #systemLog: destination: file logAppend: true path: /home/telenstanley/mongod1.log # Where and how to store data. storage: dbPath: /var/lib/mongo/db1 journal: enabled: true # engine: mmapv1: smallFiles: true # wiredTiger: # how the process runs processManagement: fork: false # fork and run in background pidFilePath: /var/run/mongodb/mongod1.pid # location of pidfile # network interfaces net: port: 27025 bindIp: 127.0.0.1 # Listen to local interface only, comment to listen on all interfaces. #security: # authorization: enabled #operationProfiling: replication: oplogSizeMB: 1024 replSetName: testrep #sharding: ## Enterprise-Only Options I have not been able to find any useful answer for my problem yet, but mongod starts successfully when run as root from the command line: sudo mongod -f mongod1.conf
linux, mongodb, sockets, redhat, systemd
2
3,084
2
https://stackoverflow.com/questions/44389322/mongodb-permission-denied-for-socket-127-0-0-127025
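The telling detail is that it binds fine from a shell but not from systemd: a shell-launched mongod runs unconfined, while the unit starts it in SELinux's confined mongod_t domain, which may only bind ports labelled mongod_port_t (27017 by default, not 27025). A sketch of the usual fix:

    sudo semanage port -a -t mongod_port_t -p tcp 27025
    sudo systemctl restart mongod
    sudo semanage port -l | grep mongod    # confirm the label

Separately, the unit's ExecStartPre chowns /var/run/mongodb to root:root while the service runs as the mongod user, so writing the PID file there can also fail; chowning it to mongod:mongod is the safer choice.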
37,757,894
Apache 403 Forbidden error when configuring a directory outside of root directory
I'm trying to configure my server to use the directory /home/imagenesDBD and I can't get it to work. I have googled a lot and tried every sample I found, but nothing is working. I just added the following to the httpd.conf file: Alias "/imagenesDBD" "/home/imagenesDBD" <Directory "/home/imagenesDBD"> Options FollowSymLinks AllowOverride None Order allow,deny allow from all </Directory> The directory has 0777 permissions. The context of the directories is: I was expecting this URL to work: [URL] but got the following error: 403 - You don't have permission to access /imagenesDBD/ on this server. Thanks for your help
Apache 403 Forbidden error when configuring a directory outside of root directory I'm trying to configure my server to use the directory /home/imagenesDBD and I can't get it to work. I have googled a lot and tried every sample I found, but nothing is working. I just added the following to the httpd.conf file: Alias "/imagenesDBD" "/home/imagenesDBD" <Directory "/home/imagenesDBD"> Options FollowSymLinks AllowOverride None Order allow,deny allow from all </Directory> The directory has 0777 permissions. The context of the directories is: I was expecting this URL to work: [URL] but got the following error: 403 - You don't have permission to access /imagenesDBD/ on this server. Thanks for your help
apache, redhat, selinux
2
4,035
2
https://stackoverflow.com/questions/37757894/apache-403-forbidden-error-when-configuring-a-directory-outside-of-root-director
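On Red Hat systems, a 403 despite 0777 permissions and a correct Alias almost always means SELinux: files under /home carry a home-directory label rather than the httpd_sys_content_t type Apache is allowed to read. A sketch of the usual fix (persistent labelling; a bare chcon would not survive a relabel):

    sudo semanage fcontext -a -t httpd_sys_content_t "/home/imagenesDBD(/.*)?"
    sudo restorecon -Rv /home/imagenesDBD
    sudo ausearch -m avc -ts recent    # inspect any remaining denials

Apache also needs search (+x) permission on /home itself, and if this is Apache 2.4 the 2.2-style Order/allow directives should be replaced with Require all granted.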