Dataset schema (column: type, observed range or string lengths):

- question_id: int64 (82.3k to 79.7M)
- title_clean: string (lengths 15 to 158)
- body_clean: string (lengths 62 to 28.5k)
- full_text: string (lengths 95 to 28.5k)
- tags: string (lengths 4 to 80)
- score: int64 (0 to 1.15k)
- view_count: int64 (22 to 1.62M)
- answer_count: int64 (0 to 30)
- link: string (lengths 58 to 125)
27,778,593
Installing nodejs on Red Hat
I am trying to install node.js on Red Hat Enterprise Linux Server release 6.1 using the following command:

    sudo yum install nodejs npm

I got the following error:

    Error: Package: nodejs-0.10.24-1.el6.x86_64 (epel)
           Requires: libssl.so.10(libssl.so.10)(64bit)
    Error: Package: nodejs-devel-0.10.24-1.el6.x86_64 (epel)
           Requires: libcrypto.so.10(libcrypto.so.10)(64bit)
    Error: Package: nodejs-0.10.24-1.el6.x86_64 (epel)
           Requires: libcrypto.so.10(libcrypto.so.10)(64bit)
    Error: Package: nodejs-devel-0.10.24-1.el6.x86_64 (epel)
           Requires: libssl.so.10(libssl.so.10)(64bit)
     You could try using --skip-broken to work around the problem
     You could try running: rpm -Va --nofiles --nodigest

I tried the following command as well:

    sudo yum install -y nodejs

I am getting the following error:

    Error: Package: nodejs-0.10.24-1.el6.x86_64 (epel)
           Requires: libssl.so.10(libssl.so.10)(64bit)
    Error: Package: nodejs-0.10.24-1.el6.x86_64 (epel)
           Requires: libcrypto.so.10(libcrypto.so.10)(64bit)

How should I install it? I want to install the latest version.
node.js, redhat, yum
17
81,357
9
https://stackoverflow.com/questions/27778593/installing-nodejs-on-red-hat
20,112,355
Apache 2.4.x manual build and install on RHEL 6.4
OS: Red Hat Enterprise Linux Server release 6.4 (Santiago)

The current yum installation of apache on this OS is 2.2.15. I require the latest 2.4.x branch, so I have gone about installing it manually. I have noted the complete procedure I undertook, including unpacking the apr and apr-util sources into the apache sources beforehand, but I guess the following is the most important part of the procedure:

GATHER LATEST APACHE AND APR

    $ cd ~
    $ mkdir apache-src
    $ cd apache-src
    $ wget [URL]
    $ tar xvf httpd-2.4.6.tar.gz
    $ cd httpd-2.4.6
    $ cd srclib
    $ wget [URL]
    $ tar -xvzf apr-1.5.0.tar.gz
    $ mv apr-1.5.0 apr
    $ rm -f apr-1.5.0.tar.gz
    $ wget [URL]
    $ tar -xvzf apr-util-1.5.3.tar.gz
    $ mv apr-util-1.5.3 apr-util

INSTALL DEVEL PACKAGES

    yum update --skip-broken

(There is a dependency issue with the latest Chrome needing the latest libstdc++, which is not available for RHEL and CentOS.)

    yum install apr-devel
    yum install apr-util-devel
    yum install pcre-devel

INSTALL

    $ cd ~/apache-src/httpd-2.4.6
    $ ./configure --prefix=/etc/httpd --enable-mods-shared="all" --enable-rewrite --with-included-apr
    $ make
    $ make install

NOTE: At the time of running the above, /etc/httpd is empty.

This seems to have gone fine until I attempt to start the httpd service. It seems that every module include in httpd.conf fails with a message similar to this one for mod_rewrite:

    httpd: Syntax error on line 148 of /etc/httpd/conf/httpd.conf: Cannot load /etc/httpd/modules/mod_rewrite.so into server: /etc/httpd/modules/mod_rewrite.so: undefined symbol: ap_global_mutex_create

I've gone right through the list of enabled modules in httpd.conf and commented them out one at a time. All trigger an error as above, although the "undefined symbol: value" is often different (so not always ap_global_mutex_create). Am I missing a step? Although I find some portion of that error on Google, most of the solutions centre around the .so files not being reachable. That doesn't seem to be an issue here, and the modules are present in /etc/httpd/modules.
apache, apache2, redhat, rhel
17
48,878
2
https://stackoverflow.com/questions/20112355/apache-2-4-x-manual-build-and-install-on-rhel-6-4
47,826,123
What is systemd PID file?
I want to run a jar file as a daemon, so I have written a shell script to "start|stop|restart" the daemon. I didn't get a chance to check its working status yet. Can I use this script without creating a PID file? Why do we need a PID file at all? In which cases should we use a PID file?

Below is my unit file:

    [Unit]
    Description=myApp
    After=network.target

    [Service]
    Environment=JAVA_HOME=/opt/java/jdk8
    Environment=CATALINA_HOME=/opt/myApp/
    User=nzpap
    Group=ngpap
    ExecStart=/kohls/apps/myApp/myapp-scripts/myapp-deploy.sh
    Restart=always

    [Install]
    WantedBy=multi-user.target

I did not find useful information about the PID file concept by browsing the internet.
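For context: a PID file is how systemd learns which process is the service's main process when the started program forks into the background (Type=forking); for a simple foreground service, systemd tracks the process it spawned itself and no PID file is needed. A minimal sketch of a forking unit that uses one (paths and names are illustrative, not taken from the question):

```ini
[Service]
Type=forking
# The script is expected to daemonize and write its own PID here;
# systemd then uses that PID to monitor the service and to stop it.
PIDFile=/run/myapp/myapp.pid
ExecStart=/opt/myapp/bin/myapp-deploy.sh start
ExecStop=/opt/myapp/bin/myapp-deploy.sh stop
```

If the deploy script can instead run the jar in the foreground, Type=simple (the default) avoids the PID file entirely.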
linux, redhat, centos7, systemd
17
51,565
4
https://stackoverflow.com/questions/47826123/what-is-systemd-pid-file
33,360,920
dd command error writing No space left on device
I am new to storage. I am trying to erase the data on the device /dev/sdcd; why do I get a "No space left" error?

    [root@ dev]# dd if=/dev/zero of=/dev/sdcd bs=4k
    dd: error writing ‘/dev/sdcd’: No space left on device
    1310721+0 records in
    1310720+0 records out
    5368709120 bytes (5.4 GB) copied, 19.7749 s, 271 MB/s
    [root@ dev]# ls -l /dev/null
    crw-rw-rw-. 1 root root 1, 3 Oct 27 01:35 /dev/null

If this is a very basic question, I am sorry about that.
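For what it's worth, the numbers in the dd output are self-consistent: 1,310,720 complete 4 KiB blocks is exactly the 5,368,709,120 bytes reported, i.e. dd wrote until it ran off the end of a 5 GiB device, which is what "No space left on device" means when writing to a raw block device. A quick check of the arithmetic:

```python
block_size = 4 * 1024        # dd bs=4k -> 4096 bytes per block
full_blocks = 1310720        # "1310720+0 records out"

total_bytes = full_blocks * block_size
assert total_bytes == 5368709120   # "5368709120 bytes ... copied"

# dd reports decimal units: 5368709120 / 10**9 is about 5.37,
# which dd rounds and prints as "5.4 GB".
print(round(total_bytes / 10**9, 2))
```

So the error here is expected behaviour, not a failure of the wipe.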
linux, linux-device-driver, redhat
17
32,408
1
https://stackoverflow.com/questions/33360920/dd-command-error-writing-no-space-left-on-device
12,584,762
mysql_connect(): No such file or directory
I have just installed a MySQL server (version 3.23.58) on an old RedHat 7. I cannot install a more recent MySQL version because of the dependencies, and I cannot update libraries on this RedHat server.

However, I have a problem connecting to the database with PHP. First I used PDO, but I realized that PDO was not compatible with MySQL 3.23... so I used mysql_connect(). Now I have the following error:

    Warning: mysql_connect(): No such file or directory in /user/local/apache/htdocs/php/database.php on line 9
    Error: No such file or directory

My code is:

    $host = 'localhost';
    $user = 'root';
    $password = '';
    $database = 'test';

    $db = mysql_connect($host, $user, $password) or die('Error : ' . mysql_error());
    mysql_select_db($database);

I checked twice that the database exists and that the login and password are correct. This is strange because the code works fine on my Windows PC with Wampp. I cannot figure out where the problem comes from. Any idea?
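One note for readers: with host 'localhost', PHP's mysql extension connects over a Unix socket rather than TCP, so "No such file or directory" usually refers to the socket file itself, not the database. A hedged sketch of the relevant php.ini setting (the socket path below is an assumption; the actual path can be checked with `mysqladmin variables | grep socket`):

```ini
; php.ini: point the mysql extension at the socket mysqld actually creates.
; The path below is illustrative, not taken from the question.
mysql.default_socket = /var/lib/mysql/mysql.sock
```

Alternatively, using 127.0.0.1 instead of localhost forces a TCP connection and bypasses the socket lookup entirely.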
php, mysql, redhat, mysql-connect
16
86,156
6
https://stackoverflow.com/questions/12584762/mysql-connect-no-such-file-or-directory
68,223,306
Execute multiple commands with && in systemd service ExecStart on RedHat 7.9
I have this systemd service on Red Hat Enterprise Linux Server 7.9 (Maipo):

    [Unit]
    Description = EUM Server Service
    PartOf=eum.service
    # Start this unit after the app.service start
    After=eum.service
    After=eum-db.service

    [Service]
    Type=forking
    User=root
    WorkingDirectory=/prod/appdynamics/EUMServer/eum-processor/
    ExecStart=/usr/bin/sleep 45 && /bin/bash bin/eum.sh start
    RemainAfterExit=true
    ExecStop=/bin/bash bin/eum.sh stop

    [Install]
    WantedBy=multi-user.target

It fails because everything after /usr/bin/sleep is passed as parameters to that command. I just want to execute /usr/bin/sleep 45 and, on success, execute bin/eum.sh start. How can I make it work?

    ● eum-server.service - EUM Server Service
       Loaded: loaded (/etc/systemd/system/eum-server.service; enabled; vendor preset: disabled)
       Active: failed (Result: exit-code) since Fri 2021-07-02 00:00:53 CEST; 9min ago
      Process: 13860 ExecStart=/usr/bin/sleep 45 && /bin/bash bin/eum.sh start (code=exited, status=1/FAILURE)

    Jul 02 00:00:53 lmlift06mnp001 systemd[1]: Starting EUM Server Service...
    Jul 02 00:00:53 lmlift06mnp001 sleep[13860]: /usr/bin/sleep: invalid time interval ‘&&’
    Jul 02 00:00:53 lmlift06mnp001 sleep[13860]: /usr/bin/sleep: invalid time interval ‘/bin/bash’
    Jul 02 00:00:53 lmlift06mnp001 sleep[13860]: /usr/bin/sleep: invalid time interval ‘bin/eum.sh’
    Jul 02 00:00:53 lmlift06mnp001 sleep[13860]: /usr/bin/sleep: invalid time interval ‘start’
    Jul 02 00:00:53 lmlift06mnp001 sleep[13860]: Try '/usr/bin/sleep --help' for more information.
    Jul 02 00:00:53 lmlift06mnp001 systemd[1]: eum-server.service: control process exited, code=exited status=1
    Jul 02 00:00:53 lmlift06mnp001 systemd[1]: Failed to start EUM Server Service.
    Jul 02 00:00:53 lmlift06mnp001 systemd[1]: Unit eum-server.service entered failed state.
    Jul 02 00:00:53 lmlift06mnp001 systemd[1]: eum-server.service failed.
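For context: ExecStart= is not interpreted by a shell, so `&&` reaches sleep as a literal argument. Two common ways to express the intent, sketched with the question's paths (either one alone is enough):

```ini
[Service]
Type=forking
WorkingDirectory=/prod/appdynamics/EUMServer/eum-processor/
# Option 1: let a shell do the && chaining explicitly.
ExecStart=/bin/bash -c '/usr/bin/sleep 45 && /bin/bash bin/eum.sh start'
# Option 2 (commented out): systemd-native ordering; ExecStartPre=
# runs to completion before ExecStart= is launched.
#ExecStartPre=/usr/bin/sleep 45
#ExecStart=/bin/bash bin/eum.sh start
RemainAfterExit=true
ExecStop=/bin/bash bin/eum.sh stop
```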
redhat, systemd
16
22,481
1
https://stackoverflow.com/questions/68223306/execute-multiple-commands-with-in-systemd-service-execstart-on-redhat-7-9
25,379,410
Ping Service to stop OpenShift Application from IDLE?
I am running a lightweight API in the OpenShift cloud. I just realized that after 48h the application goes into IDLE mode. Is there some kind of ping service to avoid this issue?

Best, M
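A common workaround for idling platforms (this is a general sketch, not advice taken from the question, and the URL is a placeholder) is to have some always-on host request the app periodically so it is never considered idle:

```cron
# Hypothetical crontab entry on an always-on machine:
# fetch the app every 30 minutes so the platform never sees it as idle.
*/30 * * * * curl -s -o /dev/null https://myapp-example.rhcloud.com/
```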
cloud, openshift, redhat
16
6,394
2
https://stackoverflow.com/questions/25379410/ping-service-to-stop-openshift-application-from-idle
8,258,647
RPM - Install time parameters
I have packaged my application into an RPM package, say, myapp.rpm. While installing this application, I would like to receive some input from the user (an example of such input could be the environment where the app is getting installed: "dev", "qa", "uat", "prod"). Based on the input, the application will install the appropriate files. Is there a way to pass parameters while installing the application?

P.S.: A possible solution could be to create an RPM package for each environment. However, in our scenario this is not a viable option, since we have around 20 environments and we do not wish to have 20 different packages for the same application.
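For background: rpm itself has no notion of install-time parameters and scriptlets cannot prompt the user, but when installing with rpm directly, environment variables set by the caller are visible to the %post scriptlet. A hedged sketch of a spec-file fragment (the MYAPP_ENV variable name and the paths are hypothetical, and tools like yum may not pass the environment through):

```
# Hypothetical %post scriptlet in myapp.spec: pick a config by an
# environment variable passed by the caller, e.g. MYAPP_ENV=qa rpm -i myapp.rpm
%post
case "${MYAPP_ENV:-prod}" in
  dev|qa|uat|prod)
    ln -sf "/opt/myapp/conf/${MYAPP_ENV:-prod}.conf" /opt/myapp/conf/active.conf
    ;;
  *)
    echo "MYAPP_ENV '${MYAPP_ENV}' not recognised; keeping defaults" >&2
    ;;
esac
```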
linux, unix, build, redhat, rpm
16
17,097
3
https://stackoverflow.com/questions/8258647/rpm-install-time-parameters
18,827,396
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe7 in position 0: ordinal not in range(128)
I'm having trouble encoding characters in UTF-8. I'm using Django, and I get this error when I try to send an Android notification with non-plain text. I tried to find the source of the error and managed to figure out that it is not in my project. In the Python shell, I type:

    'ç'.encode('utf8')

and I get this error:

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    UnicodeDecodeError: 'ascii' codec can't decode byte 0xe7 in position 0: ordinal not in range(128)

I get the same errors with:

    'á'.encode('utf-8')
    unicode('ç')
    'ç'.encode('utf-8','ignore')

I get errors with smart_text, force_text and smart_bytes too. Is this a problem with Python, my OS, or something else? I'm running Python 2.6.6 on a Red Hat version 4.4.7-3.
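Background for readers: in Python 2, 'ç' is a byte string, and the byte 0xe7 in the error suggests the terminal encoded it as Latin-1. Calling .encode() on a byte string makes Python 2 first decode it with the default ASCII codec, and that implicit decode is what raises the UnicodeDecodeError. The failure mode can be reproduced explicitly (shown here in Python 3 terms, where bytes and text are separate types):

```python
# The byte 0xe7 is 'ç' in Latin-1; it is not valid ASCII.
raw = b'\xe7'

try:
    raw.decode('ascii')          # what Python 2's implicit decode attempts
except UnicodeDecodeError as e:
    print(e)                     # same "'ascii' codec can't decode byte 0xe7" error

# Decoding with the right codec first, then encoding, works:
text = raw.decode('latin-1')     # -> the character 'ç'
assert text.encode('utf-8') == b'\xc3\xa7'
```

In Python 2 the equivalent fix is to work with unicode literals (u'ç') or to decode byte strings with the correct codec before re-encoding.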
python, django, encoding, utf-8, redhat
16
37,562
2
https://stackoverflow.com/questions/18827396/unicodedecodeerror-ascii-codec-cant-decode-byte-0xe7-in-position-0-ordinal
40,898,077
systemd systemctl stop aggressively kills subprocesses
I've a daemon-like process that starts two subprocesses (and one of the subprocesses starts ~10 others). When I systemctl stop my process the child subprocesses appear to be 'aggressively' killed by systemctl - which doesn't give my process a chance to clean up. How do I get systemctl stop to quit the aggressive kill and thus to allow my process to orchestrate an orderly clean up? I tried timeoutSec=30 to no avail.
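Background: by default systemd uses KillMode=control-group, which signals every process in the unit's cgroup at once, not just the main one. A sketch of settings that give the main process time to shut its children down itself (the directive names are real systemd options; the timeout value is illustrative):

```ini
[Service]
# On "systemctl stop", send SIGTERM only to the main process and let it
# reap its own children; stragglers are killed after the timeout.
KillMode=mixed
# Signal sent first (SIGTERM is already the default, shown for clarity).
KillSignal=SIGTERM
# How long to wait for an orderly shutdown before escalating to SIGKILL.
# Note the exact spelling: "timeoutSec" is not a recognised directive.
TimeoutStopSec=30
```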
redhat, systemd, systemctl
16
22,582
2
https://stackoverflow.com/questions/40898077/systemd-systemctl-stop-aggressively-kills-subprocesses
6,902,254
stdlib.h: no such file or directory
I am using various stdlib functions like srand(), etc. I have the line #include <stdlib.h> at the top of my code. I entered this on the command line:

    # find / -name stdlib.h
    find: `/home/dmurvihill/.gvfs: permission denied
    /usr/include/stdlib.h
    /usr/include/bits/stdlib.h

So, stdlib.h is clearly in /usr/include. My preprocessor:

    # gcc -print-prog-name=cc1
    /usr/libexec/gcc/x86_64-redhat-linux/4.5.1/cc1

My preprocessor's default search path:

    # /usr/libexec/gcc/x86_64-redhat-linux/4.5.1/cc1 -v
    ignoring nonexistent directory "/usr/lib/gcc/x86_64-redhat-linux/4.5.1/include-fixed"
    ignoring nonexistent directory "/usr/lib/gcc/x86_64-redhat-linux/4.5.1/../../../../x86_64-redhat-linux/include"
    #include "..." search starts here:
    #include <...> search starts here:
     /usr/local/include
     /usr/lib/gcc/x86_64-redhat-linux/4.5.1/include
     /usr/include
    End of search list.

So, stdlib.h is clearly in /usr/include, which is most definitely supposed to be searched by my preprocessor, but I still get this error:

    /path/to/cpa_sample_code_main.c:15:20: fatal error: stdlib.h: No such file or directory
    compilation terminated

Update

A program I wrote to test this:

    #include <stdio.h>
    #include <stdlib.h>
    #include <linux/time.h>

    int main() {
        printf("Hello, World!\n");
        printf("Getting time...\n");
        time_t seconds;
        time(&seconds);
        printf("Seeding generator...\n");
        srand((unsigned int)seconds);
        printf("Getting random number...\n");
        int value = rand();
        printf("It is %d!", value);
        printf("Goodbye, cruel world!");
        return 0;
    }

The command gcc -H -v -fsyntax-only stdlib_test.c output:

    Using built-in specs.
    COLLECT_GCC=gcc
    COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-redhat-linux/4.5.1/lto-wrapper
    Target: x86_64-redhat-linux
    Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=[URL] --enable-bootstrap --enable-shared --enable-threads=posix --enable-checking=release --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --enable-languages=c,c++,objc,obj-c++,java,fortran,ada,lto --enable-plugin --enable-java-awt=gtk --disable-dssi --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-1.5.0.0/jre --enable-libgcj-multifile --enable-java-maintainer-mode --with-ecj-jar=/usr/share/java/eclipse-ecj.jar --disable-libjava-multilib --with-ppl --with-cloog --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux
    Thread model: posix
    gcc version 4.5.1 20100924 (Red Hat 4.5.1-4) (GCC)
    COLLECT_GCC_OPTIONS='-H' '-v' '-fsyntax-only' '-mtune=generic' '-march=x86-64'
     /usr/libexec/gcc/x86_64-redhat-linux/4.5.1/cc1 -quiet -v -H /CRF_Verify/stdlib_test.c -quiet -dumpbase stdlib_test.c -mtune=generic -march=x86-64 -auxbase stdlib_test -version -fsyntax-only -o /dev/null
    GNU C (GCC) version 4.5.1 20100924 (Red Hat 4.5.1-4) (x86_64-redhat-linux)
            compiled by GNU C version 4.5.1 20100924 (Red Hat 4.5.1-4), GMP version 4.3.1, MPFR version 2.4.2, MPC version 0.8.1
    GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072
    ignoring nonexistent directory "/usr/lib/gcc/x86_64-redhat-linux/4.5.1/include-fixed"
    ignoring nonexistent directory "/usr/lib/gcc/x86_64-redhat-linux/4.5.1/../../../../x86_64-redhat-linux/include"
    #include "..." search starts here:
    #include <...> search starts here:
     /usr/local/include
     /usr/lib/gcc/x86_64-redhat-linux/4.5.1/include
     /usr/include
    End of search list.
    GNU C (GCC) version 4.5.1 20100924 (Red Hat 4.5.1-4) (x86_64-redhat-linux)
            compiled by GNU C version 4.5.1 20100924 (Red Hat 4.5.1-4), GMP version 4.3.1, MPFR version 2.4.2, MPC version 0.8.1
    GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072
    Compiler executable checksum: ea394b69293dd698607206e8e43d607e
    . /usr/include/stdio.h
    .. /usr/include/features.h
    ... /usr/include/sys/cdefs.h
    .... /usr/include/bits/wordsize.h
    ... /usr/include/gnu/stubs.h
    .... /usr/include/bits/wordsize.h
    .... /usr/include/gnu/stubs-64.h
    .. /usr/lib/gcc/x86_64-redhat-linux/4.5.1/include/stddef.h
    .. /usr/include/bits/types.h
    ... /usr/include/bits/wordsize.h
    ... /usr/include/bits/typesizes.h
    .. /usr/include/libio.h
    ... /usr/include/_G_config.h
    .... /usr/lib/gcc/x86_64-redhat-linux/4.5.1/include/stddef.h
    .... /usr/include/wchar.h
    ... /usr/lib/gcc/x86_64-redhat-linux/4.5.1/include/stdarg.h
    .. /usr/include/bits/stdio_lim.h
    .. /usr/include/bits/sys_errlist.h
    . /usr/include/stdlib.h
    .. /usr/lib/gcc/x86_64-redhat-linux/4.5.1/include/stddef.h
    .. /usr/include/bits/waitflags.h
    .. /usr/include/bits/waitstatus.h
    ... /usr/include/endian.h
    .... /usr/include/bits/endian.h
    .... /usr/include/bits/byteswap.h
    ..... /usr/include/bits/wordsize.h
    .. /usr/include/sys/types.h
    ... /usr/include/time.h
    ... /usr/lib/gcc/x86_64-redhat-linux/4.5.1/include/stddef.h
    ... /usr/include/sys/select.h
    .... /usr/include/bits/select.h
    ..... /usr/include/bits/wordsize.h
    .... /usr/include/bits/sigset.h
    .... /usr/include/time.h
    .... /usr/include/bits/time.h
    ... /usr/include/sys/sysmacros.h
    ... /usr/include/bits/pthreadtypes.h
    .... /usr/include/bits/wordsize.h
    .. /usr/include/alloca.h
    ... /usr/lib/gcc/x86_64-redhat-linux/4.5.1/include/stddef.h
    . /usr/include/linux/time.h
    .. /usr/include/linux/types.h
    ... /usr/include/asm/types.h
    .... /usr/include/asm-generic/types.h
    ..... /usr/include/asm-generic/int-ll64.h
    ...... /usr/include/asm/bitsperlong.h
    ....... /usr/include/asm-generic/bitsperlong.h
    ... /usr/include/linux/posix_types.h
    .... /usr/include/linux/stddef.h
    .... /usr/include/asm/posix_types.h
    ..... /usr/include/asm/posix_types_64.h
    In file included from /CRF_Verify/stdlib_test.c:3:0:
    /usr/include/linux/time.h:9:8: error: redefinition of ‘struct timespec’
    /usr/include/time.h:120:8: note: originally defined here
    /usr/include/linux/time.h:15:8: error: redefinition of ‘struct timeval’
    /usr/include/bits/time.h:75:8: note: originally defined here
    Multiple include guards may be useful for:
    /usr/include/asm/posix_types.h
    /usr/include/bits/byteswap.h
    /usr/include/bits/endian.h
    /usr/include/bits/select.h
    /usr/include/bits/sigset.h
    /usr/include/bits/stdio_lim.h
    /usr/include/bits/sys_errlist.h
    /usr/include/bits/time.h
    /usr/include/bits/typesizes.h
    /usr/include/bits/waitflags.h
    /usr/include/bits/waitstatus.h
    /usr/include/gnu/stubs-64.h
    /usr/include/gnu/stubs.h
    /usr/include/wchar.h
c, gcc, include, c-preprocessor, redhat
16
81,032
3
https://stackoverflow.com/questions/6902254/stdlib-h-no-such-file-or-directory
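The redefinition errors in the question above come from mixing the kernel header <linux/time.h> with the glibc time headers, which both define struct timespec and struct timeval. A minimal sketch of the usual fix, writing a hypothetical cleaned-up source file to /tmp/stdlib_test_fixed.c: drop the kernel header and rely on <time.h> alone, since userspace code should not include kernel headers directly.

```shell
# Hypothetical cleaned-up version of stdlib_test.c: userspace code should
# not include <linux/time.h>, because glibc's <time.h>/<bits/time.h>
# already define struct timespec and struct timeval.
cat > /tmp/stdlib_test_fixed.c <<'EOF'
#include <stdio.h>
#include <stdlib.h>
#include <time.h>   /* keep the glibc header; the kernel time header was dropped */

int main(void) {
    struct timespec ts = { 0, 0 };
    printf("%ld\n", (long) ts.tv_sec);
    return EXIT_SUCCESS;
}
EOF
```

With the kernel header removed, gcc -fsyntax-only on the cleaned file no longer sees two definitions of the same structs.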
72,690,495
Interact with podman docker via socket in Redhat 9
I'm trying to migrate one of my dev boxes over from centos 8 to RHEL9. I rely heavily on docker and noticed when I tried to run a docker command on the RHEL box it installed podman-docker. This seemed to go smoothly; I was able to pull an image, launch, build, commit a new version without problem using the docker commands I knew already. The problem I have encountered though is I can't seem to interact with it via the docker socket (which seems to be a link to the podman one). If I run the docker command: [@rhel9 ~]$ docker images Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg. REPOSITORY TAG IMAGE ID CREATED SIZE docker.io/redhat/ubi9 dev_image de371523ca26 6 hours ago 805 MB docker.io/redhat/ubi9 latest 9ad46cd10362 6 days ago 230 MB it has my images listed as expected. I should be able to also run: [@rhel9 ~]$ curl --unix-socket /var/run/docker.sock -H 'Content-Type: application/json' [URL] | jq . % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 3 100 3 0 0 55 0 --:--:-- --:--:-- --:--:-- 55 [] but as you can see, nothing is coming back. The socket is up and running as I can ping it without issue: [@rhel9 ~]$ curl -H "Content-Type: application/json" --unix-socket /var/run/docker.sock [URL] OK I also tried the curl commands using the podman socket directly but it had the same results. Is there something I am missing or a trick to getting it to work so that I can interact with docker/podman via the socket?
Interact with podman docker via socket in Redhat 9 I'm trying to migrate one of my dev boxes over from centos 8 to RHEL9. I rely heavily on docker and noticed when I tried to run a docker command on the RHEL box it installed podman-docker. This seemed to go smoothly; I was able to pull an image, launch, build, commit a new version without problem using the docker commands I knew already. The problem I have encountered though is I can't seem to interact with it via the docker socket (which seems to be a link to the podman one). If I run the docker command: [@rhel9 ~]$ docker images Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg. REPOSITORY TAG IMAGE ID CREATED SIZE docker.io/redhat/ubi9 dev_image de371523ca26 6 hours ago 805 MB docker.io/redhat/ubi9 latest 9ad46cd10362 6 days ago 230 MB it has my images listed as expected. I should be able to also run: [@rhel9 ~]$ curl --unix-socket /var/run/docker.sock -H 'Content-Type: application/json' [URL] | jq . % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 3 100 3 0 0 55 0 --:--:-- --:--:-- --:--:-- 55 [] but as you can see, nothing is coming back. The socket is up and running as I can ping it without issue: [@rhel9 ~]$ curl -H "Content-Type: application/json" --unix-socket /var/run/docker.sock [URL] OK I also tried the curl commands using the podman socket directly but it had the same results. Is there something I am missing or a trick to getting it to work so that I can interact with docker/podman via the socket?
docker, redhat, podman
15
59,735
3
https://stackoverflow.com/questions/72690495/interact-with-podman-docker-via-socket-in-redhat-9
45,008,355
Elasticsearch process memory locking failed
I have set bootstrap.memory_lock=true Updated /etc/security/limits.conf added memlock unlimited for the elasticsearch user My Elasticsearch was running fine for many months. Suddenly it failed a day back. In the logs I can see the below error and the process never starts ERROR: bootstrap checks failed memory locking requested for elasticsearch process but memory is not locked I ran ulimit -as and I can see max locked memory set to unlimited. What is going wrong here? I have been trying for hours but all in vain. Please help. OS is RHEL 7.2 Elasticsearch 5.1.2 ulimit -as output core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling policy (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 83552 max locked memory (kbytes, -l) unlimited max memory size (kbytes, -m) unlimited open files (-n) 65536 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 4096 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited
Elasticsearch process memory locking failed I have set bootstrap.memory_lock=true Updated /etc/security/limits.conf added memlock unlimited for the elasticsearch user My Elasticsearch was running fine for many months. Suddenly it failed a day back. In the logs I can see the below error and the process never starts ERROR: bootstrap checks failed memory locking requested for elasticsearch process but memory is not locked I ran ulimit -as and I can see max locked memory set to unlimited. What is going wrong here? I have been trying for hours but all in vain. Please help. OS is RHEL 7.2 Elasticsearch 5.1.2 ulimit -as output core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling policy (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 83552 max locked memory (kbytes, -l) unlimited max memory size (kbytes, -m) unlimited open files (-n) 65536 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 4096 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited
linux, elasticsearch, redhat
15
33,681
9
https://stackoverflow.com/questions/45008355/elasticsearch-process-memory-locking-failed
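One detail worth noting for the question above: on RHEL 7 Elasticsearch runs as a systemd service, and systemd services do not read /etc/security/limits.conf at all, which is a common reason the memlock bootstrap check fails even though ulimit -l shows unlimited in an interactive shell. A hedged sketch of a systemd unit drop-in granting the limit (the drop-in path assumes the usual elasticsearch.service unit name):

```ini
# /etc/systemd/system/elasticsearch.service.d/override.conf
# systemd-managed services ignore /etc/security/limits.conf, so the
# memlock limit must be granted on the unit itself.
[Service]
LimitMEMLOCK=infinity
```

After adding the drop-in, run systemctl daemon-reload and restart the service so the new limit takes effect.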
3,892,282
Will Java compiled in Windows work in Linux?
My Java program is in working order when I use it under Windows (Eclipse and BlueJ). I compress it to a JAR and send it to my Red Hat box and bang, nothing works. It breaks on the weirdest things, such as a text field's set text not showing, JPasswordField just disappearing, and the Java AWT Robot dying too... the list goes on. First I thought my Linux JRE must be out of date, but I installed the latest JRE and then the JDK with no improvement at all. I have a feeling that I misunderstood Java's cross-platform ability. I also tried removing all of my functions and guts to see what is breaking, but it seems every second thing is breaking, other than some of the major GUI components and most of the back-end stuff. Basically anything that uses something fancy will blow up in my face, such as making a text field into a password field... This is my first time posting ;) please be nice to the newbie! Thanks!!! SOLVED!!! Yay. Problem solved!!! It was because my Java path wasn't set, so my GCC/GCJ jumped in instead of my Oracle Java, even though I used java -jar xxx.jar. So I put the Java directory path in front of my java -jar xxx.jar and it worked like a charm. Unless you set the path, you have to do this manually: /usr/java/jdk1.6.0_21/jre/bin/java -jar xxxxx.jar Run java -version to check if your real Java is running or if it is still GCJ.
Will Java compiled in Windows work in Linux? My Java program is in working order when I use it under Windows (Eclipse and BlueJ). I compress it to a JAR and send it to my Red Hat box and bang, nothing works. It breaks on the weirdest things, such as a text field's set text not showing, JPasswordField just disappearing, and the Java AWT Robot dying too... the list goes on. First I thought my Linux JRE must be out of date, but I installed the latest JRE and then the JDK with no improvement at all. I have a feeling that I misunderstood Java's cross-platform ability. I also tried removing all of my functions and guts to see what is breaking, but it seems every second thing is breaking, other than some of the major GUI components and most of the back-end stuff. Basically anything that uses something fancy will blow up in my face, such as making a text field into a password field... This is my first time posting ;) please be nice to the newbie! Thanks!!! SOLVED!!! Yay. Problem solved!!! It was because my Java path wasn't set, so my GCC/GCJ jumped in instead of my Oracle Java, even though I used java -jar xxx.jar. So I put the Java directory path in front of my java -jar xxx.jar and it worked like a charm. Unless you set the path, you have to do this manually: /usr/java/jdk1.6.0_21/jre/bin/java -jar xxxxx.jar Run java -version to check if your real Java is running or if it is still GCJ.
linux, cross-platform, redhat, java
15
28,271
10
https://stackoverflow.com/questions/3892282/will-java-compiled-in-windows-work-in-linux
40,231,172
How to install vim on RedHat via command line
I am running RHEL 7.2 (Maipo) on an AWS instance with command-line access. To my greatest surprise, vim needs to be installed, and as I am fairly new to RedHat, I was at a loss initially as to the easiest way to install it, so I am adding it below for future reference so beginners like myself can just crack on with it.
How to install vim on RedHat via command line I am running RHEL 7.2 (Maipo) on an AWS instance with command-line access. To my greatest surprise, vim needs to be installed, and as I am fairly new to RedHat, I was at a loss initially as to the easiest way to install it, so I am adding it below for future reference so beginners like myself can just crack on with it.
vim, installation, command-line-interface, redhat
15
33,301
1
https://stackoverflow.com/questions/40231172/how-to-install-vim-on-redhat-via-commmandline
22,538,185
OpenShift app redirecting to [URL]
I have hosted an app on Red Hat OpenShift. I didn't change anything, but it started redirecting to [URL] and throwing a 404 error. Can anyone help me solve this?
OpenShift app redirecting to [URL] I have hosted an app on Red Hat OpenShift. I didn't change anything, but it started redirecting to [URL] and throwing a 404 error. Can anyone help me solve this?
http-status-code-404, redhat, openshift, cname
15
3,498
6
https://stackoverflow.com/questions/22538185/openshift-app-redirecting-to-https-domain-name-app
8,747,533
Jenkins / Hudson CI Minimum Requirements for a linux RH installation
We are planning on using Jenkins (used to be Hudson) for the automated builds of our project. I need to find out what it needs from a system requirements standpoint (RAM, disk, CPU) for a Linux RH installation. We will be testing a Mobile application project. I did check this post but couldn't find a response.
Jenkins / Hudson CI Minimum Requirements for a linux RH installation We are planning on using Jenkins (used to be Hudson) for the automated builds of our project. I need to find out what it needs from a system requirements standpoint (RAM, disk, CPU) for a Linux RH installation. We will be testing a Mobile application project. I did check this post but couldn't find a response.
linux, hudson, redhat
15
20,618
1
https://stackoverflow.com/questions/8747533/jenkins-hudson-ci-minimum-requirements-for-a-linux-rh-installation
32,264,427
What is the difference between ~/ and ~ in Linux?
I am a novice to Linux, having used it for a little more than a year. Can anybody help me resolve my question? When I use ~/ it shows the user's home directory. Why does it not work when I use ~ alone to specify the path to a file or directory?
What is the difference between ~/ and ~ in Linux? I am a novice to Linux, having used it for a little more than a year. Can anybody help me resolve my question? When I use ~/ it shows the user's home directory. Why does it not work when I use ~ alone to specify the path to a file or directory?
linux, redhat
15
24,883
1
https://stackoverflow.com/questions/32264427/what-is-the-difference-between-and-in-linux
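A small shell demo, assuming a POSIX-ish shell: both ~ and ~/ expand to the home directory; the cases where a bare ~ seems not to work are usually quoting or word position, since tilde expansion only happens on an unquoted tilde at the start of a word.

```shell
# Tilde expansion happens only for an unquoted ~ at the start of a word.
echo ~        # expands to $HOME, e.g. /home/user
echo ~/       # same, with a trailing slash
echo "~"      # quoted: stays a literal ~
echo foo~     # not at the start of a word: stays literal
```

So ~ alone is a perfectly valid path argument (e.g. ls ~); if it appears not to work, check whether it is quoted or glued to the end of another word.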
60,622,192
Keycloak: Session cookies are missing within the token request with the new Chrome SameSite/Secure cookie enforcement
Recently my application using Keycloak stopped working with a 400 token request after authenticating. What I found so far is that within the token request, the Keycloak cookies (AUTH_SESSION_ID, KEYCLOAK_IDENTITY, KEYCLOAK_SESSION) are not sent within the request headers causing the request for a token to fail and the application gets a session error. By digging more, I found that Chrome blocks now cookies without SameSite attribute set, which is the case for the keycloak cookies and that's why they are never parsed within the token acquisition request after authenticating. The error I get:- [URL] [URL] This is very serious as it blocks applications secured by Keycloak library to be able to communicate with the keycloak server. Update : With the new google chrome cookie SameSite attribute, any third party library using cookies without SameSite attribute properly set, the cookie will be ignored. [URL] [URL]
Keycloak: Session cookies are missing within the token request with the new Chrome SameSite/Secure cookie enforcement Recently my application using Keycloak stopped working with a 400 token request after authenticating. What I found so far is that within the token request, the Keycloak cookies (AUTH_SESSION_ID, KEYCLOAK_IDENTITY, KEYCLOAK_SESSION) are not sent within the request headers causing the request for a token to fail and the application gets a session error. By digging more, I found that Chrome blocks now cookies without SameSite attribute set, which is the case for the keycloak cookies and that's why they are never parsed within the token acquisition request after authenticating. The error I get:- [URL] [URL] This is very serious as it blocks applications secured by Keycloak library to be able to communicate with the keycloak server. Update : With the new google chrome cookie SameSite attribute, any third party library using cookies without SameSite attribute properly set, the cookie will be ignored. [URL] [URL]
google-chrome, single-sign-on, redhat, keycloak, keycloak-services
15
36,362
3
https://stackoverflow.com/questions/60622192/keycloak-session-cookies-are-missing-within-the-token-request-with-the-new-chro
28,802,298
Yum repositories don't work unless there are exceptions in the AWS firewall. How do I make the exceptions based on a DNS name?
When I try to install something via yum (e.g., yum install java), I get the following: Could not contact CDS load balancer rhui2-cds01.us-west-2.aws.ce.redhat.com, trying others. Could not contact any CDS load balancers: rhui2-cds01.us-west-2.aws.ce.redhat.com, rhui2-cds02.us-west-2.aws.ce.redhat.com. Earlier today I installed various yum packages. This evening I tried several, but none worked. This link explains that certain firewall rules need to be made: [URL] I don't have an explanation why all Yum install commands were working earlier today. Several different ones later stopped working. Here is the solution: via the AWS console, I opened all traffic over port 443 (inbound and outbound traffic). This isn't an ideal solution or a permanent solution. The security groups in the AWS console only permit filtering based on IP addresses and IP address ranges. DNS names aren't part of the filtering. Using AWS, how can I open port 443 and port 80 to specific DNS names?
Yum repositories don&#39;t work unless there are exceptions in the AWS firewall. How do I make the exceptions based on a DNS name? When I try to install something via yum (e.g., yum install java), I get the following: Could not contact CDS load balancer rhui2-cds01.us-west-2.aws.ce.redhat.com, trying others. Could not contact any CDS load balancers: rhui2-cds01.us-west-2.aws.ce.redhat.com, rhui2-cds02.us-west-2.aws.ce.redhat.com. Earlier today I installed various yum packages. This evening I tried several, but none worked. This link explains that certain firewall rules need to be made: [URL] I don't have an explanation why all Yum install commands were working earlier today. Several different ones later stopped working. Here is the solution: via the AWS console, I opened all traffic over port 443 (inbound and outbound traffic). This isn't an ideal solution or a permanent solution. The security groups in the AWS console only permit filtering based on IP addresses and IP address ranges. DNS names aren't part of the filtering. Using AWS, how can I open port 443 and port 80 to specific DNS names?
amazon-web-services, redhat, yum
14
29,746
5
https://stackoverflow.com/questions/28802298/yum-repositories-dont-work-unless-there-are-exceptions-in-the-aws-firewall-how
41,156,556
What is the exact command to install pm2 on an offline RHEL?
First of all, it's not a duplicate of the question below: How to install npm -g on offline server I installed npmbox ( [URL] ) on my offline RHEL server, but I still do not know how to install pm2 or any other package using it. Please advise.
What is the exact command to install pm2 on an offline RHEL? First of all, it's not a duplicate of the question below: How to install npm -g on offline server I installed npmbox ( [URL] ) on my offline RHEL server, but I still do not know how to install pm2 or any other package using it. Please advise.
node.js, linux, ubuntu, redhat, pm2
14
23,579
5
https://stackoverflow.com/questions/41156556/what-exact-command-is-to-install-pm2-on-offline-rhel
21,671,552
How to install Xvfb (X virtual framebuffer) on Redhat 6.5?
I have tried to install Xvfb on Red Hat 6.5 using yum -y install xorg-x11-server-Xvfb but it is not installed, and it gives the message: No package xorg-x11-server-Xvfb available. Error: Nothing to do Please help me install Xvfb on Red Hat 6.5 to remove the headless exception in the applet. Thanks.
How to install Xvfb (X virtual framebuffer) on Redhat 6.5? I have tried to install Xvfb on Red Hat 6.5 using yum -y install xorg-x11-server-Xvfb but it is not installed, and it gives the message: No package xorg-x11-server-Xvfb available. Error: Nothing to do Please help me install Xvfb on Red Hat 6.5 to remove the headless exception in the applet. Thanks.
linux, installation, redhat
14
49,213
2
https://stackoverflow.com/questions/21671552/how-to-install-xvfb-x-virtual-framebuffer-on-redhat-6-5
31,523,030
Where is javac after installing new openjdk?
An additional jdk was installed and configured on RHEL5. yum install java-1.7.0-openjdk.x86_64 update-alternatives It appeared to work: java -version points to desired 1.7. However, javac -version still points to old 1.6. sudo update-alternatives --config javac only lists one option. I could not find the additional javac . How do I install or configure a 1.7 javac ?
Where is javac after installing new openjdk? An additional jdk was installed and configured on RHEL5. yum install java-1.7.0-openjdk.x86_64 update-alternatives It appeared to work: java -version points to desired 1.7. However, javac -version still points to old 1.6. sudo update-alternatives --config javac only lists one option. I could not find the additional javac . How do I install or configure a 1.7 javac ?
java, redhat, javac, rhel5
14
19,632
2
https://stackoverflow.com/questions/31523030/where-is-javac-after-installing-new-openjdk
58,616,161
postcss-svgo: TypeError: Cannot set property 'multipassCount' of undefined (Gatsby)
On a Gatsby 2.17.6 project, when building: Building production JavaScript and CSS bundles [==== ] 1.940 s 1/6 17% run queries failed Building production JavaScript and CSS bundles - 75.519s ERROR #98123 WEBPACK Generating JavaScript bundles failed postcss-svgo: TypeError: Cannot set property 'multipassCount' of undefined not finished run queries - 77.639s npm ERR! code ELIFECYCLE npm ERR! errno 1 npm ERR! gatsby-starter-default@1.0.0 build: node node_modules/gatsby/dist/bin/gatsby.js build` npm ERR! Exit status 1 These are some of my dependencies: "dependencies": { "babel-plugin-styled-components": "^1.8.0", : "gatsby": "^2.0.19", "gatsby-plugin-favicon": "^3.1.4", "gatsby-plugin-google-fonts": "0.0.4", "gatsby-plugin-offline": "^2.0.5", "gatsby-plugin-react-helmet": "^3.0.0", "gatsby-plugin-styled-components": "^3.0.1", : "react": "^16.5.1", "react-dom": "^16.5.1", "react-helmet": "^5.2.0", "react-leaflet": "^2.1.1", "styled-components": "^4.1.1" } I don't see any configurations about postcss on gatsby-config.js, I guess it's a default behaviour of Gatsby. npm ls postcss-svgo throw this: gatsby-starter-default@1.0.0 /<app>/source └─┬ gatsby@2.17.6 └─┬ optimize-css-assets-webpack-plugin@5.0.3 └─┬ cssnano@4.1.10 └─┬ cssnano-preset-default@4.0.7 └── postcss-svgo@4.0.2 I wouldn't mind to disable postcss-svgo if that's a solution, but I don't know how.
postcss-svgo: TypeError: Cannot set property &#39;multipassCount&#39; of undefined (Gatsby) On a Gatsby 2.17.6 project, when building: Building production JavaScript and CSS bundles [==== ] 1.940 s 1/6 17% run queries failed Building production JavaScript and CSS bundles - 75.519s ERROR #98123 WEBPACK Generating JavaScript bundles failed postcss-svgo: TypeError: Cannot set property 'multipassCount' of undefined not finished run queries - 77.639s npm ERR! code ELIFECYCLE npm ERR! errno 1 npm ERR! gatsby-starter-default@1.0.0 build: node node_modules/gatsby/dist/bin/gatsby.js build` npm ERR! Exit status 1 These are some of my dependencies: "dependencies": { "babel-plugin-styled-components": "^1.8.0", : "gatsby": "^2.0.19", "gatsby-plugin-favicon": "^3.1.4", "gatsby-plugin-google-fonts": "0.0.4", "gatsby-plugin-offline": "^2.0.5", "gatsby-plugin-react-helmet": "^3.0.0", "gatsby-plugin-styled-components": "^3.0.1", : "react": "^16.5.1", "react-dom": "^16.5.1", "react-helmet": "^5.2.0", "react-leaflet": "^2.1.1", "styled-components": "^4.1.1" } I don't see any configurations about postcss on gatsby-config.js, I guess it's a default behaviour of Gatsby. npm ls postcss-svgo throw this: gatsby-starter-default@1.0.0 /<app>/source └─┬ gatsby@2.17.6 └─┬ optimize-css-assets-webpack-plugin@5.0.3 └─┬ cssnano@4.1.10 └─┬ cssnano-preset-default@4.0.7 └── postcss-svgo@4.0.2 I wouldn't mind to disable postcss-svgo if that's a solution, but I don't know how.
node.js, webpack, redhat, gatsby, postcss
14
3,512
4
https://stackoverflow.com/questions/58616161/postcss-svgo-typeerror-cannot-set-property-multipasscount-of-undefined-gats
21,742,227
RedHat daemon function usage
I'm working on an init script for Jetty on RHEL. Trying to use the daemon function provided by the init library ( /etc/rc.d/init.d/functions ). I found this terse documentation, and an online example (I've also been looking at other init scripts on the system for examples). Look at this snippet from online to start the daemon daemon --user="$DAEMON_USER" --pidfile="$PIDFILE" "$DAEMON $DAEMON_ARGS &" RETVAL=$? pid=$(ps -A | grep $NAME | cut -d" " -f2) pid=$(echo $pid | cut -d" " -f2) if [ -n "$pid" ]; then echo $pid > "$PIDFILE" fi Why bother looking up the $PID and writing it to the $PIDFILE by hand? I guess I'm wondering what the point of the --pidfile option to the daemon function is.
RedHat daemon function usage I'm working on an init script for Jetty on RHEL. Trying to use the daemon function provided by the init library ( /etc/rc.d/init.d/functions ). I found this terse documentation, and an online example (I've also been looking at other init scripts on the system for examples). Look at this snippet from online to start the daemon daemon --user="$DAEMON_USER" --pidfile="$PIDFILE" "$DAEMON $DAEMON_ARGS &" RETVAL=$? pid=$(ps -A | grep $NAME | cut -d" " -f2) pid=$(echo $pid | cut -d" " -f2) if [ -n "$pid" ]; then echo $pid > "$PIDFILE" fi Why bother looking up the $PID and writing it to the $PIDFILE by hand? I guess I'm wondering what the point of the --pidfile option to the daemon function is.
linux, bash, daemon, redhat, init
14
30,565
1
https://stackoverflow.com/questions/21742227/redhat-daemon-function-usage
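On the --pidfile question above: in the RHEL functions library, the daemon helper uses --pidfile to check whether the process is already running (and for status reporting); it does not write the file, which is why init scripts create the pidfile themselves. The ps | grep | cut pipeline in the example is fragile, though; a hedged sketch using the shell's $! instead (a stand-in sleep process and a hypothetical /tmp path are used for the demo):

```shell
# $! holds the PID of the most recently backgrounded process, which is
# more robust than grepping ps output for the daemon's name.
PIDFILE=/tmp/mydaemon.pid      # hypothetical path for the demo

sleep 30 &                     # stand-in for "$DAEMON $DAEMON_ARGS &"
pid=$!
echo "$pid" > "$PIDFILE"       # the init script, not daemon(), writes this

kill "$(cat "$PIDFILE")"       # clean up the demo process
```

In a real init script the same echo "$!" > "$PIDFILE" would follow the daemon call, replacing the ps/grep/cut lookup.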
14,400,595
Java OracleDB connection taking too long the first time
I'm having a problem when connecting to an Oracle database, it takes a long time (about ~5 minutes) and it sends the below shown exception. Most of the time, after the first error, the next connections for the same process work correctly. It is a RHEL 6 machine, with two different network interfaces and ip addresses. NOTE: I am not using an url like: "jdbc:oracle:thin:@xxxx:yyy, it is actually: "jdbc:oracle:thin:@xxxx:yyyy:zzz. The SID is not missing, sorry for that :( This is roughly what I've isolated: bin/java -classpath ojdbc6_g.jar -Djavax.net.debug=all -Djava.util.logging.config.file=logging.properties Class.forName ("oracle.jdbc.OracleDriver") DriverManager.getConnection("jdbc:oracle:thin:@xxxx:yyyy", "aaaa", "bbbb") Error StackTrace: java.sql.SQLRecoverableException: IO Error: Connection reset at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:533) at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:557) at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:233) at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:29) at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:556) at java.sql.DriverManager.getConnection(DriverManager.java:579) at java.sql.DriverManager.getConnection(DriverManager.java:221) at test.jdbc.Main(Test.java:120) Caused by: java.net.SocketException: Connection reset at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:113) at java.net.SocketOutputStream.write(SocketOutputStream.java:153) at oracle.net.ns.DataPacket.send(DataPacket.java:248) at oracle.net.ns.NetOutputStream.flush(NetOutputStream.java:227) at oracle.net.ns.NetInputStream.getNextPacket(NetInputStream.java:309) at oracle.net.ns.NetInputStream.read(NetInputStream.java:257) at oracle.net.ns.NetInputStream.read(NetInputStream.java:182) at oracle.net.ns.NetInputStream.read(NetInputStream.java:99) at 
oracle.jdbc.driver.T4CSocketInputStreamWrapper.readNextPacket(T4CSocketInputStreamWrapper.java:121) at oracle.jdbc.driver.T4CSocketInputStreamWrapper.read(T4CSocketInputStreamWrapper.java:77) at oracle.jdbc.driver.T4CMAREngine.unmarshalUB1(T4CMAREngine.java:1173) at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:309) at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:200) at oracle.jdbc.driver.T4CTTIoauthenticate.doOSESSKEY(T4CTTIoauthenticate.java:404) at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:430) ... 35 more There's a very verbose log of what happens over here: [URL] The line that says GET STUCK HERE represents the 5 minute waiting time
Java OracleDB connection taking too long the first time I'm having a problem when connecting to an Oracle database, it takes a long time (about ~5 minutes) and it sends the below shown exception. Most of the time, after the first error, the next connections for the same process work correctly. It is a RHEL 6 machine, with two different network interfaces and ip addresses. NOTE: I am not using an url like: "jdbc:oracle:thin:@xxxx:yyy, it is actually: "jdbc:oracle:thin:@xxxx:yyyy:zzz. The SID is not missing, sorry for that :( This is roughly what I've isolated: bin/java -classpath ojdbc6_g.jar -Djavax.net.debug=all -Djava.util.logging.config.file=logging.properties Class.forName ("oracle.jdbc.OracleDriver") DriverManager.getConnection("jdbc:oracle:thin:@xxxx:yyyy", "aaaa", "bbbb") Error StackTrace: java.sql.SQLRecoverableException: IO Error: Connection reset at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:533) at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:557) at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:233) at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:29) at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:556) at java.sql.DriverManager.getConnection(DriverManager.java:579) at java.sql.DriverManager.getConnection(DriverManager.java:221) at test.jdbc.Main(Test.java:120) Caused by: java.net.SocketException: Connection reset at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:113) at java.net.SocketOutputStream.write(SocketOutputStream.java:153) at oracle.net.ns.DataPacket.send(DataPacket.java:248) at oracle.net.ns.NetOutputStream.flush(NetOutputStream.java:227) at oracle.net.ns.NetInputStream.getNextPacket(NetInputStream.java:309) at oracle.net.ns.NetInputStream.read(NetInputStream.java:257) at oracle.net.ns.NetInputStream.read(NetInputStream.java:182) at oracle.net.ns.NetInputStream.read(NetInputStream.java:99) at 
oracle.jdbc.driver.T4CSocketInputStreamWrapper.readNextPacket(T4CSocketInputStreamWrapper.java:121) at oracle.jdbc.driver.T4CSocketInputStreamWrapper.read(T4CSocketInputStreamWrapper.java:77) at oracle.jdbc.driver.T4CMAREngine.unmarshalUB1(T4CMAREngine.java:1173) at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:309) at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:200) at oracle.jdbc.driver.T4CTTIoauthenticate.doOSESSKEY(T4CTTIoauthenticate.java:404) at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:430) ... 35 more There's a very verbose log of what happens over here: [URL] The line that says GET STUCK HERE represents the 5 minute waiting time
java, oracle11g, redhat
13
9,486
2
https://stackoverflow.com/questions/14400595/java-oracledb-connection-taking-too-long-the-first-time
43,265,767
Difference between noarch rpm and a rpm
Can someone explain the difference between a noarch RPM and a regular RPM? Are these two dependent on each other? I have a Jenkins RPM and there are some noarch RPMs too. What can I do with a noarch RPM? Thanks for your help
Difference between noarch rpm and a rpm Can someone explain the difference between a noarch RPM and a regular RPM? Are these two dependent on each other? I have a Jenkins RPM and there are some noarch RPMs too. What can I do with a noarch RPM? Thanks for your help
linux, centos, operating-system, redhat, rpm
13
22,100
1
https://stackoverflow.com/questions/43265767/difference-between-noarch-rpm-and-a-rpm
24,676,687
top 'xterm': unknown terminal type
I have an error when I run the top command: >top 'xterm': unknown terminal type. > echo $TERM xterm > echo $DISPLAY DYSPLAY: Undefined variable. > cat /etc/redhat-release Red Hat Enterprise Linux Server release 6.3 (Santiago) > ls /usr/share/terminfo/ 1 2 3 4 5 6 7 8 9 a A b c d e E f g h i j k l L m M n N o p P q Q r s t u v w x X z > ls /usr/share/terminfo/x/xterm /usr/share/terminfo/x/xterm I have that problem also as root. Does top use xterm? What can I do?
top 'xterm': unknown terminal type I have an error when I run the top command: >top 'xterm': unknown terminal type. > echo $TERM xterm > echo $DISPLAY DYSPLAY: Undefined variable. > cat /etc/redhat-release Red Hat Enterprise Linux Server release 6.3 (Santiago) > ls /usr/share/terminfo/ 1 2 3 4 5 6 7 8 9 a A b c d e E f g h i j k l L m M n N o p P q Q r s t u v w x X z > ls /usr/share/terminfo/x/xterm /usr/share/terminfo/x/xterm I have that problem also as root. Does top use xterm? What can I do?
linux, shell, command, redhat, terminfo
13
32,915
3
https://stackoverflow.com/questions/24676687/top-xterm-unknown-terminal-type
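One thing worth checking for the question above: top goes through curses, which locates terminal capabilities via the terminfo database. When the xterm entry exists on disk (as the ls output shows) but top still reports an unknown terminal type, explicitly pointing the TERMINFO environment variable at the system database is a common workaround; a hedged sketch:

```shell
# If /usr/share/terminfo/x/xterm exists but curses apps cannot find it,
# point TERMINFO at the system terminfo database explicitly.
export TERM=xterm
export TERMINFO=/usr/share/terminfo
# top should now start; running 'infocmp xterm' afterwards can confirm
# that the xterm entry resolves.
```

Adding the two exports to the shell's startup file makes the fix persistent for that user.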
51,745,010
ldap_modify: Other (e.g., implementation specific) error (80)
I followed RHEL7: Configure a LDAP directory service for user connection to configure openldap on CentOS Linux release 7. First I create the /etc/openldap/changes.ldif file and paste the content with replacing the password of course with the previously created password. Then I get to send the new configuration to the slapd server using the command # ldapmodify -Y EXTERNAL -H ldapi:/// -f /etc/openldap/changes.ldif Once I do that I get the following error: # ldapmodify -Y EXTERNAL -H ldapi:/// -f /etc/openldap/changes.ldif SASL/EXTERNAL authentication started SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth SASL SSF: 0 modifying entry "olcDatabase={2}hdb,cn=config" modifying entry "olcDatabase={2}hdb,cn=config" modifying entry "olcDatabase={2}hdb,cn=config" modifying entry "cn=config" ldap_modify: Other (e.g., implementation specific) error (80) All the files are readable for the user slapd is running as. What's wrong there? I couldn't find anything useful to feed SEARCHENGINE with. It's been a while that I've been looking for a solution but at the moment all what I found is two people Re: Error 80 with ldapmodify ldap_modify: Other (e.g., implementation specific) error (80) Having the same problem and asking the same question but no answers.
ldap, redhat, centos7, rhel, slapd
13
28,771
1
https://stackoverflow.com/questions/51745010/ldap-modify-other-e-g-implementation-specific-error-80
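Error 80 is a generic server-side failure, so the useful detail is usually in slapd's own log rather than in the ldapmodify output; on CentOS 7 it frequently turns out to be a TLS certificate path or permission problem behind the cn=config change. A hypothetical debugging sequence (service name and log destinations vary by setup):

```shell
# Raise slapd's logging, reproduce the failure, then read the journal.
# "stats" is usually enough; "-1" logs everything.
ldapmodify -Y EXTERNAL -H ldapi:/// <<'EOF'
dn: cn=config
changetype: modify
replace: olcLogLevel
olcLogLevel: stats
EOF
ldapmodify -Y EXTERNAL -H ldapi:/// -f /etc/openldap/changes.ldif
journalctl -u slapd -n 50    # look here for the real cause behind error 80
```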
39,464,203
sed + how to append lines with indent
I use the following sed command in order to append the lines: rotate 1 size 1k after the word missingok. The small aesthetic problem is that "rotate 1" isn't aligned like the other lines: # sed '/missingok/a rotate 1\n size 1k' /etc/logrotate.d/httpd /var/log/httpd/*log { missingok rotate 1 size 1k notifempty sharedscripts delaycompress postrotate /sbin/service httpd reload > /dev/null 2>/dev/null || true endscript } Does anyone have advice on how to indent the string "rotate 1" under the missingok line? The original file: /var/log/httpd/*log { missingok notifempty sharedscripts delaycompress postrotate /sbin/service httpd reload > /dev/null 2>/dev/null || true endscript }
linux, sed, redhat
13
9,823
3
https://stackoverflow.com/questions/39464203/sed-how-to-append-lines-with-indent
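GNU sed strips leading whitespace from the text given to the a command, which is why the appended lines lose their indentation. One way around it (a sketch; GNU sed is assumed for the \n in the replacement) is to use s instead, where whitespace in the replacement text is kept verbatim and \1 copies whatever indentation missingok already has:

```shell
# Sample input standing in for /etc/logrotate.d/httpd:
printf 'missingok\nnotifempty\n' |
sed 's/^\( *\)missingok$/&\n\1    rotate 1\n\1    size 1k/'
```

Once the output looks right, run the same sed with -i against /etc/logrotate.d/httpd.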
65,947,327
Ansible &#39;no_log&#39; for specific values in debug output, not entire module
I am studying for the RedHat Certified Specialist in Ansible Automation (EX407) and I'm playing around with the no_log module parameter. I have a sample playbook structured as so; --- - hosts: webservers tasks: - name: Query vCenter vmware_guest: hostname: "{{ vcenter['host'] }}" username: "{{ vcenter['username'] }}" password: "{{ vcenter['password'] }}" name: "{{ inventory_hostname }}" validate_certs: no delegate_to: localhost no_log: yes ... When no_log is disabled, I get a lot of helpful debug information about my VM, but when no_log is disabled I obviously can't protect my playbooks vaulted data (in this case that is the vcenter['username'] and vcenter['password'] values). Enabling no_log cripples my playbooks debug output to just; "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", I would like to know how it is possible to censor only some of the debug output. I know this is possible because vcenter['password'] is protected in it's output regardless of my no_log state. I see this in the verbose output when no_log is disabled; "invocation": { "module_args": { "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "username": "administrator@vsphere.local" } } What are your thoughts?
automation, ansible, yaml, redhat, vmware
13
29,563
1
https://stackoverflow.com/questions/65947327/ansible-no-log-for-specific-values-in-debug-output-not-entire-module
17,337,749
puppet log file in redhat and centos
I am running puppet agent in CentOS and Redhat. I would like to see its log file but cannot find it. In these operating systems, I clearly specify logdir = /var/log/puppet in the puppet.conf, but upon checking this directory, it is empty. Note that I did similar thing for Ubuntu and SUSE and it worked well. The issue only happened in Redhat and CentOS. Any idea of where to look for the log file in these cases? Thanks, Henry
logging, centos, redhat, puppet
13
32,403
2
https://stackoverflow.com/questions/17337749/puppet-log-file-in-redhat-and-centos
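On RHEL-family systems the puppet agent typically logs through syslog rather than writing files under logdir, so an empty /var/log/puppet is normal there. A sketch of where to look instead (paths assume a default rsyslog configuration):

```shell
# Agent messages usually land in the general syslog on RedHat/CentOS:
grep -i puppet /var/log/messages | tail -20
# Or run the agent once in the foreground to watch the output directly:
puppet agent --test --verbose
```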
9,741,574
RedHat 6/Oracle Linux 6 is not allowing key authentication via ssh
Keys are properly deployed in ~/.ssh/authorized_keys Yet ssh keeps on prompting for a password.
redhat, selinux, sshd, oracle-enterprise-linux
12
22,097
4
https://stackoverflow.com/questions/9741574/redhat-6-oracle-linux-6-is-not-allowing-key-authentication-via-ssh
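On RHEL 6-family systems this is very often SELinux: if ~/.ssh was created or copied with the wrong security context, sshd is denied read access to authorized_keys and silently falls back to password prompts. A sketch of the usual checks (default file locations assumed):

```shell
# 1. Permissions sshd insists on:
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
# 2. Restore the SELinux context sshd expects:
restorecon -R -v ~/.ssh
# 3. If it still prompts, look for AVC denials:
grep sshd /var/log/audit/audit.log | tail
```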
97,142
Ruby on Rails: no such file to load -- openssl on RedHat Linux Enterprise
I am trying to do 'rake db:migrate' and getting the error message 'no such file to load -- openssl'. Both 'openssl' and 'openssl-devel' packages are installed. Others on Debian or Ubuntu seem to be able to get rid of this by installing 'libopenssl-ruby', which is not available for RedHat. Has anybody run into this and have a solution for it?
ruby-on-rails, ruby, openssl, rake, redhat
12
14,078
5
https://stackoverflow.com/questions/97142/ruby-on-rails-no-such-file-to-load-openssl-on-redhat-linux-enterprise
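If Ruby was built from source before openssl-devel was installed, the openssl extension was simply never compiled, and installing the headers afterwards is not enough by itself. A hedged sketch for a source-built Ruby (RUBY_SRC is a placeholder for wherever the source tree was unpacked):

```shell
# Rebuild just the openssl extension inside the Ruby source tree:
cd "$RUBY_SRC/ext/openssl"
ruby extconf.rb
make && make install
```

A distribution-packaged Ruby would instead need its openssl subpackage, where one exists.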
8,854,882
Why does service stop after RPM is updated
I have a software package for which I created an RPM. I can't paste the entire RPM here for IP reasons, but here is the gist of the problem: %pre /sbin/pidof program if [ "$?" -eq "0" ] then /sbin/service program stop fi %post /sbin/chkconfig program on /sbin/service program start %preun /sbin/service program stop /sbin/chkconfig program off %postun rm -rf /program_folder Everytime I try to upgrade the package, it stops the program service, installs everything, starts the service, and then stops it again and deletes the folder...any ideas?
redhat, rpm
12
6,739
1
https://stackoverflow.com/questions/8854882/why-does-service-stop-after-rpm-is-updated
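During an upgrade, RPM first installs the new package (%pre/%post of the new version) and then removes the old one (%preun/%postun of the old version), which is exactly the start-then-stop-and-delete sequence observed. The scriptlets receive a count argument: $1 is the number of package instances remaining after the step, so 0 means a true erase and 1 means an upgrade. A sketch of guarded scriptlets:

```
%preun
if [ "$1" -eq 0 ] ; then        # 0 = uninstall, 1 = upgrade
    /sbin/service program stop
    /sbin/chkconfig program off
fi

%postun
if [ "$1" -eq 0 ] ; then        # only clean up on a true erase
    rm -rf /program_folder
fi
```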
23,215,710
error: could not find function install_github for R version 2.15.2
I'm having multiple problems with R right now but I want to start asking one of the most fundamental questions. I want to install GitHub files into R, but for some reason the install_github function doesn't seem to exist. For example, when I type: install_github("devtools") I get error: could not find function install_github The install_packages function worked perfectly fine. How can I solve this problem? To add, I want to ask whether there is a way to upgrade R, since version 2.15.2 doesn't seem to be compatible for most of the packages I want to work with. I'm currently using Linux version 3.6.11-1 RedHat 4.7.2-2 fedora linux 17.0 x86-64. I checked the CRAN website but they seemed to have the most unupdated versions of R (if that is even possible) that dates all the way back to '09. I would seriously love to update myself from this old version of R. Any advice on this too?
linux, r, redhat, devtools
12
29,836
2
https://stackoverflow.com/questions/23215710/error-could-not-find-function-install-github-for-r-version-2-15-2
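install_github() lives in the devtools package, so it has to be installed and loaded before the function exists in the session; that alone explains the "could not find function" error. A sketch (the CRAN mirror URL and the user/repo argument are illustrative; install_github takes "user/repo", not a bare package name):

```shell
Rscript -e 'install.packages("devtools", repos = "https://cloud.r-project.org")'
Rscript -e 'library(devtools); install_github("hadley/devtools")'
# For a newer R itself on Fedora, update from the distribution repos:
sudo yum update R
```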
79,479,879
Avoiding strcpy overflow destination warning
With a structure such as the following typedef struct { size_t StringLength; char String[1]; } mySTRING; and use of this structure along these lines mySTRING * CreateString(char * Input) { size_t Len = strlen(Input); int Needed = sizeof(mySTRING) + Len; mySTRING * pString = malloc(Needed); : strcpy(pString->String, Input); } results, on Red Hat Linux cc compiler, in the following warning, which is fair enough. strings.c:59:3: warning: âstrcpyâ writing 14 bytes into a region of size 1 overflows the destination [-Wstringop-overflow=] strcpy(pString->String, Input); I know that, in this instance of code, this warning is something I don't need to correct. How can I tell the compiler this without turning off these warnings which might usefully find something, somewhere else, in the future. What changes can I make to the code to show the compiler this one is OK.
c, linux, redhat, compiler-warnings, cc
12
408
1
https://stackoverflow.com/questions/79479879/avoiding-strcpy-overflow-destination-warning
23,285,339
What is JBPM? Why use it?
I am a Java developer working on a new application in which I am going to integrate jBPM, Spring, and Hibernate. Please answer the questions below: What is jBPM, and why use it? What is a workflow engine? Please give an example. Thanks for your answer.
java, jboss, frameworks, redhat, jbpm
12
16,634
2
https://stackoverflow.com/questions/23285339/what-is-jbpm-why-use-it
61,662,403
microdnf update command installs new packages instead of just updating existing packages
My Dockerfile uses the base image registry.access.redhat.com/ubi8/ubi-minimal, which ships the microdnf package manager. When I include the following snippet in the Dockerfile to apply the latest updates to existing packages, RUN true \ && microdnf clean all \ && microdnf update --nodocs \ && microdnf clean all \ && true it not only upgrades 4 existing packages but also installs 33 new packages: Transaction Summary: Installing: 33 packages Reinstalling: 0 packages Upgrading: 4 packages Removing: 0 packages Downgrading: 0 packages The dnf documentation does not suggest that update should install new packages. Is this a bug in microdnf? microdnf update also increases the image size by ~75 MB.
dockerfile, redhat, dnf, ubi
12
29,283
1
https://stackoverflow.com/questions/61662403/microdnf-update-command-installs-new-packages-instead-of-just-updating-existing
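Weak dependencies (Recommends:) are a common reason an update transaction pulls in brand-new packages. dnf documents the install_weak_deps option to suppress them; whether a given microdnf build accepts dnf-style --setopt is an assumption to verify against that image first:

```shell
# Sketch, assuming the ubi8 microdnf build honors --setopt
# (newer microdnf releases do; older ones may not):
microdnf update --setopt=install_weak_deps=0 --nodocs
microdnf clean all
```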
41,810,222
&quot;pure virtual function called&quot; on gcc 4.4 but not on newer version or clang 3.4
I've got an MCVE which, on some of my machines crashes when compiled with g++ version 4.4.7 but does work with clang++ version 3.4.2 and g++ version 6.3. I'd like some help to know if it comes from undefined behavior or from an actual bug of this ancient version of gcc. Code #include <cstdlib> class BaseType { public: BaseType() : _present( false ) {} virtual ~BaseType() {} virtual void clear() {} virtual void setString(const char* value, const char* fieldName) { _present = (*value != '\0'); } protected: virtual void setStrNoCheck(const char* value) = 0; protected: bool _present; }; // ---------------------------------------------------------------------------------- class TypeTextFix : public BaseType { public: virtual void clear() {} virtual void setString(const char* value, const char* fieldName) { clear(); BaseType::setString(value, fieldName); if( _present == false ) { return; // commenting this return fix the crash. Yes it does! } setStrNoCheck(value); } protected: virtual void setStrNoCheck(const char* value) {} }; // ---------------------------------------------------------------------------------- struct Wrapper { TypeTextFix _text; }; int main() { { Wrapper wrapped; wrapped._text.setString("123456789012", NULL); } // if I add a write to stdout here, it does not crash oO { Wrapper wrapped; wrapped._text.setString("123456789012", NULL); // without this line (or any one), the program runs just fine! } } Compile & run g++ -O1 -Wall -Werror thebug.cpp && ./a.out pure virtual method called terminate called without an active exception Aborted (core dumped) This is actually minimal, if one removes any feature of this code, it runs correctly. Analyse The code snippet works fine when compiled with -O0 , BUT it still works fine when compiled with -O0 +flag for every flag of -O1 as defined on GnuCC documentation . 
A core dump is generated from which one can extract the backtrace: (gdb) bt #0 0x0000003f93e32625 in raise () from /lib64/libc.so.6 #1 0x0000003f93e33e05 in abort () from /lib64/libc.so.6 #2 0x0000003f98ebea7d in __gnu_cxx::__verbose_terminate_handler() () from /usr/lib64/libstdc++.so.6 #3 0x0000003f98ebcbd6 in ?? () from /usr/lib64/libstdc++.so.6 #4 0x0000003f98ebcc03 in std::terminate() () from /usr/lib64/libstdc++.so.6 #5 0x0000003f98ebd55f in __cxa_pure_virtual () from /usr/lib64/libstdc++.so.6 #6 0x00000000004007b6 in main () Feel free to ask for tests or details in the comments. Asked: Is it the actual code? Yes! it is! byte for byte. I've checked and rechecked. What exact version of GnuCC du you use? $ g++ --version g++ (GCC) 4.4.7 20120313 (Red Hat 4.4.7-16) Copyright (C) 2010 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. Can we see the generated assembly? Yes, here it is on pastebin.com
c++, g++, redhat, undefined-behavior
12
1,269
2
https://stackoverflow.com/questions/41810222/pure-virtual-function-called-on-gcc-4-4-but-not-on-newer-version-or-clang-3-4
41,467,908
CentOS 7 + PHP7 -- php not rendering in browser
I have a clean install of apache/httpd and php7.1.0 running on CentOS 7. When I execute from the command line: php -v I get the expected response: PHP 7.1.0 (cli) (built: Dec 1 2016 08:13:15) ( NTS ) Copyright (c) 1997-2016 The PHP Group Zend Engine v3.1.0-dev, Copyright (c) 1998-2016 Zend Technologies But when I try to hit my phpinfo.php page, all I get is... <?php phpinfo(); ?> literally outputted to the screen - can someone tell me what I'm missing, did I forget to enable a mod?
php, apache, centos, redhat, php-7
12
40,015
6
https://stackoverflow.com/questions/41467908/centos-7-php7-php-not-rendering-in-browser
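When httpd returns PHP source verbatim, it has no handler mapped for .php files, usually because the Apache-side PHP glue was never installed. A sketch (package names are typical for a base or Remi PHP setup and may differ by repository):

```shell
# Install the PHP package that provides the Apache module / php-fpm glue,
# sanity-check the config, and restart the web server:
yum install -y php
apachectl configtest
systemctl restart httpd
```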
8,267,437
Amazon Linux vs Red Hat Linux
I have developed a web service(using ruby/sinatra/sqs) which runs on Linux Red Hat. I am planning to move this on a EC2 instance. I see that Amazon provides a linux version of its own. Is there any reason why I should use Amazon Linux on EC2 instead of Red Hat?
linux, amazon-ec2, redhat
12
10,436
1
https://stackoverflow.com/questions/8267437/amazon-linux-vs-red-hat-linux
55,363,823
Redhat/CentOS - `GLIBC_2.18&#39; not found
I was trying to run redis server (on a CentOS server) with specific module: redis-server --loadmodule ./redisql_v0.9.1_x86_64.so and getting error: Module ./redisql_v0.9.1_x86_64.so failed to load: /lib64/libc.so.6: version `GLIBC_2.18' not found (required by ./redisql_v0.9.1_x86_64.so) this is the linux version: uname Linux cat /etc/*release CentOS Linux release 7.6.1810 (Core) NAME="CentOS Linux" VERSION="7 (Core)" ID="centos" ID_LIKE="rhel fedora" VERSION_ID="7" PRETTY_NAME="CentOS Linux 7 (Core)" ANSI_COLOR="0;31" CPE_NAME="cpe:/o:centos:centos:7" HOME_URL="[URL] BUG_REPORT_URL="[URL] CENTOS_MANTISBT_PROJECT="CentOS-7" CENTOS_MANTISBT_PROJECT_VERSION="7" REDHAT_SUPPORT_PRODUCT="centos" REDHAT_SUPPORT_PRODUCT_VERSION="7" CentOS Linux release 7.6.1810 (Core) CentOS Linux release 7.6.1810 (Core) Also this is what is showing for /lib64/libc.so.6 : /lib64/libc.so.6 GNU C Library (GNU libc) stable release version 2.17, by Roland McGrath et al. Copyright (C) 2012 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. Compiled by GNU CC version 4.8.5 20150623 (Red Hat 4.8.5-36). Compiled on a Linux 3.10.0 system on 2019-01-29. Available extensions: The C stubs add-on version 2.1.2. 
crypt add-on version 2.1 by Michael Glad and others GNU Libidn by Simon Josefsson Native POSIX Threads Library by Ulrich Drepper et al BIND-8.2.3-T5B RT using linux kernel aio libc ABIs: UNIQUE IFUNC For bug reporting instructions, please see: <[URL] Also: rpm -qa | grep glibc glibc-common-2.17-260.el7_6.3.x86_64 glibc-devel-2.17-260.el7_6.3.x86_64 glibc-2.17-260.el7_6.3.x86_64 glibc-headers-2.17-260.el7_6.3.x86_64 Tried as well: yum install glibc* -y Loaded plugins: fastestmirror, ovl Loading mirror speeds from cached hostfile * base: repos-va.psychz.net * extras: repos-va.psychz.net * updates: repos-va.psychz.net Package glibc-devel-2.17-260.el7_6.3.x86_64 already installed and latest version Package glibc-utils-2.17-260.el7_6.3.x86_64 already installed and latest version Package glibc-2.17-260.el7_6.3.x86_64 already installed and latest version Package glibc-headers-2.17-260.el7_6.3.x86_64 already installed and latest version Package glibc-static-2.17-260.el7_6.3.x86_64 already installed and latest version Package glibc-common-2.17-260.el7_6.3.x86_64 already installed and latest version Nothing to do What is the process of installing/setting GLIBC_2.18 on Centos/Redhat servers? Thanks..
linux, redis, centos, redhat, glibc
12
44,285
2
https://stackoverflow.com/questions/55363823/redhat-centos-glibc-2-18-not-found
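CentOS 7 is pinned to glibc 2.17, and replacing the system glibc is not a supported path, so yum cannot provide GLIBC_2.18. The practical options are rebuilding the module against glibc 2.17 or running it in a newer container image. A quick check of what the binary actually demands:

```shell
# List the versioned glibc symbols the module was linked against:
objdump -T ./redisql_v0.9.1_x86_64.so | grep 'GLIBC_2\.18'
```

If only a handful of 2.18 symbols show up, rebuilding the module from source on the CentOS 7 box itself is usually the least invasive fix.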
49,369,065
RHEL: This system is currently not set up to build kernel modules
I am trying to install virtualbox5.2 on a RHEL 7 VM When I try to rebuild kernels modules I get the following error: [root@myserver~]# /usr/lib/virtualbox/vboxdrv.sh setup vboxdrv.sh: Stopping VirtualBox services. vboxdrv.sh: Building VirtualBox kernel modules. This system is currently not set up to build kernel modules. Please install the Linux kernel "header" files matching the current kernel for adding new hardware support to the system. The distribution packages containing the headers are probably: kernel-devel kernel-devel-3.10.0-693.11.1.el7.x86_64 I tried install kernet-devel and got success message Installed: kernel-devel.x86_64 0:3.10.0-693.21.1.el7 Complete! But still the setup fails. Any idea what is missing here?
virtualbox, redhat, rhel7
11
61,669
9
https://stackoverflow.com/questions/49369065/rhel-this-system-is-currently-not-set-up-to-build-kernel-modules
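The transcript above shows the mismatch: the running kernel is 3.10.0-693.11.1 but the kernel-devel that got installed is 3.10.0-693.21.1. The module build needs headers for the kernel actually running, so either install the exact matching version or bring both to the same version and reboot:

```shell
# Option 1: headers that match the running kernel exactly:
yum install -y "kernel-devel-$(uname -r)"
# Option 2: update kernel and headers together, then reboot into the new one:
yum install -y kernel kernel-devel && reboot
# Afterwards, rebuild the VirtualBox modules:
/usr/lib/virtualbox/vboxdrv.sh setup
```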
8,254,705
Redhat Linux - change directory color
I am using Redhat Linux and the problem I am facing is that the "blue" colour of the directories is hardly visible on the black background. I found some posts on the web which asks to change some settings in the file /etc/profile.d/colorls.sh and /etc/profile.d/colorls.csh . However, this will change the colour settings for everyone who logs into the system. Could someone please let me know how I can change the colour settings that will affect only me?
linux, bash, shell, unix, redhat
11
31,537
4
https://stackoverflow.com/questions/8254705/redhat-linux-change-directory-color
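One per-user approach (a sketch, assuming bash and GNU ls): override LS_COLORS in ~/.bashrc instead of touching /etc/profile.d/. The `di` key controls the directory colour; `01;33` is bold yellow, which reads well on black.

```shell
# Append to ~/.bashrc -- affects only your own logins:
export LS_COLORS="${LS_COLORS:+$LS_COLORS:}di=01;33"
alias ls='ls --color=auto'
```

Running `dircolors -p > ~/.dircolors` dumps a full colour template to edit if more than the directory colour needs changing.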
52,417,318
Why does the free() function not return memory to the operating system?
When I watch my program with the top utility on Linux, I can't see any effect of free. My expectation was: free the map and the list; the memory usage shown by top or /proc/meminfo gets smaller than before; sleep starts; the program exits. But the memory usage only gets smaller when the program ends. Could you explain the logic of the free function? Below is my code. for(mapIter = bufMap->begin(); mapIter != bufMap -> end();mapIter++) { list<buff> *buffList = mapIter->second; list<buff>::iterator listIter; for(listIter = buffList->begin(); listIter != buffList->end();listIter++) { free(listIter->argu1); free(listIter->argu2); free(listIter->argu3); } delete buffList; } delete bufMap; printf("Free Complete!\n"); sleep(10); printf("endend\n"); Thank you.
Why does the free() function not return memory to the operating system? When I watch my program with the top utility on Linux, I can't see any effect of free. My expectation was: free the map and the list; the memory usage shown by top or /proc/meminfo gets smaller than before; sleep starts; the program exits. But the memory usage only gets smaller when the program ends. Could you explain the logic of the free function? Below is my code. for(mapIter = bufMap->begin(); mapIter != bufMap -> end();mapIter++) { list<buff> *buffList = mapIter->second; list<buff>::iterator listIter; for(listIter = buffList->begin(); listIter != buffList->end();listIter++) { free(listIter->argu1); free(listIter->argu2); free(listIter->argu3); } delete buffList; } delete bufMap; printf("Free Complete!\n"); sleep(10); printf("endend\n"); Thank you.
c++, linux, redhat
11
5,560
2
https://stackoverflow.com/questions/52417318/why-does-the-free-function-not-return-memory-to-the-operating-system
27,491,467
How to read file through ssh/scp directly
I have a program written in C/C++ that reads two files and then generates some reports. The typical workflow is as follows: 1> scp user@server01:/temp/file1.txt ~/ then enter my password at the prompt 2> my_program file1.txt localfile.txt Is there a way to let my program handle the remote file directly, without explicitly copying it to the local machine first? I have tried the following command but it doesn't work for me. > my_program <(ssh user@server01:/temp/file1.txt) localfile.txt
How to read file through ssh/scp directly I have a program written in C/C++ that reads two files and then generates some reports. The typical workflow is as follows: 1> scp user@server01:/temp/file1.txt ~/ then enter my password at the prompt 2> my_program file1.txt localfile.txt Is there a way to let my program handle the remote file directly, without explicitly copying it to the local machine first? I have tried the following command but it doesn't work for me. > my_program <(ssh user@server01:/temp/file1.txt) localfile.txt
linux, redhat
11
40,856
2
https://stackoverflow.com/questions/27491467/how-to-read-file-through-ssh-scp-directly
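The process-substitution idea is right; the part after `ssh` just has to be a host plus a command, not a host:path. A sketch (my_program and the paths come from the question):

```shell
# ssh runs `cat` remotely; bash exposes its stdout as /dev/fd/NN, which
# my_program opens like an ordinary file (uncomment where ssh access exists):
#   my_program <(ssh user@server01 'cat /temp/file1.txt') localfile.txt

# The same mechanism, demonstrated with a local stand-in for ssh:
head -n 1 <(printf 'remote line 1\nremote line 2\n')
```

Caveat: the substituted "file" is a pipe, so this only works if my_program reads it sequentially rather than seeking in it.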
44,159,793
Trusted Root Certificates in DotNet Core on Linux (RHEL 7.1)
I'm currently deploying a .net-core web-api to a docker container on rhel 7.1. Everything works as expected, but from my application I need to call other services via https, and those hosts use certificates signed by self-maintained root certificates. In this constellation I get ssl errors while calling these services (ssl not valid), and therefore I need to install this root certificate in the docker container, or somehow use the root certificate in the .net-core application. How can this be done? Is there a best practice for handling this situation? Will .net-core access the right keystore on the rhel system?
Trusted Root Certificates in DotNet Core on Linux (RHEL 7.1) I'm currently deploying a .net-core web-api to a docker container on rhel 7.1. Everything works as expected, but from my application I need to call other services via https, and those hosts use certificates signed by self-maintained root certificates. In this constellation I get ssl errors while calling these services (ssl not valid), and therefore I need to install this root certificate in the docker container, or somehow use the root certificate in the .net-core application. How can this be done? Is there a best practice for handling this situation? Will .net-core access the right keystore on the rhel system?
ssl, ssl-certificate, .net-core, redhat, root-certificate
11
11,423
1
https://stackoverflow.com/questions/44159793/trusted-root-certificates-in-dotnet-core-on-linux-rhel-7-1
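On RHEL the system trust store that OpenSSL (and hence .NET Core on Linux) reads is managed by update-ca-trust; a sketch, with the certificate filename as a placeholder:

```shell
# Add the self-maintained root CA (PEM format) to the system anchors
# and regenerate the consolidated bundles:
sudo cp my-root-ca.crt /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust extract
```

In a Dockerfile the same two steps become a COPY plus a RUN layer, executed at image build time.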
11,228,078
How do I get libpam.so.0 (32 bit) on my 64bit RHEL6?
I am trying to install DB2 Enterprise Server on my RHEL6 machine. Unfortunately, it seems that it needs the 32bit version of libpam.so.0 for some routines. The machine runs the 64 bit version which seems to have the lib installed... I assume it's the 64 version. Is there any way to get and install the 32 bit version to be used by the DB2 installer?
How do I get libpam.so.0 (32 bit) on my 64bit RHEL6? I am trying to install DB2 Enterprise Server on my RHEL6 machine. Unfortunately, it seems that it needs the 32bit version of libpam.so.0 for some routines. The machine runs the 64 bit version which seems to have the lib installed... I assume it's the 64 version. Is there any way to get and install the 32 bit version to be used by the DB2 installer?
linux, db2, redhat
11
72,131
3
https://stackoverflow.com/questions/11228078/how-do-i-get-libpam-so-0-32-bit-on-my-64bit-rhel6
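On RHEL 6 the 32-bit build of a library is the .i686 package, and it installs alongside its 64-bit counterpart:

```shell
sudo yum install pam.i686
# The 32-bit library lands in /lib/libpam.so.0 while the 64-bit one
# stays in /lib64/libpam.so.0, so the DB2 installer can resolve its
# 32-bit dependency.
```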
20,792,829
How to check recently installed rpms?
I am trying to find the most recently installed rpms on my RedHat Linux system. Does RPM provide any way to do this? I have tried # rpm -qa But it only lists the installed rpms, without any install dates or ordering. What options are available for this?
How to check recently installed rpms? I am trying to find the most recently installed rpms on my RedHat Linux system. Does RPM provide any way to do this? I have tried # rpm -qa But it only lists the installed rpms, without any install dates or ordering. What options are available for this?
linux, centos, redhat, rpm
11
14,893
2
https://stackoverflow.com/questions/20792829/how-to-check-recently-installed-rpms
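rpm records the install time of every package in its database; the `--last` option sorts by it:

```shell
# Most recently installed packages first:
rpm -qa --last | head -20

# The same data with an explicit, sortable query format:
rpm -qa --qf '%{INSTALLTIME} %{NAME}-%{VERSION}-%{RELEASE}\n' | sort -rn | head -20
```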
21,683,138
Unable to install rgdal and rgeos R libraries on Red hat linux
I get errors while compiling the rgdal and rgeos packages on our redhat linux machine. I tried to do some research but couldn't find a possible solution. Could you please help me with this, as it is very important for me to solve. **ERROR WHILE COMPILING RGDAL in R 3.0** **strong text** * installing *source* package 'rgdal' ... ** package 'rgdal' successfully unpacked and MD5 sums checked configure: CC: gcc -std=gnu99 configure: CXX: g++ configure: rgdal: 0.8-10 checking for /usr/bin/svnversion... yes configure: svn revision: 496 configure: gdal-config: gdal-config checking gdal-config usability... ./configure: line 1397: gdal-config: command not found no Error: gdal-config not found The gdal-config script distributed with GDAL could not be found. If you have not installed the GDAL libraries, you can download the source from [URL] If you have installed the GDAL libraries, then make sure that gdal-config is in your path. Try typing gdal-config at a shell prompt and see if it runs. If not, use: --configure-args='--with-gdal-config=/usr/local/bin/gdal-config' with appropriate values for your installation. ERROR: configuration failed for package 'rgdal' *****ERROR WHILE COMPILING RGEOS:***** **strong text** * installing *source* package 'rgeos' ... ** package 'rgeos' successfully unpacked and MD5 sums checked configure: CC: gcc -std=gnu99 configure: CXX: g++ configure: rgeos: 0.2-17 checking for /usr/bin/svnversion... yes configure: svn revision: 413M checking geos-config usability... ./configure: line 1385: geos-config: command not found no configure: error: geos-config not usable ERROR: configuration failed for package 'rgeos'
Unable to install rgdal and rgeos R libraries on Red hat linux I get errors while compiling the rgdal and rgeos packages on our redhat linux machine. I tried to do some research but couldn't find a possible solution. Could you please help me with this, as it is very important for me to solve. **ERROR WHILE COMPILING RGDAL in R 3.0** **strong text** * installing *source* package 'rgdal' ... ** package 'rgdal' successfully unpacked and MD5 sums checked configure: CC: gcc -std=gnu99 configure: CXX: g++ configure: rgdal: 0.8-10 checking for /usr/bin/svnversion... yes configure: svn revision: 496 configure: gdal-config: gdal-config checking gdal-config usability... ./configure: line 1397: gdal-config: command not found no Error: gdal-config not found The gdal-config script distributed with GDAL could not be found. If you have not installed the GDAL libraries, you can download the source from [URL] If you have installed the GDAL libraries, then make sure that gdal-config is in your path. Try typing gdal-config at a shell prompt and see if it runs. If not, use: --configure-args='--with-gdal-config=/usr/local/bin/gdal-config' with appropriate values for your installation. ERROR: configuration failed for package 'rgdal' *****ERROR WHILE COMPILING RGEOS:***** **strong text** * installing *source* package 'rgeos' ... ** package 'rgeos' successfully unpacked and MD5 sums checked configure: CC: gcc -std=gnu99 configure: CXX: g++ configure: rgeos: 0.2-17 checking for /usr/bin/svnversion... yes configure: svn revision: 413M checking geos-config usability... ./configure: line 1385: geos-config: command not found no configure: error: geos-config not usable ERROR: configuration failed for package 'rgeos'
r, redhat, geos, rgdal
11
8,755
2
https://stackoverflow.com/questions/21683138/unable-to-install-rgdal-and-rgeos-r-libraries-on-red-hat-linux
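Both configure scripts fail for the same reason: the GDAL and GEOS development packages (which ship gdal-config and geos-config) are missing. A sketch, assuming the EPEL repository is enabled and carries these package names:

```shell
sudo yum install -y gdal-devel geos-devel proj-devel

# then retry the R installs:
Rscript -e 'install.packages(c("rgdal", "rgeos"), repos = "https://cloud.r-project.org")'
```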
12,961,336
I am unable to run a C++ program in Debian(Ubuntu) that works in Redhat(Centos)
TLDR: Having trouble compiling a C++ program that worked in Centos Redhat in Ubuntu Debian. Is there anything I should be aware of between these two that would make a C++ program compiled with the same compiler not work? Hello, I'm trying to compile and run Germline ([URL] It works fine in RedHat Centos, but because Centos isn't as supported as Ubuntu is for most things, I switched. And now this program does not work. It's entirely possible it's using some kind of RedHat-only functionality, but I'm using the same compiler (g++) to compile it in both environments. I've been pulling my hair out just trying to get this thing to work on Ubuntu as it is much nicer to work with, but as of now when I "make all" the project in Ubuntu it compiles, and then the tests spin forever (they never finish). No matter what binaries I use (compiled in Centos and copied, the failed test binaries I just mentioned, etc.), the program just always freezes. Kinda long, sorry. My main question is this: Are there any other C++ compilers I can try? Are there any Red Hat C++ libraries I might be missing? Or major differences in their C++ implementations that might cause this?
I am unable to run a C++ program in Debian(Ubuntu) that works in Redhat(Centos) TLDR: Having trouble compiling a C++ program that worked in Centos Redhat in Ubuntu Debian. Is there anything I should be aware of between these two that would make a C++ program compiled with the same compiler not work? Hello, I'm trying to compile and run Germline ([URL] It works fine in RedHat Centos, but because Centos isn't as supported as Ubuntu is for most things, I switched. And now this program does not work. It's entirely possible it's using some kind of RedHat-only functionality, but I'm using the same compiler (g++) to compile it in both environments. I've been pulling my hair out just trying to get this thing to work on Ubuntu as it is much nicer to work with, but as of now when I "make all" the project in Ubuntu it compiles, and then the tests spin forever (they never finish). No matter what binaries I use (compiled in Centos and copied, the failed test binaries I just mentioned, etc.), the program just always freezes. Kinda long, sorry. My main question is this: Are there any other C++ compilers I can try? Are there any Red Hat C++ libraries I might be missing? Or major differences in their C++ implementations that might cause this?
c++, ubuntu, centos, debian, redhat
11
3,089
5
https://stackoverflow.com/questions/12961336/i-am-unable-to-run-a-c-program-in-debianubuntu-that-works-in-redhatcentos
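A first diagnostic step for a hang like this, sketched below (the binary name germline_test is a stand-in for whichever test is spinning):

```shell
# Backtraces of every thread in the stuck process:
gdb -p "$(pidof germline_test)" -batch -ex 'thread apply all bt'

# Is it busy-looping or blocked in a syscall?
strace -f -p "$(pidof germline_test)"
```

A backtrace that never leaves one loop usually points at undefined behaviour that the two toolchains happen to compile differently, rather than a missing Red Hat library.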
55,457,902
Keycloak Customization to run custom java in authentication flow
Please let me know if this is not the right place to post, but I have been looking all over for information regarding this and can't seem to find a concise answer. I have been attempting to use keycloak to meet our application's user management requirements. While I have found keycloak to be very capable and quite effective, I have run into what may be a dead end for our usage. Background: Traditionally, our application has used a very basic login framework that would verify the authentication. Then, using a third party application that we cannot change, it would identify the roles that user would have via a wsdl operation and insert them into our application's database. For example, if we verify the user John Doe exists and authenticate his credentials, we call the wsdl in our java code to get what roles that user should have (super user, guest, regular user). Obviously this entire framework is pretty flawed, and at the end of the day, this is why we've chosen to use keycloak. Problem Unfortunately, as I mentioned, we cannot change the third party application, and we must get user role mappings from this wsdl operation. I know there is a way to create/modify keycloak's users and roles via java functions. However, in order to keep this architecture modular, is there a way to configure the authentication flow to reach out to this WSDL on keycloak's side for role mapping? (i.e. not in the application code, but maybe in a scriptlet in the authentication flow) What I am looking for is essentially how to configure the authentication flow to run something as simple as "hello world" in java after the credentials are verified but before access is granted. Not sure if the Authentication SPI could be used.
Keycloak Customization to run custom java in authentication flow Please let me know if this is not the right place to post, but I have been looking all over for information regarding this and can't seem to find a concise answer. I have been attempting to use keycloak to meet our application's user management requirements. While I have found keycloak to be very capable and quite effective, I have run into what may be a dead end for our usage. Background: Traditionally, our application has used a very basic login framework that would verify the authentication. Then, using a third party application that we cannot change, it would identify the roles that user would have via a wsdl operation and insert them into our application's database. For example, if we verify the user John Doe exists and authenticate his credentials, we call the wsdl in our java code to get what roles that user should have (super user, guest, regular user). Obviously this entire framework is pretty flawed, and at the end of the day, this is why we've chosen to use keycloak. Problem Unfortunately, as I mentioned, we cannot change the third party application, and we must get user role mappings from this wsdl operation. I know there is a way to create/modify keycloak's users and roles via java functions. However, in order to keep this architecture modular, is there a way to configure the authentication flow to reach out to this WSDL on keycloak's side for role mapping? (i.e. not in the application code, but maybe in a scriptlet in the authentication flow) What I am looking for is essentially how to configure the authentication flow to run something as simple as "hello world" in java after the credentials are verified but before access is granted. Not sure if the Authentication SPI could be used.
java, security, architecture, redhat, keycloak
11
8,718
2
https://stackoverflow.com/questions/55457902/keycloak-customization-to-run-custom-java-in-authentication-flow
71,089,827
Is there any easy way to convert (CRD) CustomResourceDefinition to json schema?
I am developing CRDs for Kubernetes, using VS Code as my IDE. I want to provide autocompletion and IntelliSense in the IDE, which requires a JSON schema. I have a huge number of CRDs to support, so I want an easy way to convert CRDs to JSON Schema.
Is there any easy way to convert (CRD) CustomResourceDefinition to json schema? I am developing CRDs for Kubernetes, using VS Code as my IDE. I want to provide autocompletion and IntelliSense in the IDE, which requires a JSON schema. I have a huge number of CRDs to support, so I want an easy way to convert CRDs to JSON Schema.
kubernetes, visual-studio-code, redhat
11
6,664
2
https://stackoverflow.com/questions/71089827/is-there-any-easy-way-to-convert-crd-customresourcedefinition-to-json-schema
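For apiextensions.k8s.io/v1 CRDs the schema is already embedded at .spec.versions[*].schema.openAPIV3Schema, which is essentially JSON Schema. A sketch of extracting it, assuming mikefarah's yq v4 and an example file name:

```shell
yq -o=json '.spec.versions[0].schema.openAPIV3Schema' my-crd.yaml > my-crd.schema.json
```

Looping over .spec.versions and over the CRD files batch-converts a large set.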
20,992,356
GDB jumps to wrong lines in out of order fashion
Application Setup: I've a C++11 application consuming the following 3rd party libraries: boost 1.51.0 cppnetlib 0.9.4 jsoncpp 0.5.0 The application code relies on several in-house shared objects, all of them developed by my team (classical link-time linking against those shared objects is carried out, no usage of dlopen etc.) I'm using GCC 4.6.2 and the issue appears when using GDB 7.4 and 7.6. OS - Red Hat Linux release 7.0 (Guinness) x86-64 The issue While hitting breakpoints within the shared objects' code, and issuing the gdb next command, sometimes GDB jumps backward to certain lines w/o any plausible reason (especially after exceptions are thrown; for those exceptions there are suitable catch blocks). Similar issues on the web are answered with something along the lines of 'turn off any GCC optimization', but my GCC command line clearly doesn't use any optimization and asks for debug information; please note the -O0 & -g switches: COLLECT_GCC_OPTIONS= '-D' '_DEBUG' '-O0' '-g' '-Wall' '-fmessage-length=0' '-v' '-fPIC' '-D' 'BOOST_ALL_DYN_LINK' '-D' 'BOOST_PARAMETER_MAX_ARITY=15' '-D' '_GLIBCXX_USE_NANOSLEEP' '-Wno-deprecated' '-std=c++0x' '-fvisibility=hidden' '-c' '-MMD' '-MP' '-MF' 'Debug_x64/AgentRegisterer.d' '-MT' 'Debug_x64/AgentRegisterer.d' '-MT' 'Debug_x64/AgentRegisterer.o' '-o' 'Debug_x64/AgentRegisterer.o' '-shared-libgcc' '-mtune=generic' '-march=x86-64' Please also note, as per Linux DSO best known methods, we have hidden visibility of symbols; only the classes we would like to expose are being exposed (maybe this is related?). What should be the next steps in root-causing this issue?
GDB jumps to wrong lines in out of order fashion Application Setup: I've a C++11 application consuming the following 3rd party libraries: boost 1.51.0 cppnetlib 0.9.4 jsoncpp 0.5.0 The application code relies on several in-house shared objects, all of them developed by my team (classical link-time linking against those shared objects is carried out, no usage of dlopen etc.) I'm using GCC 4.6.2 and the issue appears when using GDB 7.4 and 7.6. OS - Red Hat Linux release 7.0 (Guinness) x86-64 The issue While hitting breakpoints within the shared objects' code, and issuing the gdb next command, sometimes GDB jumps backward to certain lines w/o any plausible reason (especially after exceptions are thrown; for those exceptions there are suitable catch blocks). Similar issues on the web are answered with something along the lines of 'turn off any GCC optimization', but my GCC command line clearly doesn't use any optimization and asks for debug information; please note the -O0 & -g switches: COLLECT_GCC_OPTIONS= '-D' '_DEBUG' '-O0' '-g' '-Wall' '-fmessage-length=0' '-v' '-fPIC' '-D' 'BOOST_ALL_DYN_LINK' '-D' 'BOOST_PARAMETER_MAX_ARITY=15' '-D' '_GLIBCXX_USE_NANOSLEEP' '-Wno-deprecated' '-std=c++0x' '-fvisibility=hidden' '-c' '-MMD' '-MP' '-MF' 'Debug_x64/AgentRegisterer.d' '-MT' 'Debug_x64/AgentRegisterer.d' '-MT' 'Debug_x64/AgentRegisterer.o' '-o' 'Debug_x64/AgentRegisterer.o' '-shared-libgcc' '-mtune=generic' '-march=x86-64' Please also note, as per Linux DSO best known methods, we have hidden visibility of symbols; only the classes we would like to expose are being exposed (maybe this is related?). What should be the next steps in root-causing this issue?
c++, c++11, gdb, g++, redhat
11
4,457
3
https://stackoverflow.com/questions/20992356/gdb-jumps-to-wrong-lines-in-out-of-order-fashion
54,470,463
Is there a specification for the YUM metadata?
I'm trying to find a trusted point of truth for the following yum metadata files: primary.xml.gz filelists.xml.gz other.xml.gz repomd.gz groups.xml.gz I've been looking around the Internet, but I haven't found a definitive reference, or guide. Is there a concrete specification, or RFC for this, or is this open for interpretation and implementation? I've come across these useful links: Anatomy of YUM Repositories: A Look Under The Hood YUM Repository And Package Management: Complete Tutorial openSUSE: Standards RPM Metadata But I haven't managed to find an actual specification for this. Does anybody know if there is one, or where to find more details?
Is there a specification for the YUM metadata? I'm trying to find a trusted point of truth for the following yum metadata files: primary.xml.gz filelists.xml.gz other.xml.gz repomd.gz groups.xml.gz I've been looking around the Internet, but I haven't found a definitive reference, or guide. Is there a concrete specification, or RFC for this, or is this open for interpretation and implementation? I've come across these useful links: Anatomy of YUM Repositories: A Look Under The Hood YUM Repository And Package Management: Complete Tutorial openSUSE: Standards RPM Metadata But I haven't managed to find an actual specification for this. Does anybody know if there is one, or where to find more details?
redhat, rpm, yum
11
1,222
0
https://stackoverflow.com/questions/54470463/is-there-a-specification-for-the-yum-metadata
36,545,206
How to install specific version of Docker on Centos?
I tried to install docker 1.8.2 on Centos7. The docs don't say anything about versioning. Can someone help me? I tried wget -qO- [URL] | sed 's/lxc-docker/lxc-docker-1.8.2/' | sh + sh -c 'sleep 3; yum -y -q install docker-engine' but it didn't work. EDIT: I performed: yum install -y [URL] That works, but I'm missing options such as docker-storage-setup and docker-fetch
How to install specific version of Docker on Centos? I tried to install docker 1.8.2 on Centos7. The docs don't say anything about versioning. Can someone help me? I tried wget -qO- [URL] | sed 's/lxc-docker/lxc-docker-1.8.2/' | sh + sh -c 'sleep 3; yum -y -q install docker-engine' but it didn't work. EDIT: I performed: yum install -y [URL] That works, but I'm missing options such as docker-storage-setup and docker-fetch
docker, centos, redhat
10
45,907
5
https://stackoverflow.com/questions/36545206/how-to-install-specific-version-of-docker-on-centos
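yum itself supports version pinning when the configured repo still carries the old build; a sketch:

```shell
# See which versions the configured repos offer:
yum list docker-engine --showduplicates | sort -r

# Install a specific one by name-version:
sudo yum install -y docker-engine-1.8.2
```

If docker-storage-setup is still missing afterwards, it may ship in a separate package depending on the repo.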
54,034,302
Creating mailbox file: File exists
I added user through command adduser satya I deleted the same user by userdel satya When I tried adding again useradd satya I got the following error: Creating mailbox file: File exists
Creating mailbox file: File exists I added user through command adduser satya I deleted the same user by userdel satya When I tried adding again useradd satya I got the following error: Creating mailbox file: File exists
linux, redhat
10
22,646
2
https://stackoverflow.com/questions/54034302/creating-mailbox-file-file-exists
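The cause: userdel without -r leaves /var/spool/mail/satya behind, and useradd refuses to reuse it. A sketch:

```shell
# Remove the stale mail spool, then re-create the user:
sudo rm /var/spool/mail/satya
sudo useradd satya

# In future, remove the home directory and mail spool with the user:
sudo userdel -r satya
```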
45,569,367
Upgrade RHEL from 7.3 to 7.4: ArrayIndexOutOfBoundsException in sun.font.CompositeStrike.getStrikeForSlot
We just upgraded a server from RHEL v7.3 to v7.4. This simple program works in RHEL v7.3 and fails in v7.4 public class TestJava { public static void main(String[] args) { Font font = new Font("SansSerif", Font.PLAIN, 12); FontRenderContext frc = new FontRenderContext(null, false, false); TextLayout layout = new TextLayout("\ude00", font, frc); layout.getCaretShapes(0); System.out.println(layout); } } The exception in RHEL 7.4 is : Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 0 at sun.font.CompositeStrike.getStrikeForSlot(CompositeStrike.java:75) at sun.font.CompositeStrike.getFontMetrics(CompositeStrike.java:93) at sun.font.Font2D.getFontMetrics(Font2D.java:415) at java.awt.Font.defaultLineMetrics(Font.java:2176) at java.awt.Font.getLineMetrics(Font.java:2283) at java.awt.font.TextLayout.fastInit(TextLayout.java:598) at java.awt.font.TextLayout.<init>(TextLayout.java:393) The result on RHEL v7.3 is: sun.font.StandardTextSource@7ba4f24f[start:0, len:1, cstart:0, clen:1, chars:"de00", level:0, flags:0, font:java.awt.Font[family=SansSerif,name=SansSerif,style=plain,size=12], frc:java.awt.font.FontRenderContext@c14b833b, cm:sun.font.CoreMetrics@412ae196] The update of RHEL v7.4 includes an update of openjdk from 1.8.0.131 to 1.8.0.141, but this does not seem to be related to the version of openjdk, as the problem is the same with the IBM JDK coming with WebSphere v9.0 (v1.8.0 SR4 FP6). With the same version of the IBM JDK on a RHEL v7.3 and a RHEL v7.4 server, the program works in RH 7.3 and fails in RH 7.4 the same way as with openjdk. Any idea what's going on?
Upgrade RHEL from 7.3 to 7.4: ArrayIndexOutOfBoundsException in sun.font.CompositeStrike.getStrikeForSlot We just upgraded a server from RHEL v7.3 to v7.4. This simple program works in RHEL v7.3 and fails in v7.4 public class TestJava { public static void main(String[] args) { Font font = new Font("SansSerif", Font.PLAIN, 12); FontRenderContext frc = new FontRenderContext(null, false, false); TextLayout layout = new TextLayout("\ude00", font, frc); layout.getCaretShapes(0); System.out.println(layout); } } The exception in RHEL 7.4 is : Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 0 at sun.font.CompositeStrike.getStrikeForSlot(CompositeStrike.java:75) at sun.font.CompositeStrike.getFontMetrics(CompositeStrike.java:93) at sun.font.Font2D.getFontMetrics(Font2D.java:415) at java.awt.Font.defaultLineMetrics(Font.java:2176) at java.awt.Font.getLineMetrics(Font.java:2283) at java.awt.font.TextLayout.fastInit(TextLayout.java:598) at java.awt.font.TextLayout.<init>(TextLayout.java:393) The result on RHEL v7.3 is: sun.font.StandardTextSource@7ba4f24f[start:0, len:1, cstart:0, clen:1, chars:"de00", level:0, flags:0, font:java.awt.Font[family=SansSerif,name=SansSerif,style=plain,size=12], frc:java.awt.font.FontRenderContext@c14b833b, cm:sun.font.CoreMetrics@412ae196] The update of RHEL v7.4 includes an update of openjdk from 1.8.0.131 to 1.8.0.141, but this does not seem to be related to the version of openjdk, as the problem is the same with the IBM JDK coming with WebSphere v9.0 (v1.8.0 SR4 FP6). With the same version of the IBM JDK on a RHEL v7.3 and a RHEL v7.4 server, the program works in RH 7.3 and fails in RH 7.4 the same way as with openjdk. Any idea what's going on?
awt, redhat, java, ibm-jdk
10
22,372
4
https://stackoverflow.com/questions/45569367/upgrade-rhel-from-7-3-to-7-4-arrayindexoutofboundsexception-in-sun-font-composi
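A commonly reported cause on RHEL 7.4 is that a minimal install no longer pulls in any physical fonts, so the JDK's composite font ends up with zero slots and slot lookups throw ArrayIndexOutOfBoundsException. A hedged sketch of the fix (the font package is one example):

```shell
sudo yum install -y fontconfig dejavu-sans-fonts
fc-list | head   # confirm at least one font is now registered
```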
22,014,397
C - Implicit declaration of the function "pthread_timedjoin_np"
I am porting a windows library to linux. I need to use timed join to wait for the thread to join in a specific timeout. When I compile the library on Linux I am getting the warning Implicit declaration of the function - pthread_timedjoin_np I have included pthread.h and have compiled with -lpthread link. I know that pthread_timedjoin_np is a non-standard GNU function. The function first appeared in glibc in version 2.3.3. and somewhere in BCD v6. I even checked the Man Page for Linux but got no help. How do I avoid this warning? Any help? Edit-1: My system is RedHat 5.
C - Implicit declaration of the function "pthread_timedjoin_np" I am porting a windows library to linux. I need to use timed join to wait for the thread to join in a specific timeout. When I compile the library on Linux I am getting the warning Implicit declaration of the function - pthread_timedjoin_np I have included pthread.h and have compiled with -lpthread link. I know that pthread_timedjoin_np is a non-standard GNU function. The function first appeared in glibc in version 2.3.3. and somewhere in BCD v6. I even checked the Man Page for Linux but got no help. How do I avoid this warning? Any help? Edit-1: My system is RedHat 5.
c, linux, multithreading, redhat, porting
10
13,396
1
https://stackoverflow.com/questions/22014397/c-implicit-declaration-of-the-function-pthread-timedjoin-np
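pthread_timedjoin_np is guarded by _GNU_SOURCE in glibc's headers, so its declaration stays hidden unless that macro is defined before any include. The fix is a compile flag (file names below are placeholders):

```shell
gcc -D_GNU_SOURCE -c mylib.c -o mylib.o
gcc -o app main.o mylib.o -lpthread
```

Equivalently, put `#define _GNU_SOURCE` as the very first line of the source file, above `#include <pthread.h>`.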
18,766,930
Resolve GCC error when installing python-ldap on Redhat Enterprise Server
Python-LDAP + Redhat = Gnashing of Teeth Recently, I spent a few hours tearing my hair (or what's left of it) out attempting to install python-ldap (via pip) onto a Redhat Enterprise server. Here's the error message that I would get (look familiar?): Modules/constants.c:365: error: ‘LDAP_CONTROL_RELAX’ undeclared (first use in this function) error: command 'gcc' failed with exit status 1 If only there was someone out there that could help me!
Resolve GCC error when installing python-ldap on Redhat Enterprise Server Python-LDAP + Redhat = Gnashing of Teeth Recently, I spent a few hours tearing my hair (or what's left of it) out attempting to install python-ldap (via pip) onto a Redhat Enterprise server. Here's the error message that I would get (look familiar?): Modules/constants.c:365: error: ‘LDAP_CONTROL_RELAX’ undeclared (first use in this function) error: command 'gcc' failed with exit status 1 If only there was someone out there that could help me!
python, redhat, python-ldap
10
9,799
2
https://stackoverflow.com/questions/18766930/resolve-gcc-error-when-installing-python-ldap-on-redhat-enterprise-server
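LDAP_CONTROL_RELAX comes from the OpenLDAP headers, so this usually means the openldap development package is missing or predates the constant. A sketch of the usual build prerequisites (RHEL package names):

```shell
sudo yum install -y gcc python-devel openldap-devel cyrus-sasl-devel openssl-devel
pip install python-ldap
```

If the system OpenLDAP is simply too old to define the constant, pinning an older python-ldap release that predates its use is the common workaround.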
27,862,664
(13)Permission denied: Error retrieving pid file run/httpd.pid
I have installed httpd-2.2.29 using the commands: ./configure --prefix=/home/user/httpd make make install I configured httpd.conf and tried to start Apache with: apachectl start . But I got the following error: (13)Permission denied: Error retrieving pid file run/httpd.pid Remove it before continuing if it is corrupted. I tried to find the file httpd.pid , but there is no such file. Could someone help me resolve this issue?
(13)Permission denied: Error retrieving pid file run/httpd.pid I have installed httpd-2.2.29 using the commands: ./configure --prefix=/home/user/httpd make make install I configured httpd.conf and tried to start Apache with: apachectl start . But I got the following error: (13)Permission denied: Error retrieving pid file run/httpd.pid Remove it before continuing if it is corrupted. I tried to find the file httpd.pid , but there is no such file. Could someone help me resolve this issue?
apache, redhat
10
29,108
4
https://stackoverflow.com/questions/27862664/13permission-denied-error-retrieving-pid-file-run-httpd-pid
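With a home-directory prefix, httpd resolves the relative run/httpd.pid against ServerRoot, and that directory is typically missing or unwritable. A sketch using the paths from the question:

```shell
# Give httpd a writable location and point PidFile at it explicitly:
mkdir -p /home/user/httpd/run
echo 'PidFile "/home/user/httpd/run/httpd.pid"' >> /home/user/httpd/conf/httpd.conf
/home/user/httpd/bin/apachectl start
```

Note that an unprivileged install cannot bind ports below 1024, so Listen 80 would need to become e.g. Listen 8080.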
540,907
How can I tell if I'm running in a VMWARE virtual machine (from linux)?
I have a VMWARE ESX server. I have Redhat VMs running on that server. I need a way of programmatically testing if I'm running in a VM. Ideally, I'd like to know how to do this from Perl.
How can I tell if I'm running in a VMWARE virtual machine (from linux)? I have a VMWARE ESX server. I have Redhat VMs running on that server. I need a way of programmatically testing if I'm running in a VM. Ideally, I'd like to know how to do this from Perl.
perl, vmware, redhat, esx
10
27,031
5
https://stackoverflow.com/questions/540907/how-can-i-tell-if-im-running-in-a-vmware-virtual-machine-from-linux
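One programmatic check (a sketch, callable from Perl via backticks or a plain file read): the DMI/SMBIOS tables expose the machine vendor, and VMware guests report "VMware, Inc.".

```shell
# Pure string check, separated out so it is easy to test:
is_vmware() {
    case "$1" in
        *VMware*) return 0 ;;
        *)        return 1 ;;
    esac
}

# sysfs is readable without root; dmidecode is a root-only fallback:
vendor="$(cat /sys/class/dmi/id/sys_vendor 2>/dev/null || dmidecode -s system-manufacturer 2>/dev/null)"
if is_vmware "$vendor"; then
    echo "VMware guest"
else
    echo "not a VMware guest (or DMI not readable)"
fi
```

The /sys/class/dmi path exists on reasonably modern kernels; on older systems dmidecode (as root) is the fallback.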
25,695,346
How can I auto-deploy my git repo's submodules on push?
I have a PHP Cartridge that is operating normally, except I can't find a straightforward way to get OpenShift to (recursively) push the files for my git submodules when/after it pushes my core repo files. This seems like it should be a super straightforward and common use-case. Am I overlooking something? I could probably ssh into my server and pull them manually, but I'd like to automate this completely, so that if I update the submodule's reference in my repo these changes will be reflected when I deploy.
How can I auto-deploy my git repo's submodules on push? I have a PHP Cartridge that is operating normally, except I can't find a straightforward way to get OpenShift to (recursively) push the files for my git submodules when/after it pushes my core repo files. This seems like it should be a super straightforward and common use-case. Am I overlooking something? I could probably ssh into my server and pull them manually, but I'd like to automate this completely, so that if I update the submodule's reference in my repo these changes will be reflected when I deploy.
git, deployment, openshift, redhat
10
2,623
2
https://stackoverflow.com/questions/25695346/how-can-i-auto-deploy-my-git-repos-submodules-on-push
32,746,419
When and Why run alternatives --install java jar javac javaws on installing jdk in linux
To install java in linux (I used CentOS; RHEL is the same), I used this command rpm -Uvh /path/to/binary/jdk-7u55-linux-x64.rpm and verified java with java -version Looking at a tutorial, it says to run the following 4 commands, and I am not sure why ## java ## alternatives --install /usr/bin/java java /usr/java/latest/jre/bin/java 200000 ## javaws ## alternatives --install /usr/bin/javaws javaws /usr/java/latest/jre/bin/javaws 200000 ## Install javac only alternatives --install /usr/bin/javac javac /usr/java/latest/bin/javac 200000 ## jar ## alternatives --install /usr/bin/jar jar /usr/java/latest/bin/jar 200000 I know that if there are multiple versions of java installed, you can select the version to use with alternatives --config java so why run alternatives --install separately for each executable? I've seen this question but it doesn't answer my question
When and Why run alternatives --install java jar javac javaws on installing jdk in linux To install java in linux (I used CentOS; RHEL is the same), I used this command rpm -Uvh /path/to/binary/jdk-7u55-linux-x64.rpm and verified java with java -version Looking at a tutorial, it says to run the following 4 commands, and I am not sure why ## java ## alternatives --install /usr/bin/java java /usr/java/latest/jre/bin/java 200000 ## javaws ## alternatives --install /usr/bin/javaws javaws /usr/java/latest/jre/bin/javaws 200000 ## Install javac only alternatives --install /usr/bin/javac javac /usr/java/latest/bin/javac 200000 ## jar ## alternatives --install /usr/bin/jar jar /usr/java/latest/bin/jar 200000 I know that if there are multiple versions of java installed, you can select the version to use with alternatives --config java so why run alternatives --install separately for each executable? I've seen this question but it doesn't answer my question
java, linux, redhat
10
22,886
6
https://stackoverflow.com/questions/32746419/when-and-why-run-alternatives-install-java-jar-javac-javaws-on-installing-jdk
25,855,331
Installing rabbitmq-server on RHEL
When trying to install rabbitmq-server on RHEL: [ec2-user@ip-172-31-34-1XX ~]$ sudo rpm -i rabbitmq-server-3.3.5-1.noarch.rpm error: Failed dependencies: erlang >= R13B-03 is needed by rabbitmq-server-3.3.5-1.noarch [ec2-user@ip-172-31-34-1XX ~]$ rpm -i rabbitmq-server-3.3.5-1.noarch.rpm error: Failed dependencies: erlang >= R13B-03 is needed by rabbitmq-server-3.3.5-1.noarch I'm unsure why trying to rpm install isn't recognizing my erlang install since running $ erl gives: [ec2-user@ip-172-31-34-1XX ~]$ which erl /usr/local/bin/erl [ec2-user@ip-172-31-34-1XX ~]$ sudo which erl /bin/erl
Installing rabbitmq-server on RHEL When trying to install rabbitmq-server on RHEL: [ec2-user@ip-172-31-34-1XX ~]$ sudo rpm -i rabbitmq-server-3.3.5-1.noarch.rpm error: Failed dependencies: erlang >= R13B-03 is needed by rabbitmq-server-3.3.5-1.noarch [ec2-user@ip-172-31-34-1XX ~]$ rpm -i rabbitmq-server-3.3.5-1.noarch.rpm error: Failed dependencies: erlang >= R13B-03 is needed by rabbitmq-server-3.3.5-1.noarch I'm unsure why trying to rpm install isn't recognizing my erlang install since running $ erl gives: [ec2-user@ip-172-31-34-1XX ~]$ which erl /usr/local/bin/erl [ec2-user@ip-172-31-34-1XX ~]$ sudo which erl /bin/erl
erlang, rabbitmq, redhat, rhel
10
16,715
2
https://stackoverflow.com/questions/25855331/installing-rabbitmq-server-on-rhel
18,338,045
Enabling "Software collections". RedHat developer toolset
I just found out that RedHat provides this "Developer toolset" which allows me to install (and of course use) the most up-to-date gcc-4.7.2. I use it on Centos, but the process is the same. Once installed, you can start a new bash session with this toolset enabled by issuing: scl enable devtoolset-1.1 bash That works all right. Now, could I somehow add this to my bashrc since this actually starts a new bash session? Or should I better place it inside my makefiles to avoid starting a new bash session. Would there be a way to issue this within a makefile?
Enabling "Software collections". RedHat developer toolset I just found out that RedHat provides this "Developer toolset" which allows me to install (and of course use) the most up-to-date gcc-4.7.2. I use it on Centos, but the process is the same. Once installed, you can start a new bash session with this toolset enabled by issuing: scl enable devtoolset-1.1 bash That works all right. Now, could I somehow add this to my bashrc since this actually starts a new bash session? Or should I better place it inside my makefiles to avoid starting a new bash session. Would there be a way to issue this within a makefile?
makefile, centos, redhat, devtoolset, redhat-dts
10
5,344
2
https://stackoverflow.com/questions/18338045/enabling-software-collections-redhat-developer-toolset
45,326,347
How to know that docker installed in redhat is community or enterprise edition?
Someone has installed Docker on my Redhat system. I want to know whether it is Community Edition or Enterprise Edition. How can I find out? I know Community Edition is not for Redhat. Maybe someone created centos.repo on Redhat and installed Docker CE. This is what docker version gives. And this is what I get when I do "rpm -qif /usr/bin/docker".
How to know that docker installed in redhat is community or enterprise edition? Someone has installed Docker on my Redhat system. I want to know whether it is Community Edition or Enterprise Edition. How can I find out? I know Community Edition is not for Redhat. Maybe someone created centos.repo on Redhat and installed Docker CE. This is what docker version gives. And this is what I get when I do "rpm -qif /usr/bin/docker".
docker, redhat
10
11,443
3
https://stackoverflow.com/questions/45326347/how-to-know-that-docker-installed-in-redhat-is-community-or-enterprise-edition
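One practical signal (a sketch, not from the thread): starting with Docker 17.03 the version string itself carries the edition as a suffix, "-ce" or "-ee"; older 1.x or distro-built packages predate the split. On a live system you would feed the classifier the output of `docker version --format '{{.Server.Version}}'`; here it is exercised with sample strings so it runs anywhere.

```shell
# Sketch: infer the Docker edition from the version string suffix.
# Live input would come from: docker version --format '{{.Server.Version}}'
docker_edition() {
    case "$1" in
        *-ce*) echo "community" ;;
        *-ee*) echo "enterprise" ;;
        *)     echo "unknown (pre-17.03 or distro build)" ;;
    esac
}

docker_edition "17.06.2-ce"    # -> community
docker_edition "17.06.2-ee-5"  # -> enterprise
```

`rpm -qif /usr/bin/docker`, as in the question, remains the more authoritative check, since it names the repository the package actually came from.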
9,317,683
What would cause PHP variables to be rewritten by the server?
I was given a VM at my company to install web software on. But I came across a rather bizarre issue where PHP variables would be overwritten (rewritten) by the server if they matched a specific pattern. What could rewrite PHP variables like this? The following is as an entire standalone script. <?php $foo = 'b.domain.com'; echo $foo; // 'dev01.sandbox.b.domain.com' $bar = 'dev01.sandbox.domain.com'; echo $bar; // 'dev01.sandbox.sandbox.domain.com' $var = 'b.domainfoo.com'; echo $var; // 'b.domainfoo.com' (not overwritten because it didn't match whatever RegEx has been set) ?> Essentially any variable which contains a subdomain and matches on the domain name would be rewritten. This isn't something mod_rewrite would be able to touch, so it has to be something at the server level that is parsing out PHP and rewriting a string if it matches a RegEx.
What would cause PHP variables to be rewritten by the server? I was given a VM at my company to install web software on. But I came across a rather bizarre issue where PHP variables would be overwritten (rewritten) by the server if they matched a specific pattern. What could rewrite PHP variables like this? The following is as an entire standalone script. <?php $foo = 'b.domain.com'; echo $foo; // 'dev01.sandbox.b.domain.com' $bar = 'dev01.sandbox.domain.com'; echo $bar; // 'dev01.sandbox.sandbox.domain.com' $var = 'b.domainfoo.com'; echo $var; // 'b.domainfoo.com' (not overwritten because it didn't match whatever RegEx has been set) ?> Essentially any variable which contains a subdomain and matches on the domain name would be rewritten. This isn't something mod_rewrite would be able to touch, so it has to be something at the server level that is parsing out PHP and rewriting a string if it matches a RegEx.
php, apache, url-rewriting, redhat
10
388
1
https://stackoverflow.com/questions/9317683/what-would-cause-php-variables-to-be-rewritten-by-the-server
70,458,779
RHEL8.5 shell "BASH_FUNC_which%%" environment variable causes K8S pods to fail
Problem After moving to RHEL 8.5 from 8.4, we started having K8S pod failures. spec.template.spec.containers[0].env[52].name: Invalid value: "BASH_FUNC_which%%": a valid environment variable name must consist of alphabetic characters, digits, '_', '-', or '.', and must not start with a digit (e.g. 'my.env-name', or 'MY_ENV.NAME', or 'MyEnvName1', regex used for validation is '[-._a-zA-Z][-._a-zA-Z0-9]*') The env command in the login shell shows BASH_FUNC_which%% defined as below. BASH_FUNC_which%%=() { ( alias; eval ${which_declare} ) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot "$@" } It was suggested that /etc/profile.d/which2.sh is the one that sets up the BASH_FUNC_which%% . /etc/profile.d/which2.sh # shellcheck shell=sh # Initialization script for bash, sh, mksh and ksh which_declare="declare -f" which_opt="-f" which_shell="$(cat /proc/$$/comm)" if [ "$which_shell" = "ksh" ] || [ "$which_shell" = "mksh" ] || [ "$which_shell" = "zsh" ] ; then which_declare="typeset -f" which_opt="" fi which () { (alias; eval ${which_declare}) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot "$@" } export which_declare export ${which_opt} which By removing it, the issue was fixed. Question Please help me understand where exactly BASH_FUNC_which%% is set up in RHEL 8.5, what the purpose of this BASH_FUNC_which%% is, and why it has been introduced in RHEL.
RHEL8.5 shell "BASH_FUNC_which%%" environment variable causes K8S pods to fail Problem After moving to RHEL 8.5 from 8.4, we started having K8S pod failures. spec.template.spec.containers[0].env[52].name: Invalid value: "BASH_FUNC_which%%": a valid environment variable name must consist of alphabetic characters, digits, '_', '-', or '.', and must not start with a digit (e.g. 'my.env-name', or 'MY_ENV.NAME', or 'MyEnvName1', regex used for validation is '[-._a-zA-Z][-._a-zA-Z0-9]*') The env command in the login shell shows BASH_FUNC_which%% defined as below. BASH_FUNC_which%%=() { ( alias; eval ${which_declare} ) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot "$@" } It was suggested that /etc/profile.d/which2.sh is the one that sets up the BASH_FUNC_which%% . /etc/profile.d/which2.sh # shellcheck shell=sh # Initialization script for bash, sh, mksh and ksh which_declare="declare -f" which_opt="-f" which_shell="$(cat /proc/$$/comm)" if [ "$which_shell" = "ksh" ] || [ "$which_shell" = "mksh" ] || [ "$which_shell" = "zsh" ] ; then which_declare="typeset -f" which_opt="" fi which () { (alias; eval ${which_declare}) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot "$@" } export which_declare export ${which_opt} which By removing it, the issue was fixed. Question Please help me understand where exactly BASH_FUNC_which%% is set up in RHEL 8.5, what the purpose of this BASH_FUNC_which%% is, and why it has been introduced in RHEL.
kubernetes, environment-variables, redhat
10
5,970
1
https://stackoverflow.com/questions/70458779/rhel8-5-shell-bash-func-which-environment-variable-causes-k8s-pods-to-fail
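Background the question touches on: `BASH_FUNC_name%%` is bash's post-Shellshock encoding for an exported shell function, so any child process (including one that builds a pod spec from its environment) inherits it as an environment variable with `%%` in its name. A minimal, self-contained reproduction with a throwaway function:

```shell
# Reproduce the BASH_FUNC_* encoding with a harmless function: export it
# from bash, then show how it appears in a child process's environment.
bash -c '
    myfn() { echo hi; }
    export -f myfn
    env | grep "^BASH_FUNC_myfn"
'
```

If the inherited function is the problem, `unset -f which` before launching the process (or starting it with a scrubbed environment via `env -i`) keeps the entry out of the child's environment.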
33,817,481
Large PCIe DMA Linux x86-64
I am working with a high speed serial card for high rate data transfers from an external source to a Linux box with a PCIe card. The PCIe card came with some 3rd party drivers that use dma_alloc_coherent to allocate the dma buffers to receive the data. Due to Linux limitations however, this approach limits data transfers to 4MB. I have been reading and trying multiple methods for allocating a large DMA buffer and haven't been able to get one to work. This system has 32GB of memory and is running Red Hat with a kernel version of 3.10 and I would like to make 4GB of that available for a contiguous DMA. I know the preferred method is scatter/gather, but this is not possible in my situation as there is a hardware chip that translated the serial protocol into a DMA beyond my control, where the only thing that I can control is adding an offset to the incoming addresses (ie, address zero as seen from the external system can be mapped to address 0x700000000 on the local bus). Since this is a one-off lab machine I think the fastest/easiest approach would be to use mem=28GB boot configuration parameter. I have this working fine, but the next step to access that memory from virtual space is where I am having problems. Here is my code condensed to the relevant components: In the kernel module: size_t len = 0x100000000ULL; // 4GB size_t phys = 0x700000000ULL; // 28GB size_t virt = ioremap_nocache( phys, len ); // address not usable via direct reference size_t bus = (size_t)virt_to_bus( (void*)virt ); // this should be the same as phys for x86-64, shouldn't it? 
// OLD WAY /*size_t len = 0x400000; // 4MB size_t bus; size_t virt = dma_alloc_coherent( devHandle, len, &bus, GFP_ATOMIC ); size_t phys = (size_t)virt_to_phys( (void*)virt );*/ In the application: // Attempt to make a usable virtual pointer u32 pSize = sysconf(_SC_PAGESIZE); void* mapAddr = mmap(0, len+(phys%pSize), PROT_READ|PROT_WRITE, MAP_SHARED, devHandle, phys-(phys%pSize)); virt = (size_t)mapAddr + (phys%pSize); // do DMA to 0x700000000 bus address printf("Value %x\n", *((u32*)virt)); // this is returning zero Another interesting thing is that before doing all of this, the physical address returned from dma_alloc_coherent is greater than the amount of RAM on the system(0x83d000000). I thought that in x86 the RAM will always be the lowest addresses and therefore I would expect an address less than 32GB. Any help would be appreciated.
Large PCIe DMA Linux x86-64 I am working with a high speed serial card for high rate data transfers from an external source to a Linux box with a PCIe card. The PCIe card came with some 3rd party drivers that use dma_alloc_coherent to allocate the dma buffers to receive the data. Due to Linux limitations however, this approach limits data transfers to 4MB. I have been reading and trying multiple methods for allocating a large DMA buffer and haven't been able to get one to work. This system has 32GB of memory and is running Red Hat with a kernel version of 3.10 and I would like to make 4GB of that available for a contiguous DMA. I know the preferred method is scatter/gather, but this is not possible in my situation as there is a hardware chip that translated the serial protocol into a DMA beyond my control, where the only thing that I can control is adding an offset to the incoming addresses (ie, address zero as seen from the external system can be mapped to address 0x700000000 on the local bus). Since this is a one-off lab machine I think the fastest/easiest approach would be to use mem=28GB boot configuration parameter. I have this working fine, but the next step to access that memory from virtual space is where I am having problems. Here is my code condensed to the relevant components: In the kernel module: size_t len = 0x100000000ULL; // 4GB size_t phys = 0x700000000ULL; // 28GB size_t virt = ioremap_nocache( phys, len ); // address not usable via direct reference size_t bus = (size_t)virt_to_bus( (void*)virt ); // this should be the same as phys for x86-64, shouldn't it? 
// OLD WAY /*size_t len = 0x400000; // 4MB size_t bus; size_t virt = dma_alloc_coherent( devHandle, len, &bus, GFP_ATOMIC ); size_t phys = (size_t)virt_to_phys( (void*)virt );*/ In the application: // Attempt to make a usable virtual pointer u32 pSize = sysconf(_SC_PAGESIZE); void* mapAddr = mmap(0, len+(phys%pSize), PROT_READ|PROT_WRITE, MAP_SHARED, devHandle, phys-(phys%pSize)); virt = (size_t)mapAddr + (phys%pSize); // do DMA to 0x700000000 bus address printf("Value %x\n", *((u32*)virt)); // this is returning zero Another interesting thing is that before doing all of this, the physical address returned from dma_alloc_coherent is greater than the amount of RAM on the system(0x83d000000). I thought that in x86 the RAM will always be the lowest addresses and therefore I would expect an address less than 32GB. Any help would be appreciated.
c++, linux, redhat, dma, pci-e
10
2,645
1
https://stackoverflow.com/questions/33817481/large-pcie-dma-linux-x86-64
15,719,605
mysql_install_db giving error
I have downloaded the mysql-5.1.38-linux-x86_64-glibc23.tar.gz from here and then I have executed the following commands groupadd mysql useradd -g mysql mysql123 cp mysql-5.1.38-linux-x86_64-glibc23.tar.gz /home/mysql123/ su - mysql123 tar -zxvf mysql-5.1.38-linux-x86_64-glibc23.tar.gz mv mysql-5.1.38-linux-x86_64-glibc23 mysql mkdir tmp cd mysql/ mv support-files/my-medium.cnf my.cnf cp support-files/mysql.server bin/ and then I have edited the my.cnf and set the basedir and datadir to /home/mysql123/mysql and /home/mysql123/mysql/data and innodb_home_dir and logfile directory to datadir Now edited mysql.server and set the datadir and basedir in them properly and then initiated mysql_install_db as [mysql123@localhost mysql]$ ./scripts/mysql_install_db ./scripts/mysql_install_db: line 244: ./bin/my_print_defaults: cannot execute binary file Neither host '127.0.0.1' nor 'localhost' could be looked up with ./bin/resolveip Please configure the 'hostname' command to return a correct hostname. If you want to solve this at a later stage, restart this script with the --force option on seeing the error I thought it may be confused with basedir and executed the same as below [mysql123@localhost mysql]$ ./scripts/mysql_install_db --user=mysql123 --basedir=/home/mysql123/mysql ./scripts/mysql_install_db: line 244: ./bin/my_print_defaults: cannot execute binary file Neither host '127.0.0.1' nor 'localhost' could be looked up with ./bin/resolveip Please configure the 'hostname' command to return a correct hostname. If you want to solve this at a later stage, restart this script with the --force option I am not getting what is going on internally that shows this kind of message, and I am sure that I have enough disk space ( df -h ) and I have proper ownership ( chown mysq123:mysql /home/mysql123/ -R ) and proper permissions ( chmod 755 .
) and the lines in mysql_install_db are like below. Any help to solve this problem would be very useful ( and I have to follow the same installation process). I am using redhat 6
mysql_install_db giving error I have downloaded the mysql-5.1.38-linux-x86_64-glibc23.tar.gz from here and then I have executed the following commands groupadd mysql useradd -g mysql mysql123 cp mysql-5.1.38-linux-x86_64-glibc23.tar.gz /home/mysql123/ su - mysql123 tar -zxvf mysql-5.1.38-linux-x86_64-glibc23.tar.gz mv mysql-5.1.38-linux-x86_64-glibc23 mysql mkdir tmp cd mysql/ mv support-files/my-medium.cnf my.cnf cp support-files/mysql.server bin/ and then I have edited the my.cnf and set the basedir and datadir to /home/mysql123/mysql and /home/mysql123/mysql/data and innodb_home_dir and logfile directory to datadir Now edited mysql.server and set the datadir and basedir in them properly and then initiated mysql_install_db as [mysql123@localhost mysql]$ ./scripts/mysql_install_db ./scripts/mysql_install_db: line 244: ./bin/my_print_defaults: cannot execute binary file Neither host '127.0.0.1' nor 'localhost' could be looked up with ./bin/resolveip Please configure the 'hostname' command to return a correct hostname. If you want to solve this at a later stage, restart this script with the --force option on seeing the error I thought it may be confused with basedir and executed the same as below [mysql123@localhost mysql]$ ./scripts/mysql_install_db --user=mysql123 --basedir=/home/mysql123/mysql ./scripts/mysql_install_db: line 244: ./bin/my_print_defaults: cannot execute binary file Neither host '127.0.0.1' nor 'localhost' could be looked up with ./bin/resolveip Please configure the 'hostname' command to return a correct hostname. If you want to solve this at a later stage, restart this script with the --force option I am not getting what is going on internally that shows this kind of message, and I am sure that I have enough disk space ( df -h ) and I have proper ownership ( chown mysq123:mysql /home/mysql123/ -R ) and proper permissions ( chmod 755 .
) and the lines in mysql_install_db are like below. Any help to solve this problem would be very useful ( and I have to follow the same installation process). I am using redhat 6
mysql, database, installation, redhat, database-administration
10
16,535
5
https://stackoverflow.com/questions/15719605/mysql-install-db-giving-error
15,660,887
Detect host operating system distro in chef-solo deploy bash script
When deploying a chef-solo setup you need to switch between using sudo or not, e.g.: bash install.sh and sudo bash install.sh , depending on the distro on the host server. How can this be automated?
Detect host operating system distro in chef-solo deploy bash script When deploying a chef-solo setup you need to switch between using sudo or not, e.g.: bash install.sh and sudo bash install.sh , depending on the distro on the host server. How can this be automated?
linux, bash, ubuntu, redhat, chef-solo
9
13,717
2
https://stackoverflow.com/questions/15660887/detect-host-operating-system-distro-in-chef-solo-deploy-bash-script
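One way to automate the decision (a sketch, not from the thread): branch on the distro ID that modern systems expose in /etc/os-release. The parser below is written against a string so it can be exercised without a real /etc/os-release; the sample contents are illustrative.

```shell
# Sketch: extract the distro ID from os-release-style text. On a real host
# you would simply do:  . /etc/os-release; echo "$ID"
distro_id() {
    printf '%s\n' "$1" | sed -n 's/^ID=//p' | tr -d '"'
}

sample='NAME="Red Hat Enterprise Linux Server"
ID="rhel"
VERSION_ID="7.9"'

distro_id "$sample"   # -> rhel
```

The deploy script can then pick its prefix, e.g. `case $(distro_id ...) in rhel|centos) sudo bash install.sh ;; *) bash install.sh ;; esac`; on older RHEL without /etc/os-release, /etc/redhat-release is the usual fallback.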
39,119,472
Rename file in docker container
I'm having a weird error when I try to run a simple script in a docker container on a redhat machine, this is the Docker file From tomcat:7.0.70-jre7 ENV CLIENTNAME geocontact ADD tomcat-users.xml /usr/local/tomcat/conf/ ADD app.war /usr/local/tomcat/webapps/ COPY app.sh / ENTRYPOINT ["/app.sh"] and app.sh is the script that causes the problem "only on redhat" #!/bin/bash set -e mv /usr/local/tomcat/webapps/app.war /usr/local/tomcat/webapps/client1.war catalina.sh run and the error message : mv cannot move '/usr/local/tomcat/webapps/app.war' to a subdirectory of itself, '/usr/local/tomcat/webapps/client1.war' a screenshot for the error and this happens only on redhat; I run the same image on ubuntu and centos with no problems.
Rename file in docker container I'm having a weird error when I try to run a simple script in a docker container on a redhat machine, this is the Docker file From tomcat:7.0.70-jre7 ENV CLIENTNAME geocontact ADD tomcat-users.xml /usr/local/tomcat/conf/ ADD app.war /usr/local/tomcat/webapps/ COPY app.sh / ENTRYPOINT ["/app.sh"] and app.sh is the script that causes the problem "only on redhat" #!/bin/bash set -e mv /usr/local/tomcat/webapps/app.war /usr/local/tomcat/webapps/client1.war catalina.sh run and the error message : mv cannot move '/usr/local/tomcat/webapps/app.war' to a subdirectory of itself, '/usr/local/tomcat/webapps/client1.war' a screenshot for the error and this happens only on redhat; I run the same image on ubuntu and centos with no problems.
linux, docker, redhat
9
55,790
3
https://stackoverflow.com/questions/39119472/rename-file-in-docker-container
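The usual workaround when `mv` fails this way inside a container (often a storage-driver rename quirk on the host rather than anything in the script itself) is to copy and then delete instead of renaming. A throwaway sketch; the real app.sh would use the webapps paths from the question:

```shell
# Sketch of the copy-then-delete workaround for a failing in-container mv.
# Demonstrated on a temp directory; app.war is an empty stand-in file.
dir=$(mktemp -d)
: > "$dir/app.war"

cp "$dir/app.war" "$dir/client1.war" && rm "$dir/app.war"

ls "$dir"   # -> client1.war
```

In the Dockerfile an alternative is to avoid the runtime rename entirely: `ADD app.war /usr/local/tomcat/webapps/client1.war` gives the file its final name at build time.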
64,381,744
AWS ECR Login with podman
Good morning/afternoon/night! Can you help me, please? I'm working with RHEL 8.2 and this version doesn't support Docker. I installed Podman and everything was OK until I used the following command: $(aws ecr get-login --no-include-email --region us-east-1) But it doesn't work, because it's from Docker (I thought it was from the AWS CLI). The error is: # $(aws ecr get-login --no-include-email --region us-east-1) -bash: docker: command not found I've been searching for an answer and some people used a command like this: podman login -u AWS -p .... But I tried some flags and the image, but nothing is working! What is the equivalent command for podman? Thanks!
AWS ECR Login with podman Good morning/afternoon/night! Can you help me, please? I'm working with RHEL 8.2 and this version doesn't support Docker. I installed Podman and everything was OK until I used the following command: $(aws ecr get-login --no-include-email --region us-east-1) But it doesn't work, because it's from Docker (I thought it was from the AWS CLI). The error is: # $(aws ecr get-login --no-include-email --region us-east-1) -bash: docker: command not found I've been searching for an answer and some people used a command like this: podman login -u AWS -p .... But I tried some flags and the image, but nothing is working! What is the equivalent command for podman? Thanks!
amazon-web-services, redhat, amazon-ecr, podman
9
15,455
3
https://stackoverflow.com/questions/64381744/aws-ecr-login-with-podman
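The modern equivalent (a sketch): AWS CLI v2 removed `get-login` and replaced it with `get-login-password`, which pipes cleanly into `podman login --password-stdin`. Since it can't run without credentials, the command is built as a string here so its shape can be inspected; the account ID is a placeholder.

```shell
# Sketch: build the AWS CLI v2 replacement for `$(aws ecr get-login ...)`.
# The 12-digit account ID below is a placeholder.
ecr_login_cmd() {
    account=$1 region=$2
    printf 'aws ecr get-login-password --region %s | podman login --username AWS --password-stdin %s.dkr.ecr.%s.amazonaws.com\n' \
        "$region" "$account" "$region"
}

ecr_login_cmd 123456789012 us-east-1
```

`get-login-password` prints only the token, so nothing docker-specific is evaluated; the registry hostname has to be given explicitly, which is why the account and region appear twice.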
38,926,063
How do you remove the deploymentConfig, image streams, etc using Openshift OC?
After creating a new app using oc new-app location/nameofapp , many things are created: a deploymentConfig, an imagestream, a service, etc. I know you can run oc delete <label> . I would like to know how to delete all of these given the label.
How do you remove the deploymentConfig, image streams, etc using Openshift OC? After creating a new app using oc new-app location/nameofapp , many things are created: a deploymentConfig, an imagestream, a service, etc. I know you can run oc delete <label> . I would like to know how to delete all of these given the label.
docker, openshift, redhat, kubernetes, openshift-origin
9
16,301
1
https://stackoverflow.com/questions/38926063/how-do-you-remove-the-deploymentconfig-image-streams-etc-using-openshift-oc
38,325,274
Installing multiple packages with chef
When I try to install multiple packages with a wildcard name I get the following error: * yum_package[mysql-server] action install (up to date) * yum_package[mysql*] action install * No candidate version available for mysql* ============================================================================ ==== Error executing action install on resource 'yum_package[mysql*]' ============================================================================ ==== Recipe code is: package 'mysql-server' do action :install end package 'mysql*' do action :install end
Installing multiple packages with chef When I try to install multiple packages with a wildcard name I get the following error: * yum_package[mysql-server] action install (up to date) * yum_package[mysql*] action install * No candidate version available for mysql* ============================================================================ ==== Error executing action install on resource 'yum_package[mysql*]' ============================================================================ ==== Recipe code is: package 'mysql-server' do action :install end package 'mysql*' do action :install end
package, chef-infra, redhat
9
12,648
2
https://stackoverflow.com/questions/38325274/installing-multiples-packages-with-chef
19,933,077
Running "npm" returns "Error: Cannot find module 'inherits'"
module.js:340 throw err; ^ Error: Cannot find module 'inherits' at Function.Module._resolveFilename (module.js:338:15) at Function.Module._load (module.js:280:25) at Module.require (module.js:364:17) at require (module.js:380:17) at Object.<anonymous> (/usr/lib/node_modules/npmconf/npmconf.js:3:16) at Module._compile (module.js:456:26) at Object.Module._extensions..js (module.js:474:10) at Module.load (module.js:356:32) at Function.Module._load (module.js:312:12) at Module.require (module.js:364:17)
Running "npm" returns "Error: Cannot find module 'inherits'" module.js:340 throw err; ^ Error: Cannot find module 'inherits' at Function.Module._resolveFilename (module.js:338:15) at Function.Module._load (module.js:280:25) at Module.require (module.js:364:17) at require (module.js:380:17) at Object.<anonymous> (/usr/lib/node_modules/npmconf/npmconf.js:3:16) at Module._compile (module.js:456:26) at Object.Module._extensions..js (module.js:474:10) at Module.load (module.js:356:32) at Function.Module._load (module.js:312:12) at Module.require (module.js:364:17)
node.js, npm, redhat, yum
9
17,881
9
https://stackoverflow.com/questions/19933077/running-npm-returns-error-cannot-find-module-inherits
27,491,881
linux command to lookup total disk and harddrive numbers
is there a command in bash that can give you the total number of disk space/harddrive numbers. I know the df command is very helpful but the output is too verbose: # df -h Filesystem Size Used Avail Use% Mounted on /dev/sda4 721G 192G 492G 29% / tmpfs 129G 112K 129G 1% /dev/shm /dev/sda1 194M 92M 93M 50% /boot /dev/sdj1 917G 547M 870G 1% /data10 /dev/sdk1 917G 214G 657G 25% /data11 /dev/sdl1 917G 200M 871G 1% /data12 /dev/sdm1 917G 200M 871G 1% /data13 /dev/sdn1 917G 200M 871G 1% /data14 /dev/sdo1 917G 200M 871G 1% /data15 /dev/sdp1 917G 16G 855G 2% /data16 /dev/sdb1 917G 4.6G 866G 1% /data2 /dev/sdc1 917G 74G 797G 9% /data3 /dev/sdd1 917G 200M 871G 1% /data4 /dev/sde1 917G 200M 871G 1% /data5 /dev/sdf1 917G 200M 871G 1% /data6 /dev/sdg1 917G 764G 107G 88% /data7 /dev/sdh1 917G 51G 820G 6% /data8 /dev/sdi1 917G 19G 853G 3% /data9 /dev/sda2 193G 53G 130G 30% /home cm_processes 129G 46M 129G 1% /var/run/cloudera-scm-agent/process I basically want '16TB' in the end, is there a command handy or I have to write some program to calculate the total disk based on the output from df.
linux command to lookup total disk and harddrive numbers is there a command in bash that can give you the total number of disk space/harddrive numbers. I know the df command is very helpful but the output is too verbose: # df -h Filesystem Size Used Avail Use% Mounted on /dev/sda4 721G 192G 492G 29% / tmpfs 129G 112K 129G 1% /dev/shm /dev/sda1 194M 92M 93M 50% /boot /dev/sdj1 917G 547M 870G 1% /data10 /dev/sdk1 917G 214G 657G 25% /data11 /dev/sdl1 917G 200M 871G 1% /data12 /dev/sdm1 917G 200M 871G 1% /data13 /dev/sdn1 917G 200M 871G 1% /data14 /dev/sdo1 917G 200M 871G 1% /data15 /dev/sdp1 917G 16G 855G 2% /data16 /dev/sdb1 917G 4.6G 866G 1% /data2 /dev/sdc1 917G 74G 797G 9% /data3 /dev/sdd1 917G 200M 871G 1% /data4 /dev/sde1 917G 200M 871G 1% /data5 /dev/sdf1 917G 200M 871G 1% /data6 /dev/sdg1 917G 764G 107G 88% /data7 /dev/sdh1 917G 51G 820G 6% /data8 /dev/sdi1 917G 19G 853G 3% /data9 /dev/sda2 193G 53G 130G 30% /home cm_processes 129G 46M 129G 1% /var/run/cloudera-scm-agent/process I basically want '16TB' in the end, is there a command handy or I have to write some program to calculate the total disk based on the output from df.
linux, bash, redhat
9
3,153
3
https://stackoverflow.com/questions/27491881/linux-command-to-lookup-total-disk-and-harddrive-numbers
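One way to get a single total (a sketch): sum the size column of `df` with awk. It is shown here against a small sample of `df -k`-style output (sizes in 1K blocks) so the arithmetic is exact; on a live box you would pipe `df -k` straight in.

```shell
# Sketch: sum the 1K-blocks column of df output and print one total.
# Sample input stands in for a real `df -k` run.
sample='Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda1 1048576 524288 524288 50% /
/dev/sdb1 2097152 0 2097152 0% /data'

printf '%s\n' "$sample" |
    awk 'NR>1 {kb += $2} END {printf "%.1f GiB total\n", kb/1024/1024}'
# -> 3.0 GiB total
```

With GNU coreutils you can skip the awk entirely: `df -h --total` appends a total row. Either way, consider filtering out pseudo-filesystems like tmpfs first so they don't inflate the sum.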
15,907,807
linux RSS from ps RES from TOP
Linux : RedHat/Fedora What is the difference between these memory values: RES from the top command and RSS from the ps command?
linux RSS from ps RES from TOP Linux : RedHat/Fedora What is the difference between these memory values: RES from the top command and RSS from the ps command?
linux, memory, command, redhat, ps
9
11,132
1
https://stackoverflow.com/questions/15907807/linux-rss-from-ps-res-from-top
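Context for the question: both numbers report the same kernel counter, the resident set size; `top` labels it RES and `ps` labels it RSS (both in kilobytes). A quick way to see the raw value the tools read, here for the current shell process:

```shell
# top's RES and ps's RSS both come from the kernel's resident-set-size
# counter, exposed per-process in /proc. For the current shell:
grep VmRSS /proc/self/status

# ps reads the same quantity (assuming a procps-style ps):
# ps -o rss= -p $$
```

The two tools can still show slightly different values for the same process simply because each samples the counter at its own moment.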
10,081,062
Django virtual host setup. Apache mod_wsgi
I am hoping there is a simple answer to my question as I am not the most experienced with python and Apache. I am trying to hook up Apache with mod_wsgi. I have used a virtual host to do so. See below: <VirtualHost *:80> ServerAdmin admin@example.com ServerName testserver.com/django #DocumentRoot / WSGIScriptAlias / /home/mycode/mysite/scripts/django.wsgi Alias /media/ /home/mycode/mysite/mysite/media/ Alias /adminmedia/ /opt/python2.7/lib/python2.7/site-packages/django/contrib/admin/media/ <Directory "/home/mycode/mysite/mysite/media"> Order deny,allow Allow from all </Directory> </VirtualHost> This works for my django project: when I go to testserver.com, instead of my php index page I get my django project. What I am looking for is help with allowing my php projects in /var/www/html/ and my django projects to coexist. I am trying to make it so that to reach my django project I type testserver.com/django Any help or guidance is greatly appreciated :) Thanks!
Django virtual host setup. Apache mod_wsgi I am hoping there is a simple answer to my question as I am not the most experienced with python and Apache. I am trying to hook up Apache with mod_wsgi. I have used a virtual host to do so. See below: <VirtualHost *:80> ServerAdmin admin@example.com ServerName testserver.com/django #DocumentRoot / WSGIScriptAlias / /home/mycode/mysite/scripts/django.wsgi Alias /media/ /home/mycode/mysite/mysite/media/ Alias /adminmedia/ /opt/python2.7/lib/python2.7/site-packages/django/contrib/admin/media/ <Directory "/home/mycode/mysite/mysite/media"> Order deny,allow Allow from all </Directory> </VirtualHost> This works for my django project: when I go to testserver.com, instead of my php index page I get my django project. What I am looking for is help with allowing my php projects in /var/www/html/ and my django projects to coexist. I am trying to make it so that to reach my django project I type testserver.com/django Any help or guidance is greatly appreciated :) Thanks!
django, apache, mod-wsgi, python-2.7, redhat
9
12,019
1
https://stackoverflow.com/questions/10081062/django-virtual-host-setup-apache-mod-wsgi
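A note on the question above: ServerName must be a bare hostname (it cannot carry a /django path), and the coexistence the asker wants comes from mounting the WSGI application at the /django prefix rather than at the root, so every other URL falls through to the PHP DocumentRoot. A config sketch using the paths from the question:

```apache
<VirtualHost *:80>
    ServerAdmin admin@example.com
    ServerName testserver.com
    # PHP projects keep being served from the normal document root ...
    DocumentRoot /var/www/html
    # ... while Django is mounted only under /django.
    WSGIScriptAlias /django /home/mycode/mysite/scripts/django.wsgi
    Alias /media/ /home/mycode/mysite/mysite/media/
    <Directory "/home/mycode/mysite/mysite/media">
        Order deny,allow
        Allow from all
    </Directory>
</VirtualHost>
```

With a non-root mount point, mod_wsgi strips the /django prefix from the path before handing the request to Django.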
4,477,328
.bashrc not read when shell script is invoked from desktop shortcut
I have a simple problem understanding a behavior in linux. In short, on linux if i invoke my sh script from a 'Desktop Shortcut' then the script cannot see the latest environment variables (set in bashrc). So i was wondering that in what scope is this shell script located ? To create a testcase and reproduce: Create a simple shell script 'testme.sh' : #!/bin/sh echo "Hi This is a test script checking the env var"; echo "TESTVAR = $TESTVAR"; read in echo "Done"; create a desktop shortcut for the script above. cd ~/Desktop vi mytest-desktop.desktop //Contents for mytest-desktop.desktop are : [Desktop Entry] Version=1.0 Type=Application Name=TestAbhishek Exec=/home/abhishek/test/hello.sh Terminal=true Now update your .bashrc file to set the variable export TESTVAR=test_this_variable Open a brand new terminal and execute the script using it's complete path like '~/testme.sh' //This can see the value for variable 'TESTVAR' from the .bashrc file. Now, simply double click and execute the Desktop shortcut. //This should open a terminal and print out value for 'TESTVAR' as blank. //So my question is, who is the parent for the terminal opened by this shortcut? I've tried this on RHL. Im looking for a solution or a w/a for this problem, hope someone can help soon. Thanks, Abhishek.
.bashrc not read when shell script is invoked from desktop shortcut I have a simple problem understanding a behavior in linux. In short, on linux if i invoke my sh script from a 'Desktop Shortcut' then the script cannot see the latest environment variables (set in bashrc). So i was wondering that in what scope is this shell script located ? To create a testcase and reproduce: Create a simple shell script 'testme.sh' : #!/bin/sh echo "Hi This is a test script checking the env var"; echo "TESTVAR = $TESTVAR"; read in echo "Done"; create a desktop shortcut for the script above. cd ~/Desktop vi mytest-desktop.desktop //Contents for mytest-desktop.desktop are : [Desktop Entry] Version=1.0 Type=Application Name=TestAbhishek Exec=/home/abhishek/test/hello.sh Terminal=true Now update your .bashrc file to set the variable export TESTVAR=test_this_variable Open a brand new terminal and execute the script using it's complete path like '~/testme.sh' //This can see the value for variable 'TESTVAR' from the .bashrc file. Now, simply double click and execute the Desktop shortcut. //This should open a terminal and print out value for 'TESTVAR' as blank. //So my question is, who is the parent for the terminal opened by this shortcut? I've tried this on RHL. Im looking for a solution or a w/a for this problem, hope someone can help soon. Thanks, Abhishek.
linux, bash, redhat, shortcut-file
9
5,392
1
https://stackoverflow.com/questions/4477328/bashrc-not-read-when-shell-script-is-invoked-from-desktop-shortcut
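On the question above: ~/.bashrc is read by interactive bash shells, while a .desktop launcher runs the Exec command as a child of the desktop session (not of any terminal's shell), so variables exported only in .bashrc never reach it. One workaround, sketched with the TESTVAR name from the question, is to source the file explicitly at the top of the script:

```shell
#!/bin/sh
# Explicitly load the user's .bashrc so variables exported there are visible
# even when the script is launched from a .desktop shortcut (whose parent is
# the desktop session, not an interactive shell).
# Note: if your .bashrc uses bash-only syntax, use #!/bin/bash instead.
[ -f "$HOME/.bashrc" ] && . "$HOME/.bashrc"
echo "TESTVAR = $TESTVAR"
```

Alternatively, export the variable from a file that login shells do read (e.g. ~/.bash_profile or ~/.profile, depending on the distribution).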
2,364,563
Does ACL on Linux impact performance
We are planning to implement ACL on our Linux platform. Only one particular group is going to come under ACL. This group would have at the max 20 users. All of the restrictions would be at directory level (not at file name level) Would this show any impact on the server's performance/responsiveness?
Does ACL on Linux impact performance We are planning to implement ACL on our Linux platform. Only one particular group is going to come under ACL. This group would have at the max 20 users. All of the restrictions would be at directory level (not at file name level) Would this show any impact on the server's performance/responsiveness?
linux, performance, security, acl, redhat
9
2,301
2
https://stackoverflow.com/questions/2364563/does-acl-on-linux-impact-performance
67,539,305
Keycloak Docker image basic unix commands not available
I have setup my Keycloak identification server by running a .yml file that uses the docker image jboss/keycloak:9.0.0 . Now I want get inside the container and modify some files in order to make some testing. Unfortunately, after I got inside the running container, I realized that some very basic UNIX commands like sudo or vi (and many many more) aren't found (as well as commands like apt-get or yum which I used to download command packages and failed). According to this question , it seems that the underlying OS of the container ( Redhat Universal Base Image ) uses the command microdnf to manage software, but unfortunately when I tried to use this command to do any action I got the following message: error: Failed to create: /var/cache/yum/metadata Could you please propose any workaround for my case? I just need to use a text editor command like vi , and root privileges for my user (so commands like sudo , su , or chmod ). Thanks in advance.
Keycloak Docker image basic unix commands not available I have setup my Keycloak identification server by running a .yml file that uses the docker image jboss/keycloak:9.0.0 . Now I want get inside the container and modify some files in order to make some testing. Unfortunately, after I got inside the running container, I realized that some very basic UNIX commands like sudo or vi (and many many more) aren't found (as well as commands like apt-get or yum which I used to download command packages and failed). According to this question , it seems that the underlying OS of the container ( Redhat Universal Base Image ) uses the command microdnf to manage software, but unfortunately when I tried to use this command to do any action I got the following message: error: Failed to create: /var/cache/yum/metadata Could you please propose any workaround for my case? I just need to use a text editor command like vi , and root privileges for my user (so commands like sudo , su , or chmod ). Thanks in advance.
docker, keycloak, redhat
9
8,621
2
https://stackoverflow.com/questions/67539305/keycloak-docker-image-basic-unix-commands-not-available
23,523,634
Compiling C++11 on g++ 4.4.7 in Red Hat linux
I have already tried: g++ -std=c++11 my_file.cpp -o my_prog g++ -std=c++0x ... g++ -std=gnu++0x ... and I keep getting this message: error: unrecognized command line option
Compiling C++11 on g++ 4.4.7 in Red Hat linux I have already tried: g++ -std=c++11 my_file.cpp -o my_prog g++ -std=c++0x ... g++ -std=gnu++0x ... and I keep getting this message: error: unrecognized command line option
c++11, compilation, g++, redhat
9
27,039
2
https://stackoverflow.com/questions/23523634/compiling-c11-on-g-4-4-7-in-red-hat-linux
12,460,694
Vagrant and Red Hat Enterprise Licensing
Our team is starting to use Vagrant for development on Mac OS X machines so we can better simulate our Red Hat Enterprise Linux production environment. Our operations group says our Red Hat License only covers instances being run on our VMWare cluster. How do other people deal with RHEL licensing using Vagrant?
Vagrant and Red Hat Enterprise Licensing Our team is starting to use Vagrant for development on Mac OS X machines so we can better simulate our Red Hat Enterprise Linux production environment. Our operations group says our Red Hat License only covers instances being run on our VMWare cluster. How do other people deal with RHEL licensing using Vagrant?
redhat, vagrant, rhel
9
4,247
4
https://stackoverflow.com/questions/12460694/vagrant-and-red-hat-enterprise-licensing
65,333,484
Using a keystore with curl
I would like to execute the below curl command and specify my own key store. I tried using --cacert option and specified the path of the cacert jks. curl --ssl-reqd --url 'smtp://mailhost.myorg.com:587' --user 'usrid:pwd' --mail-from 'fromaddr@myorg.com' --mail-rcpt 'toaddr@myorg.com' --upload-file mail.txt -vv --cacert /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.272.b10-1.el7_9.x86_64/jre/lib/security/cacerts But it resulted in an error. curl: (77) Problem with the SSL CA cert (path? access rights?)
Using a keystore with curl I would like to execute the below curl command and specify my own key store. I tried using --cacert option and specified the path of the cacert jks. curl --ssl-reqd --url 'smtp://mailhost.myorg.com:587' --user 'usrid:pwd' --mail-from 'fromaddr@myorg.com' --mail-rcpt 'toaddr@myorg.com' --upload-file mail.txt -vv --cacert /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.272.b10-1.el7_9.x86_64/jre/lib/security/cacerts But it resulted in an error. curl: (77) Problem with the SSL CA cert (path? access rights?)
shell, curl, openshift, redhat
9
59,978
3
https://stackoverflow.com/questions/65333484/using-a-keystore-with-curl
38,091,418
Installation of R 3.3.1 in Red Hat. LZMA version >=5.0.3 required
I am installing R 3.3.1 from source. During ./configure --enable-R-shlib execution, error pops up: checking for lzma_version_number in -llzma... yes checking lzma.h usability... yes checking lzma.h presence... yes checking for lzma.h... yes checking if lzma version >= 5.0.3... no configure: error: "liblzma library and headers are required" I see that there is no LZMA version 5.0.3 available and is currently available through XZ Utils . Tukaani XZ Utils I installed the XZ 5.2.2 but the error is still showing up.
Installation of R 3.3.1 in Red Hat. LZMA version >=5.0.3 required I am installing R 3.3.1 from source. During ./configure --enable-R-shlib execution, error pops up: checking for lzma_version_number in -llzma... yes checking lzma.h usability... yes checking lzma.h presence... yes checking for lzma.h... yes checking if lzma version >= 5.0.3... no configure: error: "liblzma library and headers are required" I see that there is no LZMA version 5.0.3 available and is currently available through XZ Utils . Tukaani XZ Utils I installed the XZ 5.2.2 but the error is still showing up.
r, redhat, lzma, xz
9
7,113
1
https://stackoverflow.com/questions/38091418/installation-of-r-3-3-1-in-red-hat-lzma-version-5-0-3-required
20,236,726
Unable to install Devtools package for R studio mounted on linux redhat server
I'm unable to install the devtools package in R Studio on a redhat linux server. These error messages showed up: ERROR: configuration failed for package ‘RCurl’ * removing ‘/home/xx/R/x86_64-redhat-linux-gnu-library/3.0/RCurl’ Warning in install.packages : installation of package ‘RCurl’ had non-zero exit status ERROR: dependency ‘RCurl’ is not available for package ‘httr’ * removing ‘/home/xx/R/x86_64-redhat-linux-gnu-library/3.0/httr’ Warning in install.packages : installation of package ‘httr’ had non-zero exit status ERROR: dependencies ‘httr’, ‘RCurl’ are not available for package ‘devtools’ I can't install the RCurl package too. I've tried to install the libcurl libraries too: sudo yum install libcurl4-openssl-dev sudo yum install libcurl4-gnutls-dev But the system says no such packages are available available. Is there any other way to install the devtools package? Or how can I resolve the Rcurl installation issue?
Unable to install Devtools package for R studio mounted on linux redhat server I'm unable to install the devtools package in R Studio on a redhat linux server. These error messages showed up: ERROR: configuration failed for package ‘RCurl’ * removing ‘/home/xx/R/x86_64-redhat-linux-gnu-library/3.0/RCurl’ Warning in install.packages : installation of package ‘RCurl’ had non-zero exit status ERROR: dependency ‘RCurl’ is not available for package ‘httr’ * removing ‘/home/xx/R/x86_64-redhat-linux-gnu-library/3.0/httr’ Warning in install.packages : installation of package ‘httr’ had non-zero exit status ERROR: dependencies ‘httr’, ‘RCurl’ are not available for package ‘devtools’ I can't install the RCurl package too. I've tried to install the libcurl libraries too: sudo yum install libcurl4-openssl-dev sudo yum install libcurl4-gnutls-dev But the system says no such packages are available available. Is there any other way to install the devtools package? Or how can I resolve the Rcurl installation issue?
linux, r, redhat, rcurl, devtools
9
8,175
1
https://stackoverflow.com/questions/20236726/unable-to-install-devtools-package-for-r-studio-mounted-on-linux-redhat-server
45,985,641
Can TensorFlow run with multiple CPUs (no GPUs)?
I'm trying to learn distributed TensorFlow. Tried out a piece code as explained here : with tf.device("/cpu:0"): W = tf.Variable(tf.zeros([784, 10])) b = tf.Variable(tf.zeros([10])) with tf.device("/cpu:1"): y = tf.nn.softmax(tf.matmul(x, W) + b) loss = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1])) Getting the following error: tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot assign a device for operation 'MatMul': Operation was explicitly assigned to /device:CPU:1 but available devices are [ /job:localhost/replica:0/task:0/cpu:0 ]. Make sure the device specification refers to a valid device. [[Node: MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/device:CPU:1"](Placeholder, Variable/read)]] Meaning that TensorFlow does not recognize CPU:1 . I'm running on a RedHat server with 40 CPUs ( cat /proc/cpuinfo | grep processor | wc -l ). Any ideas?
Can TensorFlow run with multiple CPUs (no GPUs)? I'm trying to learn distributed TensorFlow. Tried out a piece code as explained here : with tf.device("/cpu:0"): W = tf.Variable(tf.zeros([784, 10])) b = tf.Variable(tf.zeros([10])) with tf.device("/cpu:1"): y = tf.nn.softmax(tf.matmul(x, W) + b) loss = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1])) Getting the following error: tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot assign a device for operation 'MatMul': Operation was explicitly assigned to /device:CPU:1 but available devices are [ /job:localhost/replica:0/task:0/cpu:0 ]. Make sure the device specification refers to a valid device. [[Node: MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/device:CPU:1"](Placeholder, Variable/read)]] Meaning that TensorFlow does not recognize CPU:1 . I'm running on a RedHat server with 40 CPUs ( cat /proc/cpuinfo | grep processor | wc -l ). Any ideas?
python, python-2.7, tensorflow, redhat
9
9,005
2
https://stackoverflow.com/questions/45985641/can-tensorflow-run-with-multiple-cpus-no-gpus
29,789,837
Difference between Java and Oracle Java for Redhat
I want to update my jdk for some security reasons in Redhat system and updated to jdk7u79 successfully. Redhat has published some java vulnerabilities in their site with the name Oracle java for RHEL Server . Do I need to update my jdk as mentioned in the RHEL site? Is jdk from oracle site is different from Oracle java for RHEL Server. Reference
Difference between Java and Oracle Java for Redhat I want to update my jdk for some security reasons in Redhat system and updated to jdk7u79 successfully. Redhat has published some java vulnerabilities in their site with the name Oracle java for RHEL Server . Do I need to update my jdk as mentioned in the RHEL site? Is jdk from oracle site is different from Oracle java for RHEL Server. Reference
java, redhat
9
7,630
1
https://stackoverflow.com/questions/29789837/difference-between-java-and-oracle-java-for-redhat
49,422,149
How to get the details of the user deleted in keycloak using AdminEvent
i have below code that gets executed when an admin is creating or deleting a user in the keycloak UI. Through the help of the adminEvent: [URL] Creating a user returns the user details via adminEvent.getRepresentation(). However when deleting a user returns me a null. This is also the same when deleting a role, deleting a group or deleting a user_session.(ResourceTypes) My question is how can i retrieve the deleted details? import org.keycloak.events.admin.AdminEvent; import org.keycloak.models.UserModel; public void handleResourceOperation(AdminEvent adminEvent, UserModel user) { MQMessage queueMessage = new MQMessage(); queueMessage.setIpAddress(adminEvent.getAuthDetails().getIpAddress()); queueMessage.setUsername(user.getUsername()); switch (adminEvent.getOperationType()) { case CREATE: LOGGER.info("OPERATION : CREATE USER"); LOGGER.info("USER Representation : " + adminEvent.getRepresentation()); String[] split = adminEvent.getRepresentation().split(","); queueMessage.setTransactionDetail("Created user " + split[0].substring(12)); sendQueueMessage(adminEvent, queueMessage); break; case DELETE: LOGGER.info("OPERATION : DELETE USER"); LOGGER.info("USER Representation : " + adminEvent.getRepresentation()); queueMessage.setTransactionDetail("User has been deleted."); sendQueueMessage(adminEvent, queueMessage); break; }
How to get the details of the user deleted in keycloak using AdminEvent i have below code that gets executed when an admin is creating or deleting a user in the keycloak UI. Through the help of the adminEvent: [URL] Creating a user returns the user details via adminEvent.getRepresentation(). However when deleting a user returns me a null. This is also the same when deleting a role, deleting a group or deleting a user_session.(ResourceTypes) My question is how can i retrieve the deleted details? import org.keycloak.events.admin.AdminEvent; import org.keycloak.models.UserModel; public void handleResourceOperation(AdminEvent adminEvent, UserModel user) { MQMessage queueMessage = new MQMessage(); queueMessage.setIpAddress(adminEvent.getAuthDetails().getIpAddress()); queueMessage.setUsername(user.getUsername()); switch (adminEvent.getOperationType()) { case CREATE: LOGGER.info("OPERATION : CREATE USER"); LOGGER.info("USER Representation : " + adminEvent.getRepresentation()); String[] split = adminEvent.getRepresentation().split(","); queueMessage.setTransactionDetail("Created user " + split[0].substring(12)); sendQueueMessage(adminEvent, queueMessage); break; case DELETE: LOGGER.info("OPERATION : DELETE USER"); LOGGER.info("USER Representation : " + adminEvent.getRepresentation()); queueMessage.setTransactionDetail("User has been deleted."); sendQueueMessage(adminEvent, queueMessage); break; }
java, spring, redhat, keycloak, keycloak-services
9
2,830
1
https://stackoverflow.com/questions/49422149/how-to-get-the-details-of-the-user-deleted-in-keycloak-using-adminevent
33,731,366
Error while importing Tensorflow in python2.7 in Red Hat release 6.6. 'GLIBC_2.17 not found'
This is essentially a repeat of question asked here . However, I am using Red Hat Version 6.6, which has glibc 2.12 (glibc 2.17, I think was introduced with RHEL ver 7). Is it possible to install tensorflow locally, without upgrading OS. (I don't have admin privileges). This is the error I am getting ImportError: /lib64/libc.so.6: version `GLIBC_2.17' not found (required by /data02/storage/kgupt33/.local/anaconda/lib/python2.7/site-packages/tensorflow/python/_pywrap_tensorflow.so)
Error while importing Tensorflow in python2.7 in Red Hat release 6.6. 'GLIBC_2.17 not found' This is essentially a repeat of question asked here . However, I am using Red Hat Version 6.6, which has glibc 2.12 (glibc 2.17, I think was introduced with RHEL ver 7). Is it possible to install tensorflow locally, without upgrading OS. (I don't have admin privileges). This is the error I am getting ImportError: /lib64/libc.so.6: version `GLIBC_2.17' not found (required by /data02/storage/kgupt33/.local/anaconda/lib/python2.7/site-packages/tensorflow/python/_pywrap_tensorflow.so)
python, python-2.7, redhat, glibc, tensorflow
9
6,722
1
https://stackoverflow.com/questions/33731366/error-while-importing-tensorflow-in-python2-7-in-red-hat-release-6-6-glibc-2-1
39,446,546
glusterfs volume creation failed - brick is already part of volume
In a cloud , we have a cluster of glusterfs nodes (participating in gluster volume) and clients (that mount to gluster volumes). These nodes are created using terraform hashicorp tool. Once the cluster is up and running, if we want to change the gluster machine configuration like increasing the compute size from 4 cpus to 8 cpus , terraform has the provision to recreate the nodes with new configuration.So the existing gluster nodes are destroyed and new instances are created but with the same ip. In the newly created instance , volume creation command fails saying brick is already part of volume. sudo gluster volume create VolName replica 2 transport tcp ip1:/mnt/ppshare/brick0 ip2:/mnt/ppshare/brick0 volume create: VolName: failed: /mnt/ppshare/brick0 is already part of a volume But no volumes are present in this instance. I understand if I have to expand or shrink volume, I can add or remove bricks from existing volume. Here, I'm changing the compute of the node and hence it has to be recreated. I don't understand why it should say brick is already part of volume as it is a new machine altogether. It would be very helpful if someone can explain why it says Brick is already part of volume and where it is storing the volume/brick information. So that I can recreate the volume successfully. I also tried the below steps from this link to clear the glusterfs volume related attributes from the mount but no luck. [URL] . apt-get install attr cd /glusterfs for i in `attr -lq .`; do setfattr -x trusted.$i .; done attr -lq /glusterfs (for testing, the output should be empty)
glusterfs volume creation failed - brick is already part of volume In a cloud , we have a cluster of glusterfs nodes (participating in gluster volume) and clients (that mount to gluster volumes). These nodes are created using terraform hashicorp tool. Once the cluster is up and running, if we want to change the gluster machine configuration like increasing the compute size from 4 cpus to 8 cpus , terraform has the provision to recreate the nodes with new configuration.So the existing gluster nodes are destroyed and new instances are created but with the same ip. In the newly created instance , volume creation command fails saying brick is already part of volume. sudo gluster volume create VolName replica 2 transport tcp ip1:/mnt/ppshare/brick0 ip2:/mnt/ppshare/brick0 volume create: VolName: failed: /mnt/ppshare/brick0 is already part of a volume But no volumes are present in this instance. I understand if I have to expand or shrink volume, I can add or remove bricks from existing volume. Here, I'm changing the compute of the node and hence it has to be recreated. I don't understand why it should say brick is already part of volume as it is a new machine altogether. It would be very helpful if someone can explain why it says Brick is already part of volume and where it is storing the volume/brick information. So that I can recreate the volume successfully. I also tried the below steps from this link to clear the glusterfs volume related attributes from the mount but no luck. [URL] . apt-get install attr cd /glusterfs for i in `attr -lq .`; do setfattr -x trusted.$i .; done attr -lq /glusterfs (for testing, the output should be empty)
redhat, glusterfs
8
28,087
3
https://stackoverflow.com/questions/39446546/glusterfs-volume-creation-failed-brick-is-already-part-of-volume
60,970,697
Docker install failing in linux with error [Errno 14] HTTPS Error 404 - Not Found
I am trying to install docker in linux [Redhat] box . But its failing with below error . Loaded plugins: product-id, search-disabled-repos, subscription-manager, susemanagerplugin, yumnotify This system is not registered with an entitlement server. You can use subscription-manager to register. [URL] [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. To address this issue please refer to the below knowledge base article [URL] If above article doesn't help to resolve this issue please open a ticket with Red Hat Support. One of the configured repositories failed (Docker Repository), and yum doesn't have enough cached data to continue. At this point the only safe thing yum can do is fail. There are a few ways to work "fix" this: 1. Contact the upstream for the repository and get them to fix the problem. 2. Reconfigure the baseurl/etc. for the repository, to point to a working upstream. This is most often useful if you are using a newer distribution release than is supported by the repository (and the packages for the previous distribution release still work). 3. Run the command with the repository temporarily disabled yum --disablerepo=dockerrepo ... 4. Disable the repository permanently, so yum won't use it by default. Yum will then just ignore the repository until you permanently enable it again or use --enablerepo for temporary usage: yum-config-manager --disable dockerrepo or subscription-manager repos --disable=dockerrepo 5. Configure the failing repository to be skipped, if it is unavailable. Note that yum will try to contact the repo. when it runs most commands, so will have to try and fail each time (and thus. yum will be be much slower). If it is a very temporary problem though, this is often a nice compromise: yum-config-manager --save --setopt=dockerrepo.skip_if_unavailable=true failure: repodata/repomd.xml from dockerrepo: [Errno 256] No more mirrors to try. [URL] [Errno 14] HTTPS Error 404 - Not Found I have gone through the multiple articles and tried different options but it didn't resolved the issue. Few of the articles and thing that i have tried are as [URL] [URL]
Docker install failing in linux with error [Errno 14] HTTPS Error 404 - Not Found I am trying to install docker in linux [Redhat] box . But its failing with below error . Loaded plugins: product-id, search-disabled-repos, subscription-manager, susemanagerplugin, yumnotify This system is not registered with an entitlement server. You can use subscription-manager to register. [URL] [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. To address this issue please refer to the below knowledge base article [URL] If above article doesn't help to resolve this issue please open a ticket with Red Hat Support. One of the configured repositories failed (Docker Repository), and yum doesn't have enough cached data to continue. At this point the only safe thing yum can do is fail. There are a few ways to work "fix" this: 1. Contact the upstream for the repository and get them to fix the problem. 2. Reconfigure the baseurl/etc. for the repository, to point to a working upstream. This is most often useful if you are using a newer distribution release than is supported by the repository (and the packages for the previous distribution release still work). 3. Run the command with the repository temporarily disabled yum --disablerepo=dockerrepo ... 4. Disable the repository permanently, so yum won't use it by default. Yum will then just ignore the repository until you permanently enable it again or use --enablerepo for temporary usage: yum-config-manager --disable dockerrepo or subscription-manager repos --disable=dockerrepo 5. Configure the failing repository to be skipped, if it is unavailable. Note that yum will try to contact the repo. when it runs most commands, so will have to try and fail each time (and thus. yum will be be much slower). If it is a very temporary problem though, this is often a nice compromise: yum-config-manager --save --setopt=dockerrepo.skip_if_unavailable=true failure: repodata/repomd.xml from dockerrepo: [Errno 256] No more mirrors to try. [URL] [Errno 14] HTTPS Error 404 - Not Found I have gone through the multiple articles and tried different options but it didn't resolved the issue. Few of the articles and thing that i have tried are as [URL] [URL]
linux, docker, redhat
8
29,297
5
https://stackoverflow.com/questions/60970697/docker-install-failing-in-linux-with-error-errno-14-https-error-404-not-foun
24,708,213
Install R on RedHat errors on dependencies that don't exist
I have installed R before on a machine running RedHat EL6.5, but I recently had a problem installing new packages (i.e. install.packages()). Since I couldn't find a solution to this, I tried reinstalling R using: sudo yum remove R and sudo yum install R But now I get: .... ---> Package R-core-devel.x86_64 0:3.1.0-5.el6 will be installed --> Processing Dependency: blas-devel >= 3.0 for package: R-core-devel-3.1.0-5.el6.x86_64 --> Processing Dependency: libicu-devel for package: R-core-devel-3.1.0-5.el6.x86_64 --> Processing Dependency: lapack-devel for package: R-core-devel-3.1.0-5.el6.x86_64 ---> Package xz-devel.x86_64 0:4.999.9-0.3.beta.20091007git.el6 will be installed --> Finished Dependency Resolution Error: Package: R-core-devel-3.1.0-5.el6.x86_64 (epel) Requires: blas-devel >= 3.0 Error: Package: R-core-devel-3.1.0-5.el6.x86_64 (epel) Requires: lapack-devel Error: Package: R-core-devel-3.1.0-5.el6.x86_64 (epel) Requires: libicu-devel You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest I already checked, and blas-devel is installed, but the newest version is 0.2.8. Checked using: yum info openblas-devel.x86_64 Any thoughts as to what is going wrong? Thanks.
Install R on RedHat errors on dependencies that don't exist I have installed R before on a machine running RedHat EL6.5, but I recently had a problem installing new packages (i.e. install.packages()). Since I couldn't find a solution to this, I tried reinstalling R using: sudo yum remove R and sudo yum install R But now I get: .... ---> Package R-core-devel.x86_64 0:3.1.0-5.el6 will be installed --> Processing Dependency: blas-devel >= 3.0 for package: R-core-devel-3.1.0-5.el6.x86_64 --> Processing Dependency: libicu-devel for package: R-core-devel-3.1.0-5.el6.x86_64 --> Processing Dependency: lapack-devel for package: R-core-devel-3.1.0-5.el6.x86_64 ---> Package xz-devel.x86_64 0:4.999.9-0.3.beta.20091007git.el6 will be installed --> Finished Dependency Resolution Error: Package: R-core-devel-3.1.0-5.el6.x86_64 (epel) Requires: blas-devel >= 3.0 Error: Package: R-core-devel-3.1.0-5.el6.x86_64 (epel) Requires: lapack-devel Error: Package: R-core-devel-3.1.0-5.el6.x86_64 (epel) Requires: libicu-devel You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest I already checked, and blas-devel is installed, but the newest version is 0.2.8. Checked using: yum info openblas-devel.x86_64 Any thoughts as to what is going wrong? Thanks.
r, redhat, yum
8
17,648
3
https://stackoverflow.com/questions/24708213/install-r-on-redhat-errors-on-dependencies-that-dont-exist
13,053,272
An error occurred while installing charlock_holmes libicu
I'm trying to install Gitlab following this install script , but am running into an issue where the charlock_holmes gem fails to install. I'm not familiar with Ruby. My charlock_holmes-0.6.8 gem_make.out file is below. /home/gitlabuser/.rvm/rubies/ruby-1.9.2-p290/bin/ruby extconf.rb checking for main() in -licui18n... no which: no brew in (/home/gitlabuser/.rvm/gems/ruby-1.9.2-p290/bin:/home/gitlabuser/.rvm/gems/ruby-1.9.2-p290@global/bin:/home/gitlabuser/.rvm/rubies/ruby-1.9.2-p290/bin:/home/gitlabuser/.rvm/gems/ruby-1.9.2-p290/bin:/home/gitlabuser/.rvm/gems/ruby-1.9.2-p290@global/bin:/home/gitlabuser/.rvm/rubies/ruby-1.9.2-p290/bin:/home/gitlabuser/.rvm/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/gitlabuser/bin:/usr/lib64/qt4/bin/) checking for main() in -licui18n... no *************************************************************************************** *********** icu required (brew install icu4c or apt-get install libicu-dev) *********** *************************************************************************************** *** extconf.rb failed *** Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options. Provided configuration options: --with-opt-dir --without-opt-dir --with-opt-include --without-opt-include=${opt-dir}/include --with-opt-lib --without-opt-lib=${opt-dir}/lib --with-make-prog --without-make-prog --srcdir=. --curdir --ruby=/home/gitlabuser/.rvm/rubies/ruby-1.9.2-p290/bin/ruby --with-icu-dir --without-icu-dir --with-icu-include --without-icu-include=${icu-dir}/include --with-icu-lib --without-icu-lib=${icu-dir}/lib --with-icui18nlib --without-icui18nlib --with-icui18nlib --without-icui18nlib I have the libicu.x86_64 package installed (and also tried the libicu.i686 when I ran into problems, but uninstalled it after it didn't work). It appears the libicu package isn't including the files required by the charlock_holmes gem, but there aren't any devel packages available. Any suggestions?
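For reference, the extconf output above is checking for the ICU headers, not just the runtime library, so the libicu runtime package alone cannot satisfy it. A hedged sketch of two common ways out on a RHEL-family box (the 0.6.8 version pin matches the question; the /usr/local prefix is an assumption for an ICU built from source):

```shell
# Option 1: install the ICU development headers, if a repository that
# carries the -devel package (e.g. EPEL or the optional channel) is enabled.
sudo yum install -y libicu-devel

# Option 2: no -devel package available -- build ICU from source into a
# prefix of your choosing, then point the gem's extconf.rb at it via the
# --with-icu-dir option it advertises above.
ICU_PREFIX=/usr/local   # hypothetical install prefix
gem install charlock_holmes -v 0.6.8 -- --with-icu-dir="$ICU_PREFIX"
```

Both commands need root and network access, so this is a sketch rather than a verified recipe.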
redhat, rhel, gitlab, bundle-install
8
7,737
5
https://stackoverflow.com/questions/13053272/an-error-occured-while-installing-charlock-holmes-libicu
4,684,279
Installing Mercurial on Redhat Linux
Will Mercurial work on Redhat Linux? I tried yum install mercurial with no success. I also tried downloading a tarball from the Mercurial site, but it failed when I tried to install it. Does Mercurial work at all on Redhat?
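For reference, Mercurial does work on Red Hat; on older RHEL releases it simply isn't in the base yum channel, which is why yum install mercurial finds nothing. A sketch, assuming the EPEL repository has been enabled for the machine's RHEL release:

```shell
# With EPEL enabled, the package is available under its normal name.
sudo yum install -y mercurial
hg version
```

If EPEL is not an option, building from the tarball needs gcc and the Python development headers (python-devel) installed first, which is the usual reason a source install fails.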
linux, mercurial, installation, redhat
8
16,569
3
https://stackoverflow.com/questions/4684279/installing-mercurial-on-redhat-linux
1,103,308
install svn client on redhat RHEL5
How do I install the svn client on a Redhat machine? I tried to do it with yum install svn, but it didn't find svn. My machine is running Red Hat Enterprise Linux Server release 5.2 (Tikanga), which I found by checking /etc/redhat-release. Thanks
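For reference, the RHEL package that provides the svn client is named subversion, not svn, which is why the yum command above matches nothing. A minimal sketch:

```shell
# "svn" is the binary name; "subversion" is the package name.
sudo yum install -y subversion
svn --version
```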
linux, svn, installation, redhat
8
23,268
1
https://stackoverflow.com/questions/1103308/install-svn-client-on-redhat-rhel5
56,487,210
How to get systemd variables to survive a reboot?
I have a product provided by a third-party vendor. It includes many services, each with an initd-style startup script provided by the vendor. These scripts reference variables like JAVA_HOME, THE_PRODUCT_HOME and so on. The vendor's expectation is that I edit these scripts manually and hard-code the correct values. I would rather have these variables initialised from environment variables obtained from systemd when the system boots.

I know I can create an override configuration file for each of the services to provide the necessary environment variables using systemctl edit theService, but:

There are quite a few startup scripts
The base variables are all the same
I would like to avoid "systemctl edit"ing each of the supplied scripts if I can

So far I've tried using systemctl set-environment VAR_NAME=some_value. This works perfectly - until I restart the system. The variables set this way are globally defined, but do not survive a reboot. I've also tried systemctl daemon-reload in case that is needed to "commit" the settings, but it doesn't save the global environment variables either.

For now, I've edited each one of the supplied startup scripts to source /path/to/theGlobalVariablesINeed.sh. This works fine as a workaround but is not my preferred solution going forward.

Here is an illustration of what is happening.

Define some variables:

[root@dav1-td1 -> ~] # systemctl show-environment
LANG=en_US.UTF-8
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
[root@dav1-td1 -> ~] #
[root@dav1-td1 -> ~] # systemctl set-environment SYSD_PRODNAME_JAVA_HOME=/usr/java/jdk1.8.0_181-amd64/jre
[root@dav1-td1 -> ~] # systemctl set-environment SYSD_PRODNAME_HOME=/opt/TheProduct-1.2.3
[root@dav1-td1 -> ~] # systemctl daemon-reload   # Optional; with or without the reload, the variables are still lost over a reboot.

Demonstrate that the variables are set:

#### Now some variables are set. If I restart a service, the service will
#### pick up these environment variable settings.
[root@dav1-td1 -> ~] # systemctl show-environment
LANG=en_US.UTF-8
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
SYSD_PRODNAME_HOME=/opt/TheProduct-1.2.3
SYSD_PRODNAME_JAVA_HOME=/usr/java/jdk1.8.0_181-amd64/jre
[root@dav1-td1 -> ~] #

System restart:

#### After restart, the variables have disappeared!?!?
[root@dav1-td1 -> ~] # systemctl show-environment
LANG=en_US.UTF-8
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
[root@dav1-td1 -> ~] #

As mentioned above, when I restart the system, any environment variables I set using systemctl set-environment VAR=value are lost. I need these variables to survive a restart (without using per-service override files and without having to source a file that contains all of the variables).
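For context, systemctl set-environment changes only the running manager's in-memory environment, so by design it does not survive a reboot. One persistent pattern is a single shared EnvironmentFile referenced by a small drop-in copied under each service's .d directory. A sketch with hypothetical file and service names (written to a scratch directory so it is side-effect free; use DESTDIR=/ and run systemctl daemon-reload on a real system):

```shell
# One shared variable file plus a per-service systemd drop-in.
DESTDIR="${DESTDIR:-$(mktemp -d)}"   # set DESTDIR=/ on the real system
mkdir -p "$DESTDIR/etc/sysconfig" \
         "$DESTDIR/etc/systemd/system/theService.service.d"

# Shared variables -- values taken from the transcript above.
cat > "$DESTDIR/etc/sysconfig/theproduct-env" <<'EOF'
SYSD_PRODNAME_JAVA_HOME=/usr/java/jdk1.8.0_181-amd64/jre
SYSD_PRODNAME_HOME=/opt/TheProduct-1.2.3
EOF

# Drop-in: copy this one small file into each vendor service's .d directory.
cat > "$DESTDIR/etc/systemd/system/theService.service.d/10-env.conf" <<'EOF'
[Service]
EnvironmentFile=/etc/sysconfig/theproduct-env
EOF
```

Alternatively, DefaultEnvironment= in the [Manager] section of /etc/systemd/system.conf sets variables for every unit at once and also survives reboots, which avoids per-service files entirely as the question requests (it needs systemctl daemon-reexec or a reboot to take effect).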
linux, environment-variables, redhat, systemd
8
3,365
1
https://stackoverflow.com/questions/56487210/how-to-get-systemd-variables-to-survive-a-reboot