question_id: int64 (82.3k to 79.7M)
title_clean: string (length 15 to 158)
body_clean: string (length 62 to 28.5k)
full_text: string (length 95 to 28.5k)
tags: string (length 4 to 80)
score: int64 (0 to 1.15k)
view_count: int64 (22 to 1.62M)
answer_count: int64 (0 to 30)
link: string (length 58 to 125)
12,034,899
Remote desktop connectivity from Windows 7 to Red Hat Enterprise Linux 6
What would be the best way to establish remote desktop connectivity from a Windows 7 machine to Red Hat Enterprise Linux 6? The machines reside on the same network.
windows-7, remote-desktop, redhat
8
67,930
3
https://stackoverflow.com/questions/12034899/remote-desktop-connectivity-from-windows-7-to-red-hat-enterprise-linux-6
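One common approach for the question above is to run a VNC server on the RHEL 6 host and connect from Windows with a VNC viewer (xrdp is the other frequent suggestion). A minimal sketch, assuming TigerVNC and a user named "someuser" (both assumptions, not from the question):
yum install tigervnc-server              # on the RHEL 6 host, as root
su - someuser -c vncpasswd               # set the VNC password for the connecting user
# declare a display for that user in /etc/sysconfig/vncservers:
#   VNCSERVERS="1:someuser"
#   VNCSERVERARGS[1]="-geometry 1280x1024"
service vncserver start
chkconfig vncserver on
# display :1 listens on TCP 5901; open it in iptables, then connect from Windows
# with any VNC viewer (e.g. TigerVNC or RealVNC) to <host>:5901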
57,221,919
install docker-ce on redhat 8
I try to install docker-ce on redhat 8 but it failed first, I try # systemctl enable docker Failed to enable unit: Unit file docker.service does not exist. So, I want to install docker-ce for the daemon # yum install yum-utils # yum-config-manager --add-repo [URL] # yum repolist -v # yum list docker-ce --showduplicates | sort -r # yum install docker-ce but in this step, I have got this : # yum install docker-ce Updating Subscription Management repositories. Unable to read consumer identity This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register. Last metadata expiration check: 0:02:58 ago on Fri 26 Jul 2019 02:11:48 PM UTC. Error: Problem: package docker-ce-3:19.03.1-3.el7.x86_64 requires containerd.io >= 1.2.2-3, but none of the providers can be installed - cannot install the best candidate for the job - package containerd.io-1.2.2-3.3.el7.x86_64 is excluded - package containerd.io-1.2.2-3.el7.x86_64 is excluded - package containerd.io-1.2.4-3.1.el7.x86_64 is excluded - package containerd.io-1.2.5-3.1.el7.x86_64 is excluded - package containerd.io-1.2.6-3.3.el7.x86_64 is excluded (try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages) I create a redhat account, but I have got this problem : # subscription-manager register --force Registering to: subscription.rhsm.redhat.com:443/subscription Username: xxxxxxxxxxx Password: The system has been registered with ID: 6c07b574-2601-4a84-90d4-a9dfdc499c2f The registered system name is: ip-172-31-11-95.us-east-2.compute.internal Traceback (most recent call last): File "/usr/lib/python3.6/site-packages/dnf/repo.py", line 566, in load ret = self._repo.load() File "/usr/lib64/python3.6/site-packages/libdnf/repo.py", line 503, in load return _repo.Repo_load(self) RuntimeError: Failed to synchronize cache for repo 'rhui-client-config-server-8' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/lib64/python3.6/site-packages/subscription_manager/cache.py", line 173, in update_check self._sync_with_server(uep, consumer_uuid) File "/usr/lib64/python3.6/site-packages/subscription_manager/cache.py", line 477, in _sync_with_server combined_profile = self.current_profile File "/usr/lib64/python3.6/site-packages/subscription_manager/cache.py", line 430, in current_profile module_profile = get_profile('modulemd').collect() File "/usr/lib64/python3.6/site-packages/rhsm/profile.py", line 347, in get_profile profile = PROFILE_MAP[profile_type]() File "/usr/lib64/python3.6/site-packages/rhsm/profile.py", line 54, in __init__ self.content = self.__generate() File "/usr/lib64/python3.6/site-packages/rhsm/profile.py", line 76, in __generate base.fill_sack() File "/usr/lib/python3.6/site-packages/dnf/base.py", line 400, in fill_sack self._add_repo_to_sack(r) File "/usr/lib/python3.6/site-packages/dnf/base.py", line 135, in _add_repo_to_sack repo.load() File "/usr/lib/python3.6/site-packages/dnf/repo.py", line 568, in load raise dnf.exceptions.RepoError(str(e)) dnf.exceptions.RepoError: Failed to synchronize cache for repo 'rhui-client-config-server-8' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/sbin/subscription-manager", line 11, in <module> load_entry_point('subscription-manager==1.23.8', 'console_scripts', 'subscription-manager')() File "/usr/lib64/python3.6/site-packages/subscription_manager/scripts/subscription_manager.py", 
line 85, in main return managercli.ManagerCLI().main() File "/usr/lib64/python3.6/site-packages/subscription_manager/managercli.py", line 2918, in main ret = CLI.main(self) File "/usr/lib64/python3.6/site-packages/subscription_manager/cli.py", line 183, in main return cmd.main() File "/usr/lib64/python3.6/site-packages/subscription_manager/managercli.py", line 506, in main return_code = self._do_command() File "/usr/lib64/python3.6/site-packages/subscription_manager/managercli.py", line 1368, in _do_command profile_mgr.update_check(self.cp, consumer['uuid'], True) File "/usr/lib64/python3.6/site-packages/subscription_manager/cache.py", line 457, in update_check return CacheManager.update_check(self, uep, consumer_uuid, force) File "/usr/lib64/python3.6/site-packages/subscription_manager/cache.py", line 183, in update_check raise Exception(_("Error updating system data on the server, see /var/log/rhsm/rhsm.log " Exception: Error updating system data on the server, see /var/log/rhsm/rhsm.log for more details.
docker, redhat
8
15,757
4
https://stackoverflow.com/questions/57221919/install-docker-ce-on-redhat-8
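For the RHEL 8 question above, the dependency failure comes from the el7 containerd.io packages being excluded on RHEL 8. A commonly reported workaround, as a sketch rather than an official Red Hat procedure (the RPM version in the URL is only an example and should be replaced with a current one):
# install a containerd.io RPM straight from Docker's CentOS repository, then docker-ce
dnf install -y https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.6-3.3.el7.x86_64.rpm
dnf install -y docker-ce
systemctl enable --now docker
# alternatively, let dnf settle for a non-best candidate:
#   dnf install --nobest docker-ce
# or skip Docker CE entirely and use the podman/buildah tooling that RHEL 8 ships.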
15,141,475
Reloading PostgreSQL without breaking current connections?
I was wondering: if I issue a reload command to Postgres so that it rereads the pg_hba.conf file (I made some changes there and need them to take immediate effect on a live system), will that destroy or drop any current connections? /etc/init.d/postgreSQL83 reload
postgresql, command-line, redhat
8
6,614
1
https://stackoverflow.com/questions/15141475/reloading-postgresql-without-breaking-current-connection
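For the reload question above: a reload only sends SIGHUP to the postmaster, so existing sessions are kept and pg_hba.conf is re-read; only a restart drops connections. A small sketch (service name and paths as in the question; the SQL variant assumes superuser access):
/etc/init.d/postgreSQL83 reload          # SIGHUP only; no sessions are dropped
# equivalent alternatives:
#   pg_ctl reload -D /path/to/data
#   psql -U postgres -c "SELECT pg_reload_conf();"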
116,640
Low Java single process thread limit in Red Hat Linux
I'm experiencing an issue on a test machine running Red Hat Linux (kernel version is 2.4.21-37.ELsmp) using Java 1.6 (1.6.0_02 or 1.6.0_04). The problem is, once a certain number of threads are created in a single thread group, the operating system is unwilling or unable to create any more. This seems to be specific to Java creating threads, as the C thread-limit program was able to create about 1.5k threads. Additionally, this doesn't happen with a Java 1.4 JVM... it can create over 1.4k threads, though they are obviously being handled differently with respect to the OS. In this case, the number of threads it's cutting off at is a mere 29 threads. This is testable with a simple Java program that just creates threads until it gets an error and then prints the number of threads it created. The error is a java.lang.OutOfMemoryError: unable to create new native thread This seems to be unaffected by things such as the number of threads in use by other processes or users or the total amount of memory the system is using at the time. JVM settings like Xms, Xmx, and Xss don't seem to change anything either (which is expected, considering the issue seems to be with native OS thread creation). The output of "ulimit -a" is as follows: core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited file size (blocks, -f) unlimited max locked memory (kbytes, -l) 4 max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 stack size (kbytes, -s) 10240 cpu time (seconds, -t) unlimited max user processes (-u) 7168 virtual memory (kbytes, -v) unlimited The user process limit does not seem to be the issue. Searching for information on what could be wrong has not turned up much, but this post seems to indicate that at least some Red Hat kernels limit a process to 300 MB of memory allocated for stack, and at 10 MB per thread for stack, it seems like the issue could be there (though it seems strange and unlikely as well). I've tried changing the stack size with "ulimit -s" to test this, but any value other than 10240 and the JVM does not start with an error of: Error occurred during initialization of VM Cannot create VM thread. Out of system resources. I can generally get around Linux, but I really don't know much about system configuration, and I haven't been able to find anything specifically addressing this kind of situation. Any ideas on what system or JVM settings could be causing this would be appreciated. Edits : Running the thread-limit program mentioned by plinth , there was no failure until it tried to create the 1529th thread. The issue also did not occur using a 1.4 JVM (does occur with 1.6.0_02 and 1.6.0_04 JVMs, can't test with a 1.5 JVM at the moment). 
The code for the thread test I'm using is as follows: public class ThreadTest { public static void main(String[] pArgs) throws Exception { try { // keep spawning new threads forever while (true) { new TestThread().start(); } } // when out of memory error is reached, print out the number of // successful threads spawned and exit catch ( OutOfMemoryError e ) { System.out.println(TestThread.CREATE_COUNT); System.exit(-1); } } static class TestThread extends Thread { private static int CREATE_COUNT = 0; public TestThread() { CREATE_COUNT++; } // make the thread wait for eternity after being spawned public void run() { try { sleep(Integer.MAX_VALUE); } // even if there is an interruption, dont do anything catch (InterruptedException e) { } } } } If you run this with a 1.4 JVM it will hang when it can't create any more threads and require a kill -9 (at least it did for me). More Edit: It turns out that the system that is having the problem is using the LinuxThreads threading model while another system that works fine is using the NPTL model.
java, linux, redhat
8
18,831
5
https://stackoverflow.com/questions/116640/low-java-single-process-thread-limit-in-red-hat-linux
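For the thread-limit question above, the low ceiling matches old LinuxThreads behaviour on 2.4-era kernels (a small per-process stack address-space budget divided by the 10 MB default stack). A sketch of how the threading model is usually compared between the two machines and, where the compat glibc builds exist, selected per process; the LD_ASSUME_KERNEL value is the conventionally documented one and is an assumption about this particular box:
# compare the two machines first
getconf GNU_LIBPTHREAD_VERSION           # e.g. "linuxthreads-0.10" vs "NPTL 2.3.4"
# on glibc builds that ship both implementations, LD_ASSUME_KERNEL picks one per process,
# e.g. the floating-stack LinuxThreads variant:
LD_ASSUME_KERNEL=2.4.19 java ThreadTest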
70,205,661
correctly specifying Device Name for EBS volume while attaching to an ec2 instance and identifying it later using Device name
I am trying to attach an EBS volume on EC2 (RHEL) instance. This is how my attach-volume command looks like: aws ec2 attach-volume --volume-id vol-xxxxxxxxxxxxxxxxx --instance-id i-yyyyyyyyyyyyyyyyy --device /dev/sdf { "AttachTime": "2021-12-02T19:30:13.070000+00:00", "Device": "/dev/sdf", "InstanceId": "i-yyyyyyyyyyyyyyyyy ", "State": "attaching", "VolumeId": "vol-xxxxxxxxxxxxxxxxx " } this is the output of lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT nvme1n1 259:0 0 5G 0 disk └─aaaaa-aaa 253:2 0 5G 0 lvm /logs nvme0n1 259:1 0 10G 0 disk ├─nvme0n1p1 259:2 0 1M 0 part └─nvme0n1p2 259:3 0 10G 0 part / nvme3n1 259:4 0 35G 0 disk ├─bbbbb-bbb 253:3 0 8G 0 lvm [SWAP] ├─bbbbb-ccc 253:4 0 4G 0 lvm /var/tmp ├─bbbbb-ddd 253:5 0 4G 0 lvm /var ├─bbbbb-eee 253:6 0 4G 0 lvm /var/log nvme2n1 259:5 0 5G 0 disk └─ccccc-ffff 253:0 0 5G 0 lvm /products nvme4n1 259:6 0 5G 0 disk └─ddddd-gggg 253:1 0 5G 0 lvm /apps nvme5n1 259:7 0 20G 0 disk Even though I specified device name as /dev/sdf , it shows up as nvme5n1 . This makes it difficult for me to identify the newly attached EBS volume and mount it. I tried aws ec2 attach-volume --volume-id vol-xxxxxxxxxxxxxxxxx --instance-id i-yyyyyyyyyyyyyyyyy --device /dev/nvme5n1 but that gives me an error saying /dev/nvme5n1 is not a valid EBS device name. Is there a way I can identify the right name of the EBS volume I just attached so that I can mount it to the directory I desire?
amazon-web-services, amazon-ec2, redhat, rhel, amazon-ebs
8
7,693
2
https://stackoverflow.com/questions/70205661/correctly-specifying-device-name-for-ebs-volume-while-attaching-to-an-ec2-instan
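For the EBS question above: on Nitro instances every EBS volume surfaces as /dev/nvme*, and the requested --device name is kept only as metadata, but the volume ID is exposed as the NVMe serial number, which gives a reliable mapping. A sketch (device names as in the question):
lsblk -o NAME,SIZE,SERIAL                # SERIAL shows vol0123... (the volume ID without the dash)
# with nvme-cli installed:
nvme id-ctrl /dev/nvme5n1 | grep -i '^sn'
# udev also keeps stable symlinks:
ls -l /dev/disk/by-id/ | grep Amazon_Elastic_Block_Store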
60,490,193
detect key press in python, where each iteration can take more than a couple of seconds?
Edit: The below answer to use keyboard.on_press(callback, suppress=False) works fine in ubuntu without any issues. But in Redhat/Amazon linux , it fails to work. I have used the code snippet from this thread import keyboard # using module keyboard while True: # making a loop try: # used try so that if user pressed other than the given key error will not be shown if keyboard.is_pressed('q'): # if key 'q' is pressed print('You Pressed A Key!') break # finishing the loop except: break # if user pressed a key other than the given key the loop will break But the above code requires the each iteration to be executed in nano-seconds. It fails in the below case: import keyboard # using module keyboard import time while True: # making a loop try: # used try so that if user pressed other than the given key error will not be shown print("sleeping") time.sleep(5) print("slept") if keyboard.is_pressed('q'): # if key 'q' is pressed print('You Pressed A Key!') break # finishing the loop except: print("#######") break # if user pressed a key other than the given key the loop will break
python, python-3.x, redhat, keypress, amazon-linux
8
14,900
4
https://stackoverflow.com/questions/60490193/detect-key-press-in-python-where-each-iteration-can-take-more-than-a-couple-of
58,725,457
CA Cert are only added at ca-bundle-trust.crt
Env: Red Hat Enterprise Linux Server release 7.7 (Maipo) # openssl version OpenSSL 1.0.2g 1 Mar 2016. A self-signed cert is generated using OpenSSL and the cacert.pem is put under /etc/pki/ca-trust/source/anchors/ . Now, according to the man page for update-ca-trust , that command should be run to add the cert into the trust store, and the cert should then appear under /etc/pki/ca-trust/extracted/ . After running the said command, I see that the cert is added only to /etc/pki/ca-trust/extracted/openssl/ca-bundle.trust.crt . But most applications, like curl, refer to the OS CA trust at /etc/pki/ca-trust/extracted/openssl/ca-bundle.crt , which is linked to /etc/pki/tls/certs/ca-bundle.crt . curl -v [URL] * About to connect() to 172.21.19.92 port 443 (#0) * Trying 172.21.19.92... * Connected to 172.21.19.92 (172.21.19.92) port 443 (#0) * Initializing NSS with certpath: sql:/etc/pki/nssdb * CAfile: /etc/pki/tls/certs/ca-bundle.crt I understand that passing the --cacert option would be a way to overcome this, but I want to know why update-ca-trust only updates ca-bundle.trust.crt and not ca-bundle.crt or the extracted Java keystore /etc/pki/ca-trust/extracted/java/cacerts as well.
ssl, curl, openssl, redhat, ca
8
3,723
1
https://stackoverflow.com/questions/58725457/ca-cert-are-only-added-at-ca-bundle-trust-crt
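For the update-ca-trust question above, a frequently reported cause is that the extracted plain CA bundles (ca-bundle.crt and the Java cacerts) only receive certificates that are actually CA certificates, while ca-bundle.trust.crt carries everything that has trust metadata. A sketch of the usual checklist; paths are RHEL 7 defaults, and treating a missing CA flag as the cause is an assumption to verify:
cp cacert.pem /etc/pki/ca-trust/source/anchors/
update-ca-trust extract                  # regenerates all files under /etc/pki/ca-trust/extracted/
# check that the self-signed certificate really is a CA certificate:
openssl x509 -in cacert.pem -noout -text | grep -A1 'Basic Constraints'
# it should show "CA:TRUE"; if not, re-issue the cert with basicConstraints=CA:TRUE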
30,201,276
How to tell Docker to use dm/LVM backend for volumes instead of vfs
I recently heard (from a RedHat guy) that "direct-LVM"(devicemapper) is the recommended storage-backend for production setups, so I wanted to try that out on a CentOS 7 VM. (where loopback-LVM seems to be the default). So I created a separate data-disk and VG with 2 LVs for data and metadata, passed them into the docker config and started up docker ... so far so good, looks like this: # ps auxwf ... /usr/bin/docker -d --selinux-enabled -H unix://var/run/docker.sock \ --log-level=warn --storage-opt dm.fs=xfs \ --storage-opt dm.datadev=/dev/vg_data/docker-data \ --storage-opt dm.metadatadev=/dev/vg_data/docker-meta \ --storage-opt dm.basesize=30G --bip=172.17.42.1/24 \ # docker info Containers: 8 Images: 145 Storage Driver: devicemapper Pool Name: docker-253:0-34485692-pool Pool Blocksize: 65.54 kB Backing Filesystem: xfs Data file: /dev/vg_data/docker-data Metadata file: /dev/vg_data/docker-meta Data Space Used: 4.498 GB Data Space Total: 34.36 GB Data Space Available: 29.86 GB Metadata Space Used: 6.402 MB Metadata Space Total: 104.9 MB Metadata Space Available: 98.46 MB ... ... But today when I started a container that generates pretty much of local data (since I don't need to persist that for testing, I haven't mapped that Volumes anywhere on startup) , I noticed, that Volume-Data is all put into /var/lib/docker/vfs directories instead of the LVM thinpool as I expected. This is in fact filling up my root-fs that I kept small by intent. This is the Disk-Layout as seen by the Docker-Host: # lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 10G 0 disk +-sda1 8:1 0 500M 0 part /boot +-sda2 8:2 0 9.5G 0 part +-centos_system-root 253:0 0 8.5G 0 lvm / +-centos_system-swap 253:1 0 1G 0 lvm [SWAP] sdb 8:16 0 50G 0 disk +-vg_data-docker--data 253:2 0 32G 0 lvm | +-docker-253:0-34485692-pool 253:4 0 32G 0 dm | +-docker-253:0-34485692-... 253:5 0 30G 0 dm | +-docker-253:0-34485692-... 253:6 0 30G 0 dm +-vg_data-docker--meta 253:3 0 100M 0 lvm +-docker-253:0-34485692-pool 253:4 0 32G 0 dm +-docker-253:0-34485692-... 253:5 0 30G 0 dm +-docker-253:0-34485692-... 253:6 0 30G 0 dm How can I get Docker to put Volumes (either implicitly created or explicit in Data-containers) to be put onto the configured storage-backend ? Or will this really only be used for (base-)images and my expectations have been totally wrong ?
centos, docker, redhat, lvm, device-mapper
8
5,274
1
https://stackoverflow.com/questions/30201276/how-to-tell-docker-to-use-dm-lvm-backend-for-volumes-instead-of-vfs
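For the Docker question above: volumes are deliberately kept outside the storage driver, so the devicemapper thin pool only ever holds image and container layers, and volume data always lands on a normal host filesystem (under /var/lib/docker/vfs or /var/lib/docker/volumes). A sketch of keeping that data off the small root filesystem instead; the VG name and sizes reuse the question's names and are assumptions:
# (stop docker while repointing the directory)
lvcreate -n docker-volumes -L 15G vg_data
mkfs.xfs /dev/vg_data/docker-volumes
mkdir -p /var/lib/docker/vfs
echo '/dev/vg_data/docker-volumes /var/lib/docker/vfs xfs defaults 0 0' >> /etc/fstab
mount /var/lib/docker/vfs
# or simply bind-mount a host path for the noisy container:
#   docker run -v /data/scratch:/scratch ...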
67,057,020
Retrieve secret value from OpenShift
I created a key/value secret in OpenShift and I want to retrieve the value of that key/value pair. I tried using oc describe secret ashish -n my-project and it gave the output shown below, but I don't get the value for my key; it just shows 7 bytes: Name: ashish Namespace: my-project Labels: <none> Annotations: <none> Type: Opaque Data ==== ashish: 7 bytes
kubernetes, openshift, redhat
7
34,240
3
https://stackoverflow.com/questions/67057020/retrieve-secret-value-from-openshift
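For the secret question above, oc describe deliberately prints only the size; the value itself sits base64-encoded in the object. A sketch (secret and key names taken from the question):
oc get secret ashish -n my-project -o yaml                        # shows the base64-encoded data
oc get secret ashish -n my-project -o jsonpath='{.data.ashish}' | base64 --decode
# recent oc releases can also decode it directly:
#   oc extract secret/ashish -n my-project --to=-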
35,078,239
Show du output purely in megabytes
I am using the du command to output the directory size to a file and then move it into an Excel file to add up the totals. Is it possible to output the size of a directory only in MB (even if the size is in KB or GB)? E.g. if the file size is 50 KB, the output would show 0.048MB. I'm aware of du -h , however I haven't been able to keep the size in MB once it is larger than 1024, since it switches to 1G. du -m , however, does not show the M (for megabytes) next to the value, so it isn't really human friendly. Thanks in advance, J
linux, bash, redhat, du
7
12,189
1
https://stackoverflow.com/questions/35078239/show-du-outcome-in-purely-megabytes
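For the du question above, du itself only prints integers per unit, so a small awk step is the usual way to get fractional megabytes with an explicit unit. A sketch (the directory path is an example):
du -sk /path/to/dir | awk '{printf "%.3fMB\t%s\n", $1/1024, $2}'   # prints e.g. 0.049MB for a 50 KB directory
# or whole mebibytes with the unit appended:
du -sm /path/to/dir | awk '{print $1 "MB\t" $2}'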
21,257,452
Which yum group(s) contain a given package?
Is there a way to ask yum which group(s) contain a given package? I know how to ask what packages are in a given group, and could write a quick script to trawl over all of the groups, but it would be nice to have a simpler mechanism than that.
redhat, rpm, yum
7
9,130
3
https://stackoverflow.com/questions/21257452/which-yum-groups-contain-a-given-package
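For the yum question above, classic yum has no single built-in reverse lookup, so the usual answer is exactly the small loop the poster mentions. A sketch; the package name and the grouplist parsing are assumptions that may need tweaking per yum version:
pkg=httpd
yum -q grouplist hidden 2>/dev/null \
  | sed -e 's/^ *//' -e '/:$/d' -e '/^$/d' \
  | while IFS= read -r grp; do
      yum -q groupinfo "$grp" 2>/dev/null | grep -qw "$pkg" && echo "$grp"
    done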
42,024,551
MySQL error 1045 when using --defaults-file
I am having a strange issue. I have MySQL running on RHEL. I can log on to MySQL with mysql -uroot -pmyPassword and it works fine. Also, when I try to execute a query from a .sh script as below, it works fine: mysql --user=root --password=myPassword --host=localhost --port=3306 -se "SELECT 1 as testConnect" 2>&1>> $OUTPUT But when I store the user ID and password in a msql.conf file as below [clientroot] user=root password=myPassword and change the line in the script to mysql --defaults-file=msql.conf --defaults-group-suffix=root -hlocalhost -P3306 -se "SELECT 1 as testConnect" 2>&1>> $OUTPUT and run it, I get the error: ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES) I am running the script with sudo, the config file is in the same directory as the script, and it has permission 0600. How do I make this work?
mysql, redhat, credentials
7
3,353
3
https://stackoverflow.com/questions/42024551/mysql-1045-when-using-defaults-file
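For the MySQL question above, two things are worth checking first: --defaults-file has to be the very first option on the command line, and the file that is actually read (a relative path resolves against the current working directory) can be confirmed with --print-defaults. A debugging sketch, using the file and group names from the question:
# show exactly which option values mysql picks up from the file
mysql --defaults-file=./msql.conf --defaults-group-suffix=root --print-defaults
# then run the real query with an absolute path to rule out lookup problems
mysql --defaults-file=/full/path/to/msql.conf --defaults-group-suffix=root \
      -h localhost -P 3306 -se "SELECT 1 AS testConnect"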
31,233,316
How to get the file creation date/time in xfs
I am able to get the file creation date/time using the debugfs command on an ext file system, but how do I check/get the same on an XFS file system?
linux, bash, redhat, rhel7
7
9,716
1
https://stackoverflow.com/questions/31233316/how-to-get-the-file-creation-date-time-in-xfs
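For the XFS question above: the creation time (crtime) is only stored by XFS v5 inodes, and on RHEL 7 it is not exposed through stat, but xfs_db can read it. A sketch; the file path is an example and the filesystem device is looked up from it:
f=/path/to/file
inum=$(stat -c %i "$f")
dev=$(df --output=source "$f" | tail -n1)
xfs_db -r -c "inode $inum" -c "print v3.crtime.sec" "$dev"   # epoch seconds
# convert with:  date -d @<seconds>
# newer kernels plus coreutils fill in the "Birth" field of plain `stat` instead.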
57,635,036
List pods that are servicing a service
I am trying to get the list of pods that are servicing a particular service. There are 3 pods associated with my service. I tried to execute the command below: oc describe svc my-svc-1 I am expecting to see the pods associated with this service, but they do not show up. What command gets me just the list of pods associated with the service?
kubernetes, openshift, redhat, kubernetes-helm, openshift-client-tools
7
3,465
1
https://stackoverflow.com/questions/57635036/list-pods-that-are-servicing-a-service
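For the service question above, the pods backing a service are recorded in its Endpoints object, and they can also be listed through the service's label selector. A sketch (service name from the question; the selector shown is an example):
oc get endpoints my-svc-1                        # pod IPs currently behind the service
oc describe endpoints my-svc-1
# or list the pods by the same label selector the service uses:
oc get svc my-svc-1 -o jsonpath='{.spec.selector}'
oc get pods -l app=my-svc-1                      # substitute the selector printed above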
29,954,213
How do I kill a Python multiprocessing job?
In reference to this question I had asked, I can successfully run jobs using multiprocessing and I can see that all processors are being utilized. How do I kill this job? From a terminal I run: python my_multiprocessor_script.py Then I hit Ctrl+C to kill it. However, the job doesn't seem to be killed and I can see all the processors still in use. I'm running Red Hat Enterprise Linux Server release 6.6.
python, linux, multiprocessing, redhat, python-multiprocessing
7
3,225
1
https://stackoverflow.com/questions/29954213/how-do-i-kill-a-python-multiprocessing-job
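For the multiprocessing question above, Ctrl+C often only reaches the parent while the worker processes keep running; from another terminal the whole family can be cleaned up by command line or by process group. A sketch, using the script name from the question and assuming the parent is still the process-group leader:
pkill -f my_multiprocessor_script.py             # parent and children share the command line
# or kill the parent's entire process group in one go:
pgid=$(pgrep -of my_multiprocessor_script.py)
kill -TERM -- -"$pgid"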
21,217,441
Red Hat Linux - "turn off" encryption checking
I have a Red Hat 6.5 Linux implementation that uses LUKS to encrypt the system and - for reasons that aren't relevant - I would like to "turn off" boot encryption checking for a period of time. It will be turned on again at some point so even if it is possible to remove the LUKS encryption entirely, that is not a solution I am interested in. What I want is to auto-provide the LUKS password on boot so that it doesn't need to be entered manually - thus logically "turning off" encryption even though still actually enabled. Now, while this is straightforward for secondary devices ie. by creating a key file, applying the key file to the encrypted devices and amending /etc/crypttab to reference the key file, one still has to enter at least one password on boot - because, if the primary device is LUKS encrypted, then it first has to be decrypted before /etc/crypttab is accessible. There is a way I have seen of removing the requirement to enter the initial password which is: create a key file apply the key file to the encrypted device ie. enabling the key for the device to be decrypted Copy the key file to a removable not-encrypted device (eg. a flash drive) append rd.luks.key= absolute path to key file : removable not-encrypted device to the booting kernel line in /boot/grub/grub.conf On boot, make sure the removable not-encrypted device is inserted and can be referenced by the boot process. This all looks good, except that I don't want a removable not-encrypted device involved. I simply want the server to boot as though it wasn't encrypted. The only way I can see to achieve this is to replace removable not-encrypted device with normal not-encrypted device . In which case the boot process would read normal not-encrypted device , get the key and use it to decrypt the encrypted devices ...hey presto encryption is disabled. The only device I can find on my system that fulfills the normal not-encrypted device criteria is /dev/sda1 ie. /boot , so I performed the above steps with step 3 and 4 as follows: as above as above copy key file to /boot/keyfile.key append rd.luks.key=/boot/keyfile.key:/dev/sda1 n/a Unfortunately I can't seem to get this to work. Red Hat boots and I don't get asked for a password (as expected), however towards the end of the boot process, it fails with "Kernel panic - not syncing: Attempted to kill init! ..." This behaviour is identical whichever of the following I use: rd.luks.key=/boot/keyfile.key:/dev/sda1 rd.luks.key=/keyfile.key:/dev/sda1 rd.luks.key=/keyfile.key rd.luks.key=/ someKeyFileThatIknowDoesNotExist.key :/dev/sda1 So my questions are as follows: Is what I am trying to do possible If yes, then... where should I be putting the key file what is the rd.luks.key value I should use to reference the key file thanks in advance for any help
linux, encryption, redhat
7
5,418
2
https://stackoverflow.com/questions/21217441/red-hat-linux-turn-off-encryption-checking
15,715,602
How to install yum in Red Hat 3.4.6-3
I want to use the yum command in Red Hat 3.4.6-3. How can I install it?
linux, linux-kernel, redhat
7
75,070
2
https://stackoverflow.com/questions/15715602/how-to-install-yum-in-red-hat-3-4-6-3
28,921,697
My python installation is broken/corrupted. How do I fix it?
I followed these instructions on my RedHat Linux version 7 server (which originally just had Python 2.6.x installed): beginning of instructions install build tools sudo yum install make automake gcc gcc-c++ kernel-devel git-core -y install python 2.7 and change default python symlink sudo yum install python27-devel -y sudo rm /usr/bin/python sudo ln -s /usr/bin/python2.7 /usr/bin/python yum still needs 2.6, so write it in and backup script sudo cp /usr/bin/yum /usr/bin/_yum_before_27 sudo sed -i s/python/python2.6/g /usr/bin/yum sudo sed -i s/python2.6/python2.6/g /usr/bin/yum should display now 2.7.5 or later: python -V end of instructions The above commands and comments were taken from: [URL] The python -v command returned this: -bash: python: command not found Now it is as if I have no Python installed. I don't want yum to break. I tried installing Python 3.4. whereis python shows this: python: /usr/bin/python2.6 /usr/bin/python2.6-config /usr/bin/python /usr/lib/python2.6 /usr/lib64/python2.6 /usr/local/bin/python2.7 /usr/local/bin/python3.4m-config /usr/local/bin/python2.7-config /usr/local/bin/python3.4 /usr/local/bin/python3.4m /usr/local/lib/python2.7 /usr/local/lib/python3.4 /usr/include/python2.6 /usr/share/man/man1/python.1.gz What should I do now? I want a working installation of Python. For certain things I'm doing, I need it to be 2.7 or higher. I want yum to still work.
python, linux, python-2.7, redhat, yum
7
19,596
4
https://stackoverflow.com/questions/28921697/my-python-installation-is-broken-corrupted-how-do-i-fix-it
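For the broken-Python question above, the whereis output suggests /usr/bin/python now points at a /usr/bin/python2.7 that was never installed there (2.7 and 3.4 live under /usr/local/bin), so the symlink is dangling. A sketch of the usual repair, which keeps yum on 2.6 and uses 2.7 by its explicit path:
sudo ln -sf /usr/bin/python2.6 /usr/bin/python   # restore the system interpreter
python -V                                        # 2.6.x again, so yum keeps working
/usr/local/bin/python2.7 -V                      # call 2.7 explicitly (or via a virtualenv) for your own code
# and, if the yum script was edited, the saved copy can be restored:
#   sudo cp /usr/bin/_yum_before_27 /usr/bin/yum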
59,780,323
How can I instruct yum to install a specific version of OpenJDK
I am trying to install OpenJDK on a Red Hat server; how can I install a specific version? The version I want to install is 11.0.4, but the version installed by the following command is 11.0.6: yum install java-11-openjdk-devel
java, redhat
7
11,602
1
https://stackoverflow.com/questions/59780323/how-can-i-instruct-yum-to-install-a-specifc-version-of-openjdk
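For the OpenJDK question above, yum/dnf will install a specific build as long as that build is still present in an enabled repository; the trick is listing the available versions and naming one explicitly. A sketch; the exact version-release string below is only an example and must be taken from the list output:
yum --showduplicates list java-11-openjdk-devel
yum install java-11-openjdk-devel-11.0.4.11-2.el8_0        # example NVR from the list above
# optionally pin it so later updates keep it in place (requires the versionlock plugin):
#   yum versionlock add java-11-openjdk*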
58,274,780
Maven Central vs Other Repos?
On Maven Central, I can see several other repositories available for some of the libraries. For example, Apache Commons BeanUtils is available in Central, Redhat GA, JBoss 3rd-party, etc. The version naming changes as well: for example, Maven Central has versions like 1.9.4, whereas Redhat GA has versions like 1.9.3.redhat-1. Click on this URI to see the details. [URL] My question is: what is the difference between the repo marked as Central and "Redhat GA"? I am attaching an image of the "Maven GA" repo here as well.
maven, redhat, maven-central
7
2,671
1
https://stackoverflow.com/questions/58274780/maven-central-vs-other-repos
10,296,244
How do I subscribe to the supplementary server channel to install Sun JDK 6 on RHEL 4.x
I want to install Sun JDK 6 on RHEL 4.x using yum install java-1.6.0-sun-devel , but found that I have to subscribe to the supplementary server channel. How do I do that? Thanks in advance!
linux, installation, java, redhat, sun
7
5,552
1
https://stackoverflow.com/questions/10296244/how-do-i-subscribe-to-supplementary-server-channel-to-install-sun-jdk-6-on-rhel
588,367
Why does PostgreSQL run on Ubuntu after install without using initdb?
I am curious as to why you don't have to use initdb, as the PostgreSQL manual says to, prior to running psql for the first time. (I have installed version 8.3 on 8.04.1.) Red Hat requires -c postgresql start but not initdb. However, on FreeBSD you have to run initdb. Why isn't the setup consistent? Does it come down to a difference between apt-get install, rpm -i and pkg_add ?
ubuntu, installation, redhat
7
3,817
5
https://stackoverflow.com/questions/588367/why-does-postgresql-run-on-ubuntu-after-install-without-using-initdb
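For the initdb question above, the difference is packaging policy rather than PostgreSQL itself: Debian/Ubuntu's postgresql packages create and start a cluster in their post-install scripts, Red Hat's init script historically created the data directory on demand (later exposed as an explicit initdb action), and FreeBSD's port leaves the step to the administrator. A sketch of the per-platform equivalents; version and cluster names are examples:
# Debian/Ubuntu (the package postinst does this for you):
pg_createcluster 8.3 main --start
# RHEL/CentOS (newer init scripts expose the step explicitly):
service postgresql initdb && service postgresql start
# FreeBSD: run initdb yourself (directly, or via the rc script's initdb action) before the first start.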
50,577,152
Replacing OAuth2 Implicit Grant with Authorization Code without Client Secret
OAuth 2.0 Auth Code without Client Secret is being used in lieu of Implicit Grant for client-side JavaScript apps by a few companies. What are the general advantages / tradeoffs of using Auth Code without Client Secret vs. Implicit Grant? Are there more companies and/or standards organizations moving this way? Red Hat, Deutsche Telekom and others have moved this way per this article and the IETF OAuth mailing list posts below. [URL] Implicit was previously recommended for clients without a secret, but has been superseded by using the Authorization Code grant with no secret. ... Previously, it was recommended that browser-based apps use the "Implicit" flow, which returns an access token immediately and does not have a token exchange step. In the time since the spec was originally written, the industry best practice has changed to recommend that the authorization code flow be used without the client secret. This provides more opportunities to create a secure flow, such as using the state parameter. References: Redhat , Deutsche Telekom , Smart Health IT . Here are the messages referenced above. Red Hat For our IDP [1], our javascript library uses the auth code flow, but requires a public client, redirect_uri validation, and also does CORS checks and processing. We did not like Implicit Flow because 1) access tokens would be in the browser history 2) short lived access tokens (seconds or minutes) would require a browser redirect Deutsche Telekom Same for Deutsche Telekom. Our javascript clients also use code flow with CORS processing and of course redirect_uri validation. SMART Health IT We've taken a similar approach for SMART Health IT [1], using the code flow for public clients to support in-browser apps, and <1h token lifetime. (We also allow these public clients to request a limited-duration refresh token by asking for an "online_access" scope; these refresh tokens stop working when the user's session with the AS ends — useful in systems where that session concept is meaningful.)
oauth-2.0, redhat
7
2,730
2
https://stackoverflow.com/questions/50577152/replacing-oauth2-implicit-grant-with-authorization-code-without-client-secret
26,541,049
ltrace: Couldn't find .dynsym or .dynstr in "library.so"
I have tried to use ltrace. I used the following command to profile the library.so file that is used by a program SampleApp: ltrace -c -T --library=library.so --output=out.txt ./SampleApp . But it shows the above error. However, library.so is a debug build, so the symbol table should be there. I verified this with objdump --source library.so | grep CreateSocket() , which returns code that uses the CreateSocket() function, which means it contains a symbol table. Then why does that error occur? Related post: measure CPU usage per second of a dynamically linked library
ltrace: Couldn't find .dynsym or .dynstr in "library.so" I have tried to use ltrace. I used the following command to profile the library.so file that is used by a program SampleApp: ltrace -c -T --library=library.so --output=out.txt ./SampleApp . But it shows the above error. However, library.so is a debug build, so the symbol table should be there. I verified this with objdump --source library.so | grep CreateSocket() , which returns code that uses the CreateSocket() function, which means it contains a symbol table. Then why does that error occur? Related post: measure CPU usage per second of a dynamically linked library
c++, redhat, profiler, centos5, ltrace
7
7,519
1
https://stackoverflow.com/questions/26541049/ltrace-couldnt-find-dynsym-or-dynstr-in-library-so
16,262,470
Best way to write an init.d script for start_server and starman?
I'm trying to come up with a nice init.d script that starts a psgi app, using start_server and starman . It needs to have the following features: Run on RedHat (i.e. Debian's start-stop-daemon is not available) Run start_server as another user Be maintainable. Ideally, I'd like to use the stuff that comes with /etc/init.d/functions to give the script the look and feel of any ol' RedHat init.d script. More specifically, I'm looking for best practices to: Daemonize a program that doesn't come with its own --daemonize option Run the daemon under another UID.
Best way to write an init.d script for start_server and starman? I'm trying to come up with a nice init.d script that starts a psgi app, using start_server and starman . It needs to have the following features: Run on RedHat (i.e. Debian's start-stop-daemon is not available) Run start_server as another user Be maintainable. Ideally, I'd like to use the stuff that comes with /etc/init.d/functions to give the script the look and feel of any ol' RedHat init.d script. More specifically, I'm looking for best practices to: Daemonize a program that doesn't come with its own --daemonize option Run the daemon under another UID.
perl, shell, redhat, init
7
1,530
3
https://stackoverflow.com/questions/16262470/best-way-to-write-an-init-d-script-for-start-server-and-starman
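For the two generic problems in the question above (daemonizing a program that has no --daemonize option, and running it under another UID), a small Python helper can stand in for start-stop-daemon. This is only a sketch under the assumption that the init.d script runs it as root; the "starman" user and the start_server arguments are illustrative placeholders, not values from the question.

    import os
    import pwd
    import sys

    def daemonize_as(user, command, args):
        # classic double fork: detach from the controlling terminal and reparent to init
        if os.fork() > 0:
            sys.exit(0)
        os.setsid()
        if os.fork() > 0:
            sys.exit(0)
        os.chdir("/")
        os.umask(0o022)
        # point stdin/stdout/stderr at /dev/null
        devnull = os.open(os.devnull, os.O_RDWR)
        for fd in (0, 1, 2):
            os.dup2(devnull, fd)
        # drop to the unprivileged account, then exec the real program
        pw = pwd.getpwnam(user)
        os.setgid(pw.pw_gid)
        os.setuid(pw.pw_uid)
        os.execvp(command, [command] + args)

    # e.g. daemonize_as("starman", "start_server",
    #                   ["--port=5000", "--", "starman", "app.psgi"])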
33,529,850
Ansible: have sudo but no root
I’d like to use Ansible to manage the configuration of a our Hadoop cluster (running Red Hat). I have sudo access and can manually ssh into the nodes to execute commands. However, I’m experiencing problems when I try to run Ansible modules to perform the same tasks. Although I have sudo access, I can’t become root . When I try to execute Ansible scripts that require elevated privileges, I get an error like this: Sorry, user awoolford is not allowed to execute '/bin/bash -c echo BECOME-SUCCESS- […] /usr/bin/python /tmp/ansible-tmp-1446662360.01-231435525506280/copy' as awoolford on [some_hadoop_node]. Looking through the documentation , I thought that the become_allow_same_user property might resolve this, and so I added the following to ansible.cfg : [privilege_escalation] become_allow_same_user=yes Unfortunately, it didn't work. This post suggests that I need permissions to sudo /bin/sh (or some other shell). Unfortunately, that's not possible for security reasons. Here's a snippet from /etc/sudoers : root ALL=(ALL) ALL awoolford ALL=(ALL) ALL, !SU, !SHELLS, !RESTRICT Can Ansible work in an environment like this? If so, what am I doing wrong?
Ansible: have sudo but no root I’d like to use Ansible to manage the configuration of a our Hadoop cluster (running Red Hat). I have sudo access and can manually ssh into the nodes to execute commands. However, I’m experiencing problems when I try to run Ansible modules to perform the same tasks. Although I have sudo access, I can’t become root . When I try to execute Ansible scripts that require elevated privileges, I get an error like this: Sorry, user awoolford is not allowed to execute '/bin/bash -c echo BECOME-SUCCESS- […] /usr/bin/python /tmp/ansible-tmp-1446662360.01-231435525506280/copy' as awoolford on [some_hadoop_node]. Looking through the documentation , I thought that the become_allow_same_user property might resolve this, and so I added the following to ansible.cfg : [privilege_escalation] become_allow_same_user=yes Unfortunately, it didn't work. This post suggests that I need permissions to sudo /bin/sh (or some other shell). Unfortunately, that's not possible for security reasons. Here's a snippet from /etc/sudoers : root ALL=(ALL) ALL awoolford ALL=(ALL) ALL, !SU, !SHELLS, !RESTRICT Can Ansible work in an environment like this? If so, what am I doing wrong?
redhat, ansible
7
4,715
4
https://stackoverflow.com/questions/33529850/ansible-have-sudo-but-no-root
33,151,503
epoll_ctl() failed: No such file or directory [errno = 2]
Recently updated the Linux kernel from 2.6.18 to 2.6.32, and an existing application started erroring out with the following error message: epoll_ctl() failed: No such file or directory [errno = 2]. I did read through the Linux man page on epoll_ctl but couldn't make much sense of it. I am trying to understand what the possible cause of this could be. Thanks
epoll_ctl() failed: No such file or directory [errno = 2] Recently updated the Linux kernel from 2.6.18 to 2.6.32, and an existing application started erroring out with the following error message: epoll_ctl() failed: No such file or directory [errno = 2]. I did read through the Linux man page on epoll_ctl but couldn't make much sense of it. I am trying to understand what the possible cause of this could be. Thanks
linux, redhat, epoll
7
4,128
1
https://stackoverflow.com/questions/33151503/epoll-ctl-failed-no-such-file-or-directory-errno-2
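Context for the error above: epoll_ctl() returns ENOENT (errno 2) when EPOLL_CTL_MOD or EPOLL_CTL_DEL is used on a file descriptor that was never added to the epoll instance (or has already been removed from it), so an add/modify ordering bug that the older kernel happened to tolerate is the usual suspect. A small illustration of the same errno through Python's standard-library wrapper around the kernel epoll interface:

    import errno
    import select
    import socket

    ep = select.epoll()
    s = socket.socket()

    try:
        # modifying an fd that was never registered -> epoll_ctl fails with ENOENT
        ep.modify(s.fileno(), select.EPOLLIN)
    except OSError as e:
        print(e.errno == errno.ENOENT, e)   # True [Errno 2] No such file or directory

    ep.register(s.fileno(), select.EPOLLIN)                   # add first...
    ep.modify(s.fileno(), select.EPOLLIN | select.EPOLLOUT)   # ...then modify succeeds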
25,924,313
What is the official way to install Haskell Platform 2014, from source, on Red Hat?
I am trying to install Haskell Platform 2014.2.0.0 from source on Red Hat Enterprise Linux 6.5. I have a functional install of Haskell Platform 2012.4.0.0 and GHC 7.4.2 from two years ago, plus a recently-installed Haskell Platform 2013.2.0.0 and GHC 7.6.3 from JustHub. I've built GHC 7.8.3 from source, but it keeps coming up with seven failures in the test suite. I have no idea if these test failures are innocuous or not. (The test failures are not relevant to my question, but they may become significant later.) I unpack the source tarball of 2014.2.0.0, read the README. It says that the way to build this iteration of Haskell is with a shell script, which is invoked: ./platform.sh $PATH_TO_GHC_BINDIST_TARBALL I don't have a GHC binary distribution tarball. So far as I am able to tell, there is no binary distribution tarball of GHC 7.8.3 for any version of Red Hat Enterprise Linux. I have a built GHC 7.8.3. How do I tell platform.sh -- or whatever is underneath it -- that there is no tarball, and it should just use what's in $PATH? Alternately, how do I pack up my existing install of GHC 7.8.3 so that platform.sh will accept it? The built GHC does not have a 'cabal' command, so the cabal commands in platform.sh are falling back to $PATH, which I can configure to be either of the other installed versions (2013.2/7.6.3 or 2012.4/7.4.2). It doesn't seem to make a difference: neither one recognizes 'cabal --sandbox'. Both result in complaints that I should run 'cd hptool ; cabal install --only-dependencies', which I've done, repeatedly. platform.sh never gets past that point. If I run the commands in platform.sh by hand, I get to 'cd hptool; cabal build', which errors out: "cabal-1.16.0.2: Run the 'configure' command first.". But there is no 'configure' command available in the hptool directory. I'm now stuck. How do I build Haskell Platform 2014 on RHEL 6?
What is the official way to install Haskell Platform 2014, from source, on Red Hat? I am trying to install Haskell Platform 2014.2.0.0 from source on Red Hat Enterprise Linux 6.5. I have a functional install of Haskell Platform 2012.4.0.0 and GHC 7.4.2 from two years ago, plus a recently-installed Haskell Platform 2013.2.0.0 and GHC 7.6.3 from JustHub. I've built GHC 7.8.3 from source, but it keeps coming up with seven failures in the test suite. I have no idea if these test failures are innocuous or not. (The test failures are not relevant to my question, but they may become significant later.) I unpack the source tarball of 2014.2.0.0, read the README. It says that the way to build this iteration of Haskell is with a shell script, which is invoked: ./platform.sh $PATH_TO_GHC_BINDIST_TARBALL I don't have a GHC binary distribution tarball. So far as I am able to tell, there is no binary distribution tarball of GHC 7.8.3 for any version of Red Hat Enterprise Linux. I have a built GHC 7.8.3. How do I tell platform.sh -- or whatever is underneath it -- that there is no tarball, and it should just use what's in $PATH? Alternately, how do I pack up my existing install of GHC 7.8.3 so that platform.sh will accept it? The built GHC does not have a 'cabal' command, so the cabal commands in platform.sh are falling back to $PATH, which I can configure to be either of the other installed versions (2013.2/7.6.3 or 2012.4/7.4.2). It doesn't seem to make a difference: neither one recognizes 'cabal --sandbox'. Both result in complaints that I should run 'cd hptool ; cabal install --only-dependencies', which I've done, repeatedly. platform.sh never gets past that point. If I run the commands in platform.sh by hand, I get to 'cd hptool; cabal build', which errors out: "cabal-1.16.0.2: Run the 'configure' command first.". But there is no 'configure' command available in the hptool directory. I'm now stuck. How do I build Haskell Platform 2014 on RHEL 6?
haskell, installation, redhat, rhel, platform
7
1,592
2
https://stackoverflow.com/questions/25924313/what-is-the-official-way-to-install-haskell-platform-2014-from-source-on-red-h
23,418,223
"Missing separate debuginfos" in non-root account
I have the same problem as reported here: Missing separate debuginfos, use: debuginfo-install glibc-2.12-1.47.el6_2.9.i686 libgcc-4.4.6-3.el6.i686 libstdc++-4.4.6-3.el6.i686 However, I am not the root user so I can't just run debuginfo-install ... . I was wondering if there's a relatively easy way for me to get these libraries and add a Path to them in my home directory without using a root account.
"Missing separate debuginfos" in non-root account I have the same problem as reported here: Missing separate debuginfos, use: debuginfo-install glibc-2.12-1.47.el6_2.9.i686 libgcc-4.4.6-3.el6.i686 libstdc++-4.4.6-3.el6.i686 However, I am not the root user so I can't just run debuginfo-install ... . I was wondering if there's a relatively easy way for me to get these libraries and add a Path to them in my home directory without using a root account.
gdb, redhat
7
1,316
1
https://stackoverflow.com/questions/23418223/missing-separate-debuginfos-in-non-root-account
17,266,261
gedit unresponsive how to save my file?
I have created a document and have not saved it yet. gedit has become unresponsive. Is there any way I can save or get the content of my file before killing the process?
gedit unresponsive how to save my file? I have created a document and have not saved it yet. gedit has become unresponsive. Is there any way I can save or get the content of my file before killing the process?
linux, redhat, gedit
7
1,586
2
https://stackoverflow.com/questions/17266261/gedit-unresponsive-how-to-save-my-file
1,977,306
Python Module To Detect Linux Distro Version
Is there an existing Python module that can be used to detect which Linux distro and which version of the distro is currently installed? For example: RedHat Enterprise 5, Fedora 11, Suse Enterprise 11, etc. I can make my own module by parsing various files like /etc/redhat-release but I was wondering if a module already exists. Cheers, Ivan
Python Module To Detect Linux Distro Version Is there an existing Python module that can be used to detect which Linux distro and which version of the distro is currently installed? For example: RedHat Enterprise 5, Fedora 11, Suse Enterprise 11, etc. I can make my own module by parsing various files like /etc/redhat-release but I was wondering if a module already exists. Cheers, Ivan
python, redhat, suse
6
8,414
4
https://stackoverflow.com/questions/1977306/python-module-to-detect-linux-distro-version
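Today the usual answers to the question above are the third-party distro package or reading /etc/os-release directly; the old platform.linux_distribution() helper existed for this but was removed in Python 3.8. A dependency-free sketch that also falls back to /etc/redhat-release on older Red Hat systems that predate /etc/os-release:

    import os

    def linux_distribution():
        # /etc/os-release is the systemd-era standard on RHEL, Fedora, SUSE, etc.
        if os.path.exists("/etc/os-release"):
            info = {}
            with open("/etc/os-release") as f:
                for line in f:
                    if "=" in line:
                        key, _, value = line.rstrip().partition("=")
                        info[key] = value.strip('"')
            return info.get("NAME", "unknown"), info.get("VERSION_ID", "")
        # older Red Hat style systems only have the release file
        if os.path.exists("/etc/redhat-release"):
            return open("/etc/redhat-release").read().strip(), ""
        return "unknown", ""

    print(linux_distribution())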
141,707
How to set CPU load on a Red Hat Linux box?
I have a RHEL box that I need to put under a moderate and variable amount of CPU load (50%-75%). What is the best way to go about this? Is there a program that can do this that I am not aware of? I am happy to write some C code to make this happen, I just don't know what system calls will help.
How to set CPU load on a Red Hat Linux box? I have a RHEL box that I need to put under a moderate and variable amount of CPU load (50%-75%). What is the best way to go about this? Is there a program that can do this that I am not aware of? I am happy to write some C code to make this happen, I just don't know what system calls will help.
linux, load, redhat, cpu-cycles
6
19,937
10
https://stackoverflow.com/questions/141707/how-to-set-cpu-load-on-a-red-hat-linux-box
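One way to answer the question above without any special system calls is a duty-cycle busy loop: spin for a fraction of every short period and sleep for the rest, with one worker per core. A sketch follows; the 60% target and 100 ms period are arbitrary example values, not requirements from the question.

    import time
    from multiprocessing import Process, cpu_count

    def burn(target=0.60, period=0.1):
        # busy-spin for target*period seconds, then sleep the remainder,
        # which averages out to roughly `target` utilisation on one core
        while True:
            start = time.time()
            while time.time() - start < period * target:
                pass
            time.sleep(period * (1.0 - target))

    if __name__ == "__main__":
        workers = [Process(target=burn, args=(0.60,)) for _ in range(cpu_count())]
        for w in workers:
            w.start()
        for w in workers:
            w.join()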
10,572,035
Install GD Library on RedHat machine for twiki
My ultimate goal is to run a twiki website for my research group. I have space on RedHat server that is running Apache, etc., but upon which I do not have root access. Since I cannot install perl modules with the current permissions, I've decided to manually install a local version of perl. Got that working no problem. The following modules are required to get twiki to work: FreezeThaw - [URL] CGI::Session - [URL] Error - [URL] GD - [URL] HTML::Tree - [URL] Time-modules - [URL] I have installed FreezeThaw, CGI, Error, and it fails on GD with the following error: UNRECOVERABLE ERROR Could not find gdlib-config in the search path. Please install libgd 2.0.28 or higher. If you want to try to compile anyway, please rerun this script with the option --ignore_missing_gd. In searching for how to get around this newest obstacle, I found a previous SO question: How to install GD library with Strawberry Perl asked about installing this and the top answer suggested manually compiling gdlib. You'll note, however, that that link is broken. The base site: [URL] is basically down saying to go to the project's bitbucket page. So I got the tarball from that page and am trying to install it. The following problems occur when I follow the instructions included. README.TXT says: "If the sources have been fetched from CVS, run bootstrap.sh [options]." Running bootstrap.sh yields: configure.ac:64: warning: macro AM_ICONV' not found in library configure.ac:10: required directory ./config does not exist cp: cannot create regular file config/config.guess': No such file or directory configure.ac:11: installing config/config.guess' configure.ac:11: error while copying cp: cannot create regular file config/config.sub': No such file or directory configure.ac:11: installing config/config.sub' configure.ac:11: error while copying cp: cannot create regular file config/install-sh': No such file or directory configure.ac:28: installing config/install-sh' configure.ac:28: error while copying cp: cannot create regular file config/missing': No such file or directory configure.ac:28: installing config/missing' configure.ac:28: error while copying configure.ac:577: required file config/Makefile.in' not found configure.ac:577: required file config/gdlib-config.in' not found configure.ac:577: required file test/Makefile.in' not found Makefile.am:14: Libtool library used but LIBTOOL' is undefined Makefile.am:14: The usual way to define LIBTOOL' is to add AC_PROG_LIBTOOL' Makefile.am:14: to configure.ac' and run aclocal' and autoconf' again. Makefile.am:14: If AC_PROG_LIBTOOL' is in configure.ac', make sure Makefile.am:14: its definition is in aclocal's search path. cp: cannot create regular file config/depcomp': No such file or directory Makefile.am: installing config/depcomp' Makefile.am: error while copying Failed And it says I should also install the following 3rd party libraries: zlib, available from [URL] Data compression library libpng, available from [URL] Portable Network Graphics library; requires zlib FreeType 2.x, available from [URL] Free, high-quality, and portable font engine JPEG library, available from [URL] Portable JPEG compression/decompression library XPM, available from [URL] X Pixmap library Which I am ignoring for now. Switching to the generic instructions it says follow the advice in the INSTALL file; which says: "cd to the directory containing the package's source code and type ./configure to configure the package for your system." 
Which flat-out does not work: I've cd'ed into every directory of the tarball and running that command does nothing. So, trying to install twiki required me to install perl, which required me to install the perl modules: FreezeThaw, CGI, Error, HTML, Time-modules, and GD -- which itself required me to install gdlib -- which further suggested I install zlib, libpng, FreeType 2.x, JPEG library, and XPM. And of course, I'm stuck at the installing gdlib stage. My question is: what other process can possibly demean humanity to such a level? I cannot fathom the depths of cruelty that lay ahead of me as I dive ever deeper into this misery onion. Should I just end it all? Can meaning be brought from this madness? Will the sun come up tomorrow, and if so, does it even matter? But seriously, any suggestions on what to do differently/better would be much appreciated -- I can't remember what a child's laughter sounds like anymore.
Install GD Library on RedHat machine for twiki My ultimate goal is to run a twiki website for my research group. I have space on RedHat server that is running Apache, etc., but upon which I do not have root access. Since I cannot install perl modules with the current permissions, I've decided to manually install a local version of perl. Got that working no problem. The following modules are required to get twiki to work: FreezeThaw - [URL] CGI::Session - [URL] Error - [URL] GD - [URL] HTML::Tree - [URL] Time-modules - [URL] I have installed FreezeThaw, CGI, Error, and it fails on GD with the following error: UNRECOVERABLE ERROR Could not find gdlib-config in the search path. Please install libgd 2.0.28 or higher. If you want to try to compile anyway, please rerun this script with the option --ignore_missing_gd. In searching for how to get around this newest obstacle, I found a previous SO question: How to install GD library with Strawberry Perl asked about installing this and the top answer suggested manually compiling gdlib. You'll note, however, that that link is broken. The base site: [URL] is basically down saying to go to the project's bitbucket page. So I got the tarball from that page and am trying to install it. The following problems occur when I follow the instructions included. README.TXT says: "If the sources have been fetched from CVS, run bootstrap.sh [options]." Running bootstrap.sh yields: configure.ac:64: warning: macro AM_ICONV' not found in library configure.ac:10: required directory ./config does not exist cp: cannot create regular file config/config.guess': No such file or directory configure.ac:11: installing config/config.guess' configure.ac:11: error while copying cp: cannot create regular file config/config.sub': No such file or directory configure.ac:11: installing config/config.sub' configure.ac:11: error while copying cp: cannot create regular file config/install-sh': No such file or directory configure.ac:28: installing config/install-sh' configure.ac:28: error while copying cp: cannot create regular file config/missing': No such file or directory configure.ac:28: installing config/missing' configure.ac:28: error while copying configure.ac:577: required file config/Makefile.in' not found configure.ac:577: required file config/gdlib-config.in' not found configure.ac:577: required file test/Makefile.in' not found Makefile.am:14: Libtool library used but LIBTOOL' is undefined Makefile.am:14: The usual way to define LIBTOOL' is to add AC_PROG_LIBTOOL' Makefile.am:14: to configure.ac' and run aclocal' and autoconf' again. Makefile.am:14: If AC_PROG_LIBTOOL' is in configure.ac', make sure Makefile.am:14: its definition is in aclocal's search path. cp: cannot create regular file config/depcomp': No such file or directory Makefile.am: installing config/depcomp' Makefile.am: error while copying Failed And it says I should also install the following 3rd party libraries: zlib, available from [URL] Data compression library libpng, available from [URL] Portable Network Graphics library; requires zlib FreeType 2.x, available from [URL] Free, high-quality, and portable font engine JPEG library, available from [URL] Portable JPEG compression/decompression library XPM, available from [URL] X Pixmap library Which I am ignoring for now. Switching to the generic instructions it says follow the advice in the INSTALL file; which says: "cd to the directory containing the package's source code and type ./configure to configure the package for your system." 
Which flat-out does not work: I've cd'ed into every directory of the tarball and running that command does nothing. So, trying to install twiki required me to install perl, which required me to install the perl modules: FreezeThaw, CGI, Error, HTML, Time-modules, and GD -- which itself required me to install gdlib -- which further suggested I install zlib, libpng, FreeType 2.x, JPEG library, and XPM. And of course, I'm stuck at the installing gdlib stage. My question is: what other process can possibly demean humanity to such a level? I cannot fathom the depths of cruelty that lay ahead of me as I dive ever deeper into this misery onion. Should I just end it all? Can meaning be brought from this madness? Will the sun come up tomorrow, and if so, does it even matter? But seriously, any suggestions on what to do differently/better would be much appreciated -- I can't remember what a child's laughter sounds like anymore.
perl, redhat, gdlib, twiki
6
15,865
2
https://stackoverflow.com/questions/10572035/install-gd-library-on-redhat-machine-for-twiki
53,605,666
Cannot run command in Docker container
I'm trying to execute bash in my docker container called "bind" via docker exec -it bind bash I'm getting the following error message: rpc error: code = 2 desc = oci runtime error: exec failed: container_linux.go:247: starting container process caused "process_linux.go:110: decoding init error from pipe caused \"read parent: connection reset by peer\"" There's nothing extraordinary in the logs. Restarting docker or the container seemed to have no effect. I also made sure that there's enough space on the hard drive. Starting any other binary in the container yields the same error. version info: docker --version: Docker version 1.13.1, build 07f3374/1.13.1 OS: cat /etc/redhat-release CentOS Linux release 7.6.1810 (Core) Any help would be appreciated.
Cannot run command in Docker container I'm trying to execute bash in my docker container called "bind" via docker exec -it bind bash I'm getting the following error message: rpc error: code = 2 desc = oci runtime error: exec failed: container_linux.go:247: starting container process caused "process_linux.go:110: decoding init error from pipe caused \"read parent: connection reset by peer\"" There's nothing extraordinary in the logs. Restarting docker or the container seemed to have no effect. I also made sure that there's enough space on the hard drive. Starting any other binary in the container yields the same error. version info: docker --version: Docker version 1.13.1, build 07f3374/1.13.1 OS: cat /etc/redhat-release CentOS Linux release 7.6.1810 (Core) Any help would be appreciated.
docker, centos, redhat
6
8,609
7
https://stackoverflow.com/questions/53605666/cannot-run-command-in-docker-container
51,255,738
Compile C++17 code on RedHat Linux Enterprise Developer Workstation
I've googled around and couldn't find a clear way to compile c++17 source code on a Red Hat Enterprise Linux 7.5 Developer Workstation. I've been able to successfully compile C++17 source code on Fedora using the following command: g++ -std=c++1z main.cpp -o main I tried the same thing on my Red Hat workstation and received a message that says g++ -std=c++1z is not a recognized command. Any help or guidance is appreciated.
Compile C++17 code on RedHat Linux Enterprise Developer Workstation I've googled around and couldn't find a clear way to compile c++17 source code on a Red Hat Enterprise Linux 7.5 Developer Workstation. I've been able to successfully compile C++17 source code on Fedora using the following command: g++ -std=c++1z main.cpp -o main I tried the same thing on my Red Hat workstation and received a message that says g++ -std=c++1z is not a recognized command. Any help or guidance is appreciated.
c++, linux, redhat, c++17
6
10,134
2
https://stackoverflow.com/questions/51255738/compile-c17-code-on-redhat-linux-enterprise-developer-workstation
30,617,357
Unable to connect to Postgres via PHP but can connect from command line and PgAdmin on different machine
I've had a quick search around (about 30 minutes) and tried a few bits, but nothing seems to work. Also please note I'm no Linux expert (I can do most basic stuff, simple installs, configurations etc) so some of the config I have may be obviously wrong, but I just don't see it! (feel free to correct any of the configs below) The Setup I have a running instance of PostgreSQL 9.3 on a Red Hat Enterprise Linux Server release 7.1 (Maipo) box. It's also running SELinux and IPTables. IPTables config (added in 80, 443 and 5432.. and also 22, but that was done before...) # sample configuration for iptables service # you can edit this manually or use system-config-firewall # please do not ask us to add additional ports/services to this default configuration *filter :INPUT ACCEPT [0:0] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [0:0] -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT -A INPUT -p icmp -j ACCEPT -A INPUT -i lo -j ACCEPT -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT -A INPUT -m state --state NEW -m tcp -p tcp --dport 5432 -j ACCEPT -A INPUT -m state --state NEW -p tcp --dport 80 -j ACCEPT -A INPUT -m state --state NEW -p tcp --dport 443 -j ACCEPT -A INPUT -j REJECT --reject-with icmp-host-prohibited -A FORWARD -j REJECT --reject-with icmp-host-prohibited COMMIT PostgreSQL pg_hba.cong (deleted all comments) # TYPE DATABASE USER ADDRESS METHOD local all all ident host all all 127.0.0.1/32 md5 host all all ::1/128 md5 host all all 0.0.0.0/0 md5 postgresql.conf (only changed the listen address) listen_addresses = '*' Setup new users $ sudo -u postgres /usr/pgsql-9.3/bin/createuser -s "pgadmin" $ sudo -u postgres /usr/pgsql-9.3/bin/createuser "webuser" $ sudo -u postgres psql postgres=# ALTER ROLE "pgadmin" WITH PASSWORD 'weakpassword'; ALTER ROLE postgres=# ALTER ROLE "webuser" WITH PASSWORD 'anotherweakpassword'; ALTER ROLE postgres=# \q Test connection psql -U [pgadmin|webuser] -h [localhost|127.0.0.1|hostname] -W postgres Password for user [pgadmin|webuser]: [weakpassword|anotherweakpassword] psql (9.3.7) Type "help" for help. postgres=# \q As you can see I tested 127.0.0.1, localhost and the hostname on the command line to make sure I could connect use all three identifiers with both different accounts. I've also connected using PgAdmin from my windows box, and it connects using the hostname and ip address using both users. The problem... 
The problem comes when I try to connect from PHP via Apache (it doesn't happen if I run the same script on the command line) PHP Test Script <?php error_reporting( E_ALL ); ini_set('display_errors', '1'); $conn1 = pg_connect("host='localhost' port='5432' user='pgadmin' password='weakpassword' dbname='postgres'"); $conn2 = pg_connect("host='127.0.0.1' port='5432' user='pgadmin' password='weakpassword' dbname='postgres'"); $conn3 = pg_connect("host='localhost' port='5432' user='webuser' password='anotherweakpassword' dbname='postgres'"); $conn4 = pg_connect("host='127.0.0.1' port='5432' user='webuser' password='anotherweakpassword' dbname='postgres'"); $status1 = pg_connection_status( $conn1 ); $status2 = pg_connection_status( $conn2 ); $status3 = pg_connection_status( $conn3 ); $status4 = pg_connection_status( $conn4 ); # Check connection if ( $status1 === false || $status1 === PGSQL_CONNECTION_BAD || $status2 === false || $status2 === PGSQL_CONNECTION_BAD || $status3 === false || $status3 === PGSQL_CONNECTION_BAD || $status4 === false || $status4 === PGSQL_CONNECTION_BAD ) { throw new Exception("I'm broken"); } # Do a query $res1 = pg_query( $conn1, "SELECT * FROM pg_type LIMIT 1" ); $res2 = pg_query( $conn2, "SELECT * FROM pg_type LIMIT 1" ); $res3 = pg_query( $conn3, "SELECT * FROM pg_type LIMIT 1" ); $res4 = pg_query( $conn4, "SELECT * FROM pg_type LIMIT 1" ); # Test one result. $row1 = pg_fetch_row($res1); $row2 = pg_fetch_row($res2); $row3 = pg_fetch_row($res3); $row4 = pg_fetch_row($res4); echo $row1[0] . "\n"; echo $row2[0] . "\n"; echo $row3[0] . "\n"; echo $row4[0] . "\n"; On the command line I get the following output (as expected) bool bool bool bool But in the browser I get the following Warning: pg_connect(): Unable to connect to PostgreSQL server: could not connect to server: Permission denied Is the server running on host "localhost" (::1) and accepting TCP/IP connections on port 5432? could not connect to server: Permission denied Is the server running on host "localhost" (127.0.0.1) and accepting TCP/IP connections on port 5432? in /var/www/html/test.php on line 6 Warning: pg_connect(): Unable to connect to PostgreSQL server: could not connect to server: Permission denied Is the server running on host "127.0.0.1" and accepting TCP/IP connections on port 5432? in /var/www/html/test.php on line 7 Warning: pg_connect(): Unable to connect to PostgreSQL server: could not connect to server: Permission denied Is the server running on host "localhost" (::1) and accepting TCP/IP connections on port 5432? could not connect to server: Permission denied Is the server running on host "localhost" (127.0.0.1) and accepting TCP/IP connections on port 5432? in /var/www/html/test.php on line 8 Warning: pg_connect(): Unable to connect to PostgreSQL server: could not connect to server: Permission denied Is the server running on host "127.0.0.1" and accepting TCP/IP connections on port 5432? in /var/www/html/test.php on line 9 Fatal error: Uncaught exception 'Exception' with message 'I'm broken' in /var/www/html/test.php:25 Stack trace: #0 {main} thrown in /var/www/html/test.php on line 25 I've got a feeling it's something to do with IPTables not allowing the connect when coming through Apache for some reason, but I'm stumped (I bet it's stupidly simple) I think that covers everything... Help me Stack Overflow, you're my only hope!
Unable to connect to Postgres via PHP but can connect from command line and PgAdmin on different machine I've had a quick search around (about 30 minutes) and tried a few bits, but nothing seems to work. Also please note I'm no Linux expert (I can do most basic stuff, simple installs, configurations etc) so some of the config I have may be obviously wrong, but I just don't see it! (feel free to correct any of the configs below) The Setup I have a running instance of PostgreSQL 9.3 on a Red Hat Enterprise Linux Server release 7.1 (Maipo) box. It's also running SELinux and IPTables. IPTables config (added in 80, 443 and 5432.. and also 22, but that was done before...) # sample configuration for iptables service # you can edit this manually or use system-config-firewall # please do not ask us to add additional ports/services to this default configuration *filter :INPUT ACCEPT [0:0] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [0:0] -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT -A INPUT -p icmp -j ACCEPT -A INPUT -i lo -j ACCEPT -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT -A INPUT -m state --state NEW -m tcp -p tcp --dport 5432 -j ACCEPT -A INPUT -m state --state NEW -p tcp --dport 80 -j ACCEPT -A INPUT -m state --state NEW -p tcp --dport 443 -j ACCEPT -A INPUT -j REJECT --reject-with icmp-host-prohibited -A FORWARD -j REJECT --reject-with icmp-host-prohibited COMMIT PostgreSQL pg_hba.cong (deleted all comments) # TYPE DATABASE USER ADDRESS METHOD local all all ident host all all 127.0.0.1/32 md5 host all all ::1/128 md5 host all all 0.0.0.0/0 md5 postgresql.conf (only changed the listen address) listen_addresses = '*' Setup new users $ sudo -u postgres /usr/pgsql-9.3/bin/createuser -s "pgadmin" $ sudo -u postgres /usr/pgsql-9.3/bin/createuser "webuser" $ sudo -u postgres psql postgres=# ALTER ROLE "pgadmin" WITH PASSWORD 'weakpassword'; ALTER ROLE postgres=# ALTER ROLE "webuser" WITH PASSWORD 'anotherweakpassword'; ALTER ROLE postgres=# \q Test connection psql -U [pgadmin|webuser] -h [localhost|127.0.0.1|hostname] -W postgres Password for user [pgadmin|webuser]: [weakpassword|anotherweakpassword] psql (9.3.7) Type "help" for help. postgres=# \q As you can see I tested 127.0.0.1, localhost and the hostname on the command line to make sure I could connect use all three identifiers with both different accounts. I've also connected using PgAdmin from my windows box, and it connects using the hostname and ip address using both users. The problem... 
The problem comes when I try to connect from PHP via Apache (it doesn't happen if I run the same script on the command line) PHP Test Script <?php error_reporting( E_ALL ); ini_set('display_errors', '1'); $conn1 = pg_connect("host='localhost' port='5432' user='pgadmin' password='weakpassword' dbname='postgres'"); $conn2 = pg_connect("host='127.0.0.1' port='5432' user='pgadmin' password='weakpassword' dbname='postgres'"); $conn3 = pg_connect("host='localhost' port='5432' user='webuser' password='anotherweakpassword' dbname='postgres'"); $conn4 = pg_connect("host='127.0.0.1' port='5432' user='webuser' password='anotherweakpassword' dbname='postgres'"); $status1 = pg_connection_status( $conn1 ); $status2 = pg_connection_status( $conn2 ); $status3 = pg_connection_status( $conn3 ); $status4 = pg_connection_status( $conn4 ); # Check connection if ( $status1 === false || $status1 === PGSQL_CONNECTION_BAD || $status2 === false || $status2 === PGSQL_CONNECTION_BAD || $status3 === false || $status3 === PGSQL_CONNECTION_BAD || $status4 === false || $status4 === PGSQL_CONNECTION_BAD ) { throw new Exception("I'm broken"); } # Do a query $res1 = pg_query( $conn1, "SELECT * FROM pg_type LIMIT 1" ); $res2 = pg_query( $conn2, "SELECT * FROM pg_type LIMIT 1" ); $res3 = pg_query( $conn3, "SELECT * FROM pg_type LIMIT 1" ); $res4 = pg_query( $conn4, "SELECT * FROM pg_type LIMIT 1" ); # Test one result. $row1 = pg_fetch_row($res1); $row2 = pg_fetch_row($res2); $row3 = pg_fetch_row($res3); $row4 = pg_fetch_row($res4); echo $row1[0] . "\n"; echo $row2[0] . "\n"; echo $row3[0] . "\n"; echo $row4[0] . "\n"; On the command line I get the following output (as expected) bool bool bool bool But in the browser I get the following Warning: pg_connect(): Unable to connect to PostgreSQL server: could not connect to server: Permission denied Is the server running on host "localhost" (::1) and accepting TCP/IP connections on port 5432? could not connect to server: Permission denied Is the server running on host "localhost" (127.0.0.1) and accepting TCP/IP connections on port 5432? in /var/www/html/test.php on line 6 Warning: pg_connect(): Unable to connect to PostgreSQL server: could not connect to server: Permission denied Is the server running on host "127.0.0.1" and accepting TCP/IP connections on port 5432? in /var/www/html/test.php on line 7 Warning: pg_connect(): Unable to connect to PostgreSQL server: could not connect to server: Permission denied Is the server running on host "localhost" (::1) and accepting TCP/IP connections on port 5432? could not connect to server: Permission denied Is the server running on host "localhost" (127.0.0.1) and accepting TCP/IP connections on port 5432? in /var/www/html/test.php on line 8 Warning: pg_connect(): Unable to connect to PostgreSQL server: could not connect to server: Permission denied Is the server running on host "127.0.0.1" and accepting TCP/IP connections on port 5432? in /var/www/html/test.php on line 9 Fatal error: Uncaught exception 'Exception' with message 'I'm broken' in /var/www/html/test.php:25 Stack trace: #0 {main} thrown in /var/www/html/test.php on line 25 I've got a feeling it's something to do with IPTables not allowing the connect when coming through Apache for some reason, but I'm stumped (I bet it's stupidly simple) I think that covers everything... Help me Stack Overflow, you're my only hope!
apache, postgresql, redhat, iptables, postgresql-9.3
6
7,790
1
https://stackoverflow.com/questions/30617357/unable-to-connect-to-postgres-via-php-but-can-connect-from-command-line-and-pgad
31,349,438
Can't install modules 'os' and 'os.path'
I am trying to install the 'os' module and the 'os.path' module on a Red Hat machine. I tried the following commands: pip install os yum install os But I keep getting the following error: Could not find a version that satisfies the requirement os.path (from versions: ) No matching distribution found for os.path I am able to install other modules using the aforementioned commands but not able to install these. I need to install both os and os.path. Using Python version 3.4.3
Can't install modules 'os' and 'os.path' I am trying to install the 'os' module and the 'os.path' module on a Red Hat machine. I tried the following commands: pip install os yum install os But I keep getting the following error: Could not find a version that satisfies the requirement os.path (from versions: ) No matching distribution found for os.path I am able to install other modules using the aforementioned commands but not able to install these. I need to install both os and os.path. Using Python version 3.4.3
python, python-3.x, redhat
6
48,895
2
https://stackoverflow.com/questions/31349438/cant-install-modules-os-and-os-path
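For the entry above: os and os.path are part of the Python standard library, so there is nothing to install with pip or yum; the error simply means the package index has no package by that name. They are importable as soon as the interpreter runs:

    import os
    import os.path   # ships with every Python; no pip/yum package needed

    print(os.name)                                  # 'posix' on Red Hat
    print(os.path.join("/etc", "redhat-release"))   # '/etc/redhat-release'
    print(os.path.exists("/etc/redhat-release"))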
48,672,892
Update yum package using localinstall
If a package is installed using yum localinstall like this: yum -y localinstall --nogpgcheck some-package-1.0.0.rpm And now, if I try to run: yum -y localinstall --nogpgcheck some-package-2.0.0.rpm Will it replace the entire old version with the new one or does it maintain both the versions?
Update yum package using localinstall If a package is installed using yum localinstall like this: yum -y localinstall --nogpgcheck some-package-1.0.0.rpm And now, if I try to run: yum -y localinstall --nogpgcheck some-package-2.0.0.rpm Will it replace the entire old version with the new one or does it maintain both the versions?
linux, redhat, yum, rhel
6
28,712
2
https://stackoverflow.com/questions/48672892/update-yum-package-using-localinstall
51,110,903
Get the error "Failed to execute operation: Bad message" when enabling tomcat.service
When I execute the following command on a RedHat 7.4 x64 terminal (just enabling tomcat.service): sudo systemctl enable tomcat.service I get the following error message: Failed to execute operation: Bad message Do you have any idea or suggestion about what I can check further? Thank you.
Get the error "Failed to execute operation: Bad message" when enabling tomcat.service When I execute the following command on a RedHat 7.4 x64 terminal (just enabling tomcat.service): sudo systemctl enable tomcat.service I get the following error message: Failed to execute operation: Bad message Do you have any idea or suggestion about what I can check further? Thank you.
linux, redhat, tomcat8, systemd
6
19,587
4
https://stackoverflow.com/questions/51110903/get-the-error-failed-to-execute-operation-bad-message-when-enable-tomcat-serv
50,400,791
Install unixODBC >= 2.3.1 on Linux Redhat/CentOS for msodbcsql17
I am trying to install msodbcsql17 on AWS EC2 with CentOS/RedHat (Linux). These are the steps I have followed, from Microsoft ( LINK ): sudo su #Download appropriate package for the OS version #Choose only ONE of the following, corresponding to your OS version #RedHat Enterprise Server 6 curl [URL] > /etc/yum.repos.d/mssql-release.repo #RedHat Enterprise Server 7 curl [URL] > /etc/yum.repos.d/mssql-release.repo exit sudo yum remove unixODBC-utf16 unixODBC-utf16-devel #to avoid conflicts sudo ACCEPT_EULA=Y yum install msodbcsql17 # optional: for bcp and sqlcmd sudo ACCEPT_EULA=Y yum install mssql-tools echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bash_profile echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc source ~/.bashrc # optional: for unixODBC development headers sudo yum install unixODBC-devel The instructions work until the installation of msodbcsql17. I get the following error message: Error: Package: msodbcsql17 (packages-microsoft-com-prod) Requires: unixODBC >= 2.3.1 Available: unixODBC-2.2.14-14.7.amzn1.i686 (amzn-main) unixODBC = 2.2.14-14.7.amzn1 I think the problem is that the maximum available version of unixODBC is less than 2.3.1, but how can I install msodbcsql17 to connect with Microsoft?
Install unixODBC >= 2.3.1 on Linux Redhat/CentOS for msodbcsql17 I am trying to install msodbcsql17 on AWS EC2 with CentOS/RedHat (Linux). These are the steps I have followed, from Microsoft ( LINK ): sudo su #Download appropriate package for the OS version #Choose only ONE of the following, corresponding to your OS version #RedHat Enterprise Server 6 curl [URL] > /etc/yum.repos.d/mssql-release.repo #RedHat Enterprise Server 7 curl [URL] > /etc/yum.repos.d/mssql-release.repo exit sudo yum remove unixODBC-utf16 unixODBC-utf16-devel #to avoid conflicts sudo ACCEPT_EULA=Y yum install msodbcsql17 # optional: for bcp and sqlcmd sudo ACCEPT_EULA=Y yum install mssql-tools echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bash_profile echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc source ~/.bashrc # optional: for unixODBC development headers sudo yum install unixODBC-devel The instructions work until the installation of msodbcsql17. I get the following error message: Error: Package: msodbcsql17 (packages-microsoft-com-prod) Requires: unixODBC >= 2.3.1 Available: unixODBC-2.2.14-14.7.amzn1.i686 (amzn-main) unixODBC = 2.2.14-14.7.amzn1 I think the problem is that the maximum available version of unixODBC is less than 2.3.1, but how can I install msodbcsql17 to connect with Microsoft?
amazon-ec2, centos, redhat, pyodbc, unixodbc
6
15,772
2
https://stackoverflow.com/questions/50400791/install-unixodbc-2-3-1-on-linux-redhat-centos-for-msodbcsql17
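A detail worth noting about the error above: the amzn1 package names show the instance is running Amazon Linux, whose stock repository only offers unixODBC 2.2.14, so a common fix is to build unixODBC 2.3.x from source (or use a genuine RHEL/CentOS image) before installing msodbcsql17. Once the driver is in place, a connection from Python looks roughly like the sketch below; the server, database and credentials are placeholders.

    import pyodbc  # pip install pyodbc (needs unixODBC >= 2.3.1 plus msodbcsql17)

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=myserver.example.com,1433;"          # placeholder host and port
        "DATABASE=mydb;"
        "UID=myuser;PWD=mypassword"
    )
    cursor = conn.cursor()
    cursor.execute("SELECT @@VERSION")
    print(cursor.fetchone()[0])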
18,832,802
How can I check if ncurses is installed?
How can I check if ncurses is installed in a Red-Hat Linux OS? One solution is to use dpkg -l '*ncurses*' | grep '^ii' But I don't even have the dpkg package in my system, and since I don't have the administrative rights, I can't install it.
How can I check if ncurses is installed? How can I check if ncurses is installed in a Red-Hat Linux OS? One solution is to use dpkg -l '*ncurses*' | grep '^ii' But I don't even have the dpkg package in my system, and since I don't have the administrative rights, I can't install it.
linux, redhat
6
16,356
2
https://stackoverflow.com/questions/18832802/how-can-i-check-if-ncurses-is-installed
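Two unprivileged ways to answer the question above on a Red Hat box: ask the runtime linker whether the shared library is resolvable, or query the RPM database (both rpm -q and ldconfig lookups work without root). A small sketch of both; note that find_library can return None if neither ldconfig nor gcc is reachable from PATH.

    import ctypes.util
    import subprocess

    # ask the dynamic linker: returns e.g. 'libncurses.so.5' or None
    print(ctypes.util.find_library("ncurses"))
    print(ctypes.util.find_library("ncursesw"))

    # query the RPM database; exit status 0 means the package is installed
    print(subprocess.call(["rpm", "-q", "ncurses", "ncurses-devel"]))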
74,280,824
Cannot Find libatomic.so.1
I'm trying to build snappy, but I end up getting the error: error while loading shared libraries: libatomic.so.1: cannot open shared object file: No such file or directory When I go look in /lib/gcc/x86_64-redhat-linux/8/ I do find a file libatomic.so which has the contents INPUT ( /usr/lib64/libatomic.so.1.2.0 ) then if I go looking in /usr/lib64/ only these files exist: libatomic_ops_gpl.so.1 libatomic_ops_gpl.so.1.1.2 libatomic_ops.so.1 libatomic_ops.so.1.1.1 I tried doing yum install libatomic_ops.x86_64 , but it says nothing to do. That is the only package that comes up when doing yum search libatomic . I'm confused about how to solve this issue. Thanks! For what it matters, this is a redhat 8.6 machine.
Cannot Find libatomic.so.1 I'm trying to build snappy, but I end up getting the error: error while loading shared libraries: libatomic.so.1: cannot open shared object file: No such file or directory When I go look in /lib/gcc/x86_64-redhat-linux/8/ I do find a file libatomic.so which has the contents INPUT ( /usr/lib64/libatomic.so.1.2.0 ) then if I go looking in /usr/lib64/ only these files exist: libatomic_ops_gpl.so.1 libatomic_ops_gpl.so.1.1.2 libatomic_ops.so.1 libatomic_ops.so.1.1.1 I tried doing yum install libatomic_ops.x86_64 , but it says nothing to do. That is the only package that comes up when doing yum search libatomic . I'm confused about how to solve this issue. Thanks! For what it matters, this is a redhat 8.6 machine.
gcc, cmake, redhat, atomic
6
20,802
1
https://stackoverflow.com/questions/74280824/cannot-find-libatomic-so-1
72,660,240
Moving over to red hat ubi-minimal
Bit of a newb question. I'm currently running off Red Hat UBI 8 and am looking to move to Red Hat 8 UBI-Minimal. My current Dockerfile has something like this in it: RUN groupadd -r -g 1000 myuser \ && useradd -r -u 1000 -g myuser -m -d /opt/myuser -s /bin/bash myuser RUN mkdir /deployments \ && chmod 755 /deployments \ && chown -R myuser /deployments I was looking into this more and at first I thought ubi-minimal might be a "rootless" container, but a simple test I ran on my local machine shows otherwise: docker run -p 8080:8080 -it myreg/redhat/ubi8/ubi-minimal That means I should be looking to replicate the above lines against ubi-minimal, but it seems like groupadd & useradd don't exist in that image. How can I replicate the Dockerfile lines above for the ubi-minimal image?
Moving over to red hat ubi-minimal Bit of a newb question. I'm currently running off Red Hat UBI 8 and am looking to move to Red Hat 8 UBI-Minimal. My current Dockerfile has something like this in it: RUN groupadd -r -g 1000 myuser \ && useradd -r -u 1000 -g myuser -m -d /opt/myuser -s /bin/bash myuser RUN mkdir /deployments \ && chmod 755 /deployments \ && chown -R myuser /deployments I was looking into this more and at first I thought ubi-minimal might be a "rootless" container, but a simple test I ran on my local machine shows otherwise: docker run -p 8080:8080 -it myreg/redhat/ubi8/ubi-minimal That means I should be looking to replicate the above lines against ubi-minimal, but it seems like groupadd & useradd don't exist in that image. How can I replicate the Dockerfile lines above for the ubi-minimal image?
docker, redhat, ubi
6
5,983
1
https://stackoverflow.com/questions/72660240/moving-over-to-red-hat-ubi-minimal
43,881,761
How can I install and run Docker CE on OpenSUSE Linux?
Since the "new" Docker release where CE and EE diverged from the single unified Docker, Docker doesn't officialy support or provide installation instructions for using CE on OpenSUSE, SLES or Redhat, those distros are EE-only. I find this to be a bit of a short-sighted decision on the part of Docker - CE should be available for all platforms that EE is available for. How can I install the latest version of Docker CE on OpenSUSE Tumbleweed (or similar distro with an RPM-based package manager) which only has support for Docker EE?
How can I install and run Docker CE on OpenSUSE Linux? Since the "new" Docker release where CE and EE diverged from the single unified Docker, Docker doesn't officially support or provide installation instructions for using CE on OpenSUSE, SLES or Redhat; those distros are EE-only. I find this to be a bit of a short-sighted decision on the part of Docker - CE should be available for all platforms that EE is available for. How can I install the latest version of Docker CE on OpenSUSE Tumbleweed (or a similar distro with an RPM-based package manager) which only has support for Docker EE?
linux, docker, redhat, opensuse, sles
6
9,052
1
https://stackoverflow.com/questions/43881761/how-can-i-install-and-run-docker-ce-on-opensuse-linux
37,588,417
Deploy Django project on RedHat
I'm trying to deploy my local Django project on my RedHat server. So I install all the libraries and dependencies that I needed (also mod_wsgi). So, I edit my project's settings and move my local project to the server. But I'm facing an issue: when I try to reach the URL of my project, I have the explorer view. I also edit the httpd.conf file: WSGIScriptAlias /var/www/html/virtualEnv/ /var/www/html/virtualEnv/ThirdPartyApplications/ThirdPartyApplications/wsgi.py WSGIPythonPath /var/www/html/virtualEnv/ThirdPartyApplications/:/var/www/html/virtualEnv/lib/python2.7/site-packages WSGIDaemonProcess [URL] python-path=/var/www/html/virtualEnv/ThirdPartyApplications/:/var/www/html/virtualEnv/lib/python2.7/site-packages WSGIProcessGroup [URL] <Directory /var/www/html/virtualEnv/ThirdPartyApplications/> <Files wsgi.py> Order deny,allow Allow from all </Files> </Directory> EDIT : @FlipperPA So far, I'm running this conf in my /etc/httpd/conf.d/djangoproject.conf : WSGISocketPrefix /var/run/wsgi NameVirtualHost *:448 Listen 448 ServerName [URL] ErrorLog /home/myuser/apache_errors.log WSGIDaemonProcess MyApp python-path=/var/www/html/MyApp:/var/www/html/MyApp/MyApp/lib/python2.7/site-packages WSGIProcessGroup MyApp WSGIScriptAlias /MyApp /home/user/MyApp/MyApp/wsgi.py Alias /static /var/www/html/MyApp/MyApp/static
Deploy Django project on RedHat I'm trying to deploy my local Django project on my RedHat server. So I install all the libraries and dependencies that I needed (also mod_wsgi). So, I edit my project's settings and move my local project to the server. But I'm facing an issue: when I try to reach the URL of my project, I have the explorer view. I also edit the httpd.conf file: WSGIScriptAlias /var/www/html/virtualEnv/ /var/www/html/virtualEnv/ThirdPartyApplications/ThirdPartyApplications/wsgi.py WSGIPythonPath /var/www/html/virtualEnv/ThirdPartyApplications/:/var/www/html/virtualEnv/lib/python2.7/site-packages WSGIDaemonProcess [URL] python-path=/var/www/html/virtualEnv/ThirdPartyApplications/:/var/www/html/virtualEnv/lib/python2.7/site-packages WSGIProcessGroup [URL] <Directory /var/www/html/virtualEnv/ThirdPartyApplications/> <Files wsgi.py> Order deny,allow Allow from all </Files> </Directory> EDIT : @FlipperPA So far, I'm running this conf in my /etc/httpd/conf.d/djangoproject.conf : WSGISocketPrefix /var/run/wsgi NameVirtualHost *:448 Listen 448 ServerName [URL] ErrorLog /home/myuser/apache_errors.log WSGIDaemonProcess MyApp python-path=/var/www/html/MyApp:/var/www/html/MyApp/MyApp/lib/python2.7/site-packages WSGIProcessGroup MyApp WSGIScriptAlias /MyApp /home/user/MyApp/MyApp/wsgi.py Alias /static /var/www/html/MyApp/MyApp/static
python, django, apache, mod-wsgi, redhat
6
2,530
2
https://stackoverflow.com/questions/37588417/deploy-django-project-on-redhat
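Two things commonly cause the "explorer view" symptom described above (Apache serving a directory listing instead of the app): the first argument of WSGIScriptAlias has to be a URL prefix such as / or /MyApp rather than a filesystem path, and wsgi.py has to be importable by Apache. For reference, a minimal Django wsgi.py sketch; the paths and MyApp names only mirror the placeholders already used in the question, not a real project.

    # MyApp/MyApp/wsgi.py -- the module mod_wsgi imports
    import os
    import sys

    # make the project importable when Apache, not manage.py, starts Python
    sys.path.insert(0, "/var/www/html/MyApp")                           # placeholder root

    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "MyApp.settings")   # placeholder module

    from django.core.wsgi import get_wsgi_application
    application = get_wsgi_application()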
6,138,592
Condition supplied in cron job returns &quot;No such file or directory&quot;
I am attempting to execute this code in a cron job: a=/home/mailmark/node/bin/forever list; if [ "$a" == "No forever processes running" ]; then forever start /api.js; fi The file in question, 'forever' contains this shebang: #!usr/bin/env node It returns this response: /usr/bin/env: node: No such file or directory But I have this code on the last line of the .bashrc file: export PATH=/home/mailmark/node/bin:$PATH What should I do to make my cron work?
Condition supplied in cron job returns &quot;No such file or directory&quot; I am attempting to execute this code in a cron job: a=/home/mailmark/node/bin/forever list; if [ "$a" == "No forever processes running" ]; then forever start /api.js; fi The file in question, 'forever' contains this shebang: #!usr/bin/env node It returns this response: /usr/bin/env: node: No such file or directory But I have this code on the last line of the .bashrc file: export PATH=/home/mailmark/node/bin:$PATH What should I do to make my cron work?
bash, cron, redhat
6
4,281
3
https://stackoverflow.com/questions/6138592/condition-supplied-in-cron-job-returns-no-such-file-or-directory
24,289,096
IO Error: The Network Adapter could not establish the connection - with Oracle 11gR2. Connecting with SQL developer
I have installed Oracle 11g ON a RedHat6 linux instance, by following all the steps mentioned in " [URL] " I am trying to connect to the database from a remote machine using the sql developer. But always ending up with - " IO Error: The Network Adapter could not establish the connection ". The parameters i am using are Username: sys as sysdba Password: <oracle password> Hostname: IP address of the server on which Oracle SQL is installed. Port: 1521 SID: testdb I have also created a listener.ora file at location - "/oracle/product/11.2.0/db_1/network/admin", as it was not present before. Whose contents are - SID_LIST_LISTENER = (SID_LIST = (SID_DESC = (SID_NAME = PLSExtProc) (ORACLE_HOME = /u01/oracle/product/11.2.0/db_1) (PROGRAM = extproc) ) ) LISTENER = (DESCRIPTION_LIST = (DESCRIPTION = (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC_FOR_TESTDB)) (ADDRESS = (PROTOCOL = TCP)(HOST = 173.39.238.15)(PORT = 1521)) ) ) DEFAULT_SERVICE_LISTENER = (TESTDB) I have posted this question on dba.stackexchange too. but i need to get this resolved as soon as possible. and need help. Hence posting it here too. Can you please tell me what i might be doing wrong. Thanks. EDIT the out put of "lsnrctl status" Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC_FOR_TESTDB))) TNS-12541: TNS:no listener TNS-12560: TNS:protocol adapter error TNS-00511: No listener Linux Error: 2: No such file or directory Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=173.39.238.15)(PORT=1521))) TNS-12541: TNS:no listener TNS-12560: TNS:protocol adapter error TNS-00511: No listener Linux Error: 111: Connection refused
IO Error: The Network Adapter could not establish the connection - with Oracle 11gR2. Connecting with SQL developer I have installed Oracle 11g ON a RedHat6 linux instance, by following all the steps mentioned in " [URL] " I am trying to connect to the database from a remote machine using the sql developer. But always ending up with - " IO Error: The Network Adapter could not establish the connection ". The parameters i am using are Username: sys as sysdba Password: <oracle password> Hostname: IP address of the server on which Oracle SQL is installed. Port: 1521 SID: testdb I have also created a listener.ora file at location - "/oracle/product/11.2.0/db_1/network/admin", as it was not present before. Whose contents are - SID_LIST_LISTENER = (SID_LIST = (SID_DESC = (SID_NAME = PLSExtProc) (ORACLE_HOME = /u01/oracle/product/11.2.0/db_1) (PROGRAM = extproc) ) ) LISTENER = (DESCRIPTION_LIST = (DESCRIPTION = (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC_FOR_TESTDB)) (ADDRESS = (PROTOCOL = TCP)(HOST = 173.39.238.15)(PORT = 1521)) ) ) DEFAULT_SERVICE_LISTENER = (TESTDB) I have posted this question on dba.stackexchange too. but i need to get this resolved as soon as possible. and need help. Hence posting it here too. Can you please tell me what i might be doing wrong. Thanks. EDIT the out put of "lsnrctl status" Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC_FOR_TESTDB))) TNS-12541: TNS:no listener TNS-12560: TNS:protocol adapter error TNS-00511: No listener Linux Error: 2: No such file or directory Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=173.39.238.15)(PORT=1521))) TNS-12541: TNS:no listener TNS-12560: TNS:protocol adapter error TNS-00511: No listener Linux Error: 111: Connection refused
sql, oracle-database, oracle11g, database-connection, redhat
6
42,934
2
https://stackoverflow.com/questions/24289096/io-error-the-network-adapter-could-not-establish-the-connection-with-oracle-1
428,920
Changing the owner of an existing process in Linux
I would like to start tomcat (Web Server) as a privileged user, and then bring it back to an unprivileged user once it has started. Is there a way to do this programmatically, or in general with Linux? Thanks.
Changing the owner of an existing process in Linux I would like to start tomcat (Web Server) as a privileged user, and then bring it back to an unprivileged user once it has started. Is there a way to do this programmatically, or in general with Linux? Thanks.
linux, tomcat, redhat
6
9,792
6
https://stackoverflow.com/questions/428920/changing-the-owner-of-an-existing-process-in-linux
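A running process cannot be handed to another owner from the outside; the usual pattern (used by wrappers such as jsvc for Tomcat) is for the process itself to acquire the privileged resources first and then call setgid()/setuid() to drop to an unprivileged account. The sketch below shows that pattern in Python, assuming the process starts as root; the port number and the "tomcat" account name are placeholders.

    import os
    import pwd
    import socket

    PORT = 80            # privileged port, binding it requires root
    RUN_AS = "tomcat"    # hypothetical unprivileged account

    def drop_privileges(username):
        """Permanently switch this process to an unprivileged uid/gid (gid before uid)."""
        pw = pwd.getpwnam(username)
        os.setgroups([])        # drop supplementary groups inherited from root
        os.setgid(pw.pw_gid)
        os.setuid(pw.pw_uid)

    # Do the privileged work first: bind the low port while still root...
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("0.0.0.0", PORT))
    sock.listen(5)

    # ...then give up root for the rest of the process lifetime.
    drop_privileges(RUN_AS)
    print("serving on port", PORT, "as uid", os.getuid())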
68,364,633
Failed to launch chrome [FATAL:zygote_host_impl_linux.cc(116)] No usable sandbox
Failed to launch chrome!\n[0702/102126.236473:FATAL:zygote_host_impl_linux.cc(116)] No usable sandbox! Update your kernel or see [URL] for more information on developing with the SUID sandbox. If you want to live dangerously and need an immediate workaround, you can try using --no-sandbox.\n#0 0x55e0286ccaf9 ... Core file will not be generated.\n\n\nTROUBLESHOOTING: [URL] at onClose (/home/ec2-user/credence/microservices/reporting-server/node_modules/puppeteer/lib/Launcher.js:342:14)\n at Interface.helper.addEventListener (/home/ec2-user/credence/microservices/reporting-server/node_modules/puppeteer/lib/Launcher.js:331:50)\n at Interface.emit (events.js:203:15)\n at Interface.close (readline.js:397:8)\n at Socket.onend (readline.js:173:10)\n at Socket.emit (events.js:203:15)\n at endReadableNT (_stream_readable.js:1143:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)
Failed to launch chrome [FATAL:zygote_host_impl_linux.cc(116)] No usable sandbox Failed to launch chrome!\n[0702/102126.236473:FATAL:zygote_host_impl_linux.cc(116)] No usable sandbox! Update your kernel or see [URL] for more information on developing with the SUID sandbox. If you want to live dangerously and need an immediate workaround, you can try using --no-sandbox.\n#0 0x55e0286ccaf9 ... Core file will not be generated.\n\n\nTROUBLESHOOTING: [URL] at onClose (/home/ec2-user/credence/microservices/reporting-server/node_modules/puppeteer/lib/Launcher.js:342:14)\n at Interface.helper.addEventListener (/home/ec2-user/credence/microservices/reporting-server/node_modules/puppeteer/lib/Launcher.js:331:50)\n at Interface.emit (events.js:203:15)\n at Interface.close (readline.js:397:8)\n at Socket.onend (readline.js:173:10)\n at Socket.emit (events.js:203:15)\n at endReadableNT (_stream_readable.js:1143:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)
node.js, linux, puppeteer, redhat
6
12,182
1
https://stackoverflow.com/questions/68364633/failed-to-launch-chrome-fatalzygote-host-impl-linux-cc116-no-usable-sandbox
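The error message above names the two ways out: fix the kernel / SUID sandbox so Chrome can sandbox itself, or pass --no-sandbox and accept the reduced isolation. In Node puppeteer that is puppeteer.launch({args: ['--no-sandbox']}); the sketch below shows the equivalent flag using the third-party pyppeteer port in Python (an assumption here, pip install pyppeteer), purely to illustrate where the flag goes.

    import asyncio
    from pyppeteer import launch   # third-party package, assumed installed

    async def render(url):
        # --no-sandbox disables Chrome's sandbox; prefer enabling user namespaces
        # or the SUID sandbox on the host and dropping this flag when possible.
        browser = await launch(args=["--no-sandbox", "--disable-setuid-sandbox"])
        page = await browser.newPage()
        await page.goto(url)
        print(await page.title())
        await browser.close()

    asyncio.get_event_loop().run_until_complete(render("https://example.com"))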
4,200,970
How can I count instructions executed on Red Hat Enterprise Linux (x86-64)?
I want to find out how many x86-64 instructions are executed during a given run of a program running on Red Hat Enterprise Linux. I know I can get this information from valgrind but the slowdown is considerable. I also know that we are using Intel Core 2 Quad CPUs (model Q6700) which have hardware performance counters built in. But I don't know of any way to get access to the total number of instructions executed from within a C program.
How can I count instructions executed on Red Hat Enterprise Linux (x86-64)? I want to find out how many x86-64 instructions are executed during a given run of a program running on Red Hat Enterprise Linux. I know I can get this information from valgrind but the slowdown is considerable. I also know that we are using Intel Core 2 Quad CPUs (model Q6700) which have hardware performance counters built in. But I don't know of any way to get access to the total number of instructions executed from within a C program.
linux, x86-64, redhat, performancecounter
6
2,851
4
https://stackoverflow.com/questions/4200970/how-can-i-count-instructions-executed-on-red-hat-enterprise-linux-x86-64
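For in-process access the usual routes are the perf_event_open(2) syscall or a wrapper library such as PAPI, both of which read the same hardware counters with far less overhead than valgrind. As a lighter-weight illustration, the sketch below just wraps the perf tool around a command and parses the retired-instruction count; it assumes the perf utility is installed and that unprivileged counter access is allowed on the machine.

    import subprocess
    import sys

    def count_instructions(cmd):
        """Run cmd under 'perf stat' and return the user-space retired-instruction count."""
        result = subprocess.run(
            ["perf", "stat", "-x", ",", "-e", "instructions:u"] + cmd,
            stderr=subprocess.PIPE, text=True, check=True,
        )
        # With '-x ,' perf writes CSV lines such as "123456,,instructions:u,..." to stderr.
        for line in result.stderr.splitlines():
            first = line.split(",")[0].strip()
            if "instructions" in line and first.isdigit():
                return int(first)
        raise RuntimeError("no instruction count found in perf output")

    print(count_instructions(sys.argv[1:] or ["/bin/true"]))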
47,039,896
Register to subscription manager failed in RHEL
I'm trying to register with subscription-manager but it gives me the following error. [root@localhost rhsm]# sudo subscription-manager register Registering to: subscription.rhsm.redhat.com:443/subscription Username: redhat Password: Unable to verify server's identity: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579) I tried every possible solution from the Red Hat forums but none of them helped. Any help would be appreciated. Thanks!
Register to subscription manager failed in RHEL I'm trying to register with subscription-manager but it gives me the following error. [root@localhost rhsm]# sudo subscription-manager register Registering to: subscription.rhsm.redhat.com:443/subscription Username: redhat Password: Unable to verify server's identity: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579) I tried every possible solution from the Red Hat forums but none of them helped. Any help would be appreciated. Thanks!
redhat
6
19,731
4
https://stackoverflow.com/questions/47039896/register-to-subscription-manager-failed-in-rhel
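CERTIFICATE_VERIFY_FAILED against subscription.rhsm.redhat.com is very often caused by a TLS-intercepting corporate proxy or an outdated CA bundle rather than by Red Hat's servers. A useful first step is to look at the certificate the client actually receives; the sketch below fetches it without verification so its issuer can be inspected (for example with openssl x509 -noout -issuer). It is a diagnostic only, not a fix.

    import socket
    import ssl

    HOST, PORT = "subscription.rhsm.redhat.com", 443

    # Deliberately skip verification so we can see WHICH certificate is presented;
    # if the issuer is a corporate proxy CA, the failure comes from interception.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    with socket.create_connection((HOST, PORT), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            der = tls.getpeercert(binary_form=True)

    print(ssl.DER_cert_to_PEM_cert(der))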
33,257,448
Subversion 1.9.2: Invalid filesystem format option 'addressing logical'
I installed SVN 1.9.2 on a UAT Linux Red Hat 6.6 machine from a tarball and ran svnserve as a daemon, and everything went fine. Then I created a repository and configured the repo for client access, but when I tried to access the repository using TortoiseSVN I could not, and got the error "db/format contains invalid filesystem format option addressing logical". Before installing in UAT, I tried on a TEST server and there I could install and access the repository with no issues. I am using Red Hat 6.6 Server. Has anyone seen this issue? I am stuck, and we have the production installation next week. Edit: I had changed into the directory where SVN was installed and started svnserve from there, but the svnserve that actually started was the one shipped with the Red Hat OS. Invoking svnserve with its full path solved the issue.
Subversion 1.9.2: Invalid filesystem format option 'addressing logical' I installed SVN 1.9.2 on a UAT Linux Red Hat 6.6 machine from a tarball and ran svnserve as a daemon, and everything went fine. Then I created a repository and configured the repo for client access, but when I tried to access the repository using TortoiseSVN I could not, and got the error "db/format contains invalid filesystem format option addressing logical". Before installing in UAT, I tried on a TEST server and there I could install and access the repository with no issues. I am using Red Hat 6.6 Server. Has anyone seen this issue? I am stuck, and we have the production installation next week. Edit: I had changed into the directory where SVN was installed and started svnserve from there, but the svnserve that actually started was the one shipped with the Red Hat OS. Invoking svnserve with its full path solved the issue.
svn, tortoisesvn, redhat, svnserve, fsfs
6
13,341
2
https://stackoverflow.com/questions/33257448/subversion-1-9-2-invalid-filesystem-format-option-addressing-logical
20,554,806
correct use of linux inotify - reopen every time?
I'm debugging a system load problem that a customer encounters on their production system and they've made a test application that simulates the load to reproduce the problem: In this particular workload, one of the things the coder did was to: while(1) initialize inotify watch a directory for events receive event process event remove watch close inotify fd Strangely enough, the high system load comes from the close() of the inotify fd: inotify_init() = 4 <0.000020> inotify_add_watch(4, "/mnt/tmp/msys_sim/QUEUES/Child_032", IN_CREATE) = 1 <0.059537> write(1, "Child [032] sleeping\n", 21) = 21 <0.000012> read(4, "\1\0\0\0\0\1\0\0\0\0\0\0\20\0\0\0SrcFile.b8tlfT\0\0", 512) = 32 <0.231012> inotify_rm_watch(4, 1) = 0 <0.000044> close(4) = 0 <0.702530> open("/mnt/tmp/msys_sim/QUEUES/Child_032", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 4 <0.000031> lseek(4, 0, SEEK_SET) = 0 <0.000010> getdents(4, /* 3 entries */, 32768) = 88 <0.000048> getdents(4, /* 0 entries */, 32768) = 0 <0.000009> write(1, "Child [032] dequeue [SrcFile.b8t"..., 37) = 37 <0.000011> unlink("/mnt/tmp/msys_sim/QUEUES/Child_032/SrcFile.b8tlfT") = 0 <0.059298> lseek(4, 0, SEEK_SET) = 0 <0.000011> getdents(4, /* 2 entries */, 32768) = 48 <0.000038> getdents(4, /* 0 entries */, 32768) = 0 <0.000009> close(4) = 0 <0.000012> inotify_init() = 4 <0.000020> inotify_add_watch(4, "/mnt/tmp/msys_sim/QUEUES/Child_032", IN_CREATE) = 1 <0.040385> write(1, "Child [032] sleeping\n", 21) = 21 <0.000903> read(4, "\1\0\0\0\0\1\0\0\0\0\0\0\20\0\0\0SrcFile.mQgUSh\0\0", 512) = 32 <0.023423> inotify_rm_watch(4, 1) = 0 <0.000012> close(4) = 0 <0.528736> What could possibly be causing the close() call to take such an enormous amount of time? I can identify two possible things: closing and reinitializing inotify every time There are 256K files (flat) in /mnt/tmp/msys_sim/SOURCES and a particular file in /mnt/tmp/msys_sim/QUEUES/Child_032 is hardlinked to one in that directory. But SOURCES is never opened by the above process Is it an artifact of using inotify wrong? What can I point at to say "What you're doing is WRONG!"? Output of perf top (I had been looking for this!) Events: 109K cycles 70.01% [kernel] [k] _spin_lock 24.30% [kernel] [k] __fsnotify_update_child_dentry_flags 2.24% [kernel] [k] _spin_unlock_irqrestore 0.64% [kernel] [k] __do_softirq 0.60% [kernel] [k] __rcu_process_callbacks 0.46% [kernel] [k] run_timer_softirq 0.40% [kernel] [k] rcu_process_gp_end Sweet! I suspect a spinlock somewhere and the entire system goes highly latent when this happens.
correct use of linux inotify - reopen every time? I'm debugging a system load problem that a customer encounters on their production system and they've made a test application that simulates the load to reproduce the problem: In this particular workload, one of the things the coder did was to: while(1) initialize inotify watch a directory for events receive event process event remove watch close inotify fd Strangely enough, the high system load comes from the close() of the inotify fd: inotify_init() = 4 <0.000020> inotify_add_watch(4, "/mnt/tmp/msys_sim/QUEUES/Child_032", IN_CREATE) = 1 <0.059537> write(1, "Child [032] sleeping\n", 21) = 21 <0.000012> read(4, "\1\0\0\0\0\1\0\0\0\0\0\0\20\0\0\0SrcFile.b8tlfT\0\0", 512) = 32 <0.231012> inotify_rm_watch(4, 1) = 0 <0.000044> close(4) = 0 <0.702530> open("/mnt/tmp/msys_sim/QUEUES/Child_032", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 4 <0.000031> lseek(4, 0, SEEK_SET) = 0 <0.000010> getdents(4, /* 3 entries */, 32768) = 88 <0.000048> getdents(4, /* 0 entries */, 32768) = 0 <0.000009> write(1, "Child [032] dequeue [SrcFile.b8t"..., 37) = 37 <0.000011> unlink("/mnt/tmp/msys_sim/QUEUES/Child_032/SrcFile.b8tlfT") = 0 <0.059298> lseek(4, 0, SEEK_SET) = 0 <0.000011> getdents(4, /* 2 entries */, 32768) = 48 <0.000038> getdents(4, /* 0 entries */, 32768) = 0 <0.000009> close(4) = 0 <0.000012> inotify_init() = 4 <0.000020> inotify_add_watch(4, "/mnt/tmp/msys_sim/QUEUES/Child_032", IN_CREATE) = 1 <0.040385> write(1, "Child [032] sleeping\n", 21) = 21 <0.000903> read(4, "\1\0\0\0\0\1\0\0\0\0\0\0\20\0\0\0SrcFile.mQgUSh\0\0", 512) = 32 <0.023423> inotify_rm_watch(4, 1) = 0 <0.000012> close(4) = 0 <0.528736> What could possibly be causing the close() call to take such an enormous amount of time? I can identify two possible things: closing and reinitializing inotify every time There are 256K files (flat) in /mnt/tmp/msys_sim/SOURCES and a particular file in /mnt/tmp/msys_sim/QUEUES/Child_032 is hardlinked to one in that directory. But SOURCES is never opened by the above process Is it an artifact of using inotify wrong? What can I point at to say "What you're doing is WRONG!"? Output of perf top (I had been looking for this!) Events: 109K cycles 70.01% [kernel] [k] _spin_lock 24.30% [kernel] [k] __fsnotify_update_child_dentry_flags 2.24% [kernel] [k] _spin_unlock_irqrestore 0.64% [kernel] [k] __do_softirq 0.60% [kernel] [k] __rcu_process_callbacks 0.46% [kernel] [k] run_timer_softirq 0.40% [kernel] [k] rcu_process_gp_end Sweet! I suspect a spinlock somewhere and the entire system goes highly latent when this happens.
linux, redhat, inotify
6
1,807
3
https://stackoverflow.com/questions/20554806/correct-use-of-linux-inotify-reopen-every-time
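The perf output above points at __fsnotify_update_child_dentry_flags: tearing down and re-creating the inotify instance forces the kernel to walk dentry state under a spinlock each time, which is expensive next to the 256K-entry directory. The usual fix is to create the inotify instance and the watch once and keep reading events from the same descriptor. The sketch below shows that shape in Python; it assumes the third-party inotify_simple package and reuses the directory path from the strace as a placeholder.

    from inotify_simple import INotify, flags   # third-party: pip install inotify_simple

    WATCH_DIR = "/mnt/tmp/msys_sim/QUEUES/Child_032"   # placeholder path from the trace

    # Create the instance and the watch once, outside the loop, instead of per event.
    inotify = INotify()
    inotify.add_watch(WATCH_DIR, flags.CREATE)

    while True:
        for event in inotify.read():      # blocks until at least one event arrives
            print("new file:", event.name)
            # ... dequeue / unlink the file here, keeping the watch open ...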
37,385,554
python No module named ujson, while it's already installed
I've installed ujson using the command pip install ujson, but when I try to run my Python project it returns ImportError: No module named ujson. OS version: Red Hat Enterprise Linux Server release 7.2 (Maipo) Python version: Python 2.7.6 pip list: ujson (1.35) Any help please?
python No module named ujson, while it's already installed I've installed ujson using the command pip install ujson, but when I try to run my Python project it returns ImportError: No module named ujson. OS version: Red Hat Enterprise Linux Server release 7.2 (Maipo) Python version: Python 2.7.6 pip list: ujson (1.35) Any help please?
python, linux, python-2.7, redhat, ujson
6
19,204
1
https://stackoverflow.com/questions/37385554/python-no-module-named-ujson-while-its-already-installed
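"Installed according to pip but not importable" almost always means pip and the interpreter that runs the project are two different Pythons (for example a manually built Python 2.7.6 versus the distribution's /usr/bin/python, or a virtualenv). A small check makes the mismatch visible; installing with python -m pip from the interpreter that actually runs the code avoids it. The bare pip command name below is an assumption about the asker's setup.

    import subprocess
    import sys

    # Which interpreter runs this project, and where does it look for modules?
    print("running under:", sys.executable)
    print("search path  :", sys.path)

    # Which interpreter is the 'pip' on PATH bound to? Compare with the output above.
    out = subprocess.check_output(["pip", "--version"])
    print(out.decode().strip())

    # If they differ, install with the matching interpreter instead:
    #   /path/to/python -m pip install ujson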
33,735,981
Java in Linux - different look and feel classes for root and non-root
I noticed that Java proposes different look and feel classes for root and non-root users. I am trying to understand how to make LAF consistent. Moreover, it's inconsistent even within a user/root: depends on how user/root logged in: Sample code (compiled and packaged in laf.jar ): import javax.swing.UIManager; public class laf { public static void main(java.lang.String[] args) { try { System.out.print(UIManager.getSystemLookAndFeelClassName()); } catch (Exception e) { } } } Scenario 1 Logs in to machine (in GUI mode) as a regular user Sample output (as user ) [xxx@yyy Downloads]$ java -classpath laf.jar laf com.sun.java.swing.plaf.gtk.GTKLookAndFeel Sample output (switch to root via su ) [root@yyy Downloads]# java -classpath ./laf.jar laf javax.swing.plaf.metal.MetalLookAndFeel Scenario 2 Logs in to machine (in GUI mode) as root Sample output (as root ) [root@yyy Downloads]# java -classpath ./laf.jar laf com.sun.java.swing.plaf.gtk.GTKLookAndFeel Scenario 3 Logs in to machine via SSH as a regular user (similar as scenario #1 above, but in this case - same LAF) Sample output (as user ) [xxx@yyy Downloads]$ java -classpath laf.jar laf javax.swing.plaf.metal.MetalLookAndFeel Sample output (switch to root ) [root@yyy Downloads]# java -classpath ./laf.jar laf javax.swing.plaf.metal.MetalLookAndFeel Software versions: [root@yyy Downloads]# java -version java version "1.7.0" Java(TM) SE Runtime Environment (build pxa6470sr9fp10-20150708_01(SR9 FP10)) IBM J9 VM (build 2.6, JRE 1.7.0 Linux amd64-64 Compressed References 20150701_255667 (JIT enabled, AOT enabled) J9VM - R26_Java726_SR9_20150701_0050_B255667 JIT - tr.r11_20150626_95120.01 GC - R26_Java726_SR9_20150701_0050_B255667_CMPRSS J9CL - 20150701_255667) JCL - 20150628_01 based on Oracle jdk7u85-b15 [root@yyy Downloads]# cat /etc/redhat-release Red Hat Enterprise Linux Workstation release 6.7 (Santiago)
Java in Linux - different look and feel classes for root and non-root I noticed that Java proposes different look and feel classes for root and non-root users. I am trying to understand how to make LAF consistent. Moreover, it's inconsistent even within a user/root: depends on how user/root logged in: Sample code (compiled and packaged in laf.jar ): import javax.swing.UIManager; public class laf { public static void main(java.lang.String[] args) { try { System.out.print(UIManager.getSystemLookAndFeelClassName()); } catch (Exception e) { } } } Scenario 1 Logs in to machine (in GUI mode) as a regular user Sample output (as user ) [xxx@yyy Downloads]$ java -classpath laf.jar laf com.sun.java.swing.plaf.gtk.GTKLookAndFeel Sample output (switch to root via su ) [root@yyy Downloads]# java -classpath ./laf.jar laf javax.swing.plaf.metal.MetalLookAndFeel Scenario 2 Logs in to machine (in GUI mode) as root Sample output (as root ) [root@yyy Downloads]# java -classpath ./laf.jar laf com.sun.java.swing.plaf.gtk.GTKLookAndFeel Scenario 3 Logs in to machine via SSH as a regular user (similar as scenario #1 above, but in this case - same LAF) Sample output (as user ) [xxx@yyy Downloads]$ java -classpath laf.jar laf javax.swing.plaf.metal.MetalLookAndFeel Sample output (switch to root ) [root@yyy Downloads]# java -classpath ./laf.jar laf javax.swing.plaf.metal.MetalLookAndFeel Software versions: [root@yyy Downloads]# java -version java version "1.7.0" Java(TM) SE Runtime Environment (build pxa6470sr9fp10-20150708_01(SR9 FP10)) IBM J9 VM (build 2.6, JRE 1.7.0 Linux amd64-64 Compressed References 20150701_255667 (JIT enabled, AOT enabled) J9VM - R26_Java726_SR9_20150701_0050_B255667 JIT - tr.r11_20150626_95120.01 GC - R26_Java726_SR9_20150701_0050_B255667_CMPRSS J9CL - 20150701_255667) JCL - 20150628_01 based on Oracle jdk7u85-b15 [root@yyy Downloads]# cat /etc/redhat-release Red Hat Enterprise Linux Workstation release 6.7 (Santiago)
java, linux, redhat, look-and-feel
6
2,018
2
https://stackoverflow.com/questions/33735981/java-in-linux-different-look-and-feel-classes-for-root-and-non-root
46,738,150
OpenShift and hostnetwork=true
I have deployed two pods with hostNetwork set to true. When the pods are deployed on the same OpenShift node, everything works fine since they can discover each other using the node IP. When the pods are deployed on different OpenShift nodes, they can't discover each other; I get "no route to host" if I try to point one pod at the other using the node IP. How can I fix this?
OpenShift and hostnetwork=true I have deployed two pods with hostNetwork set to true. When the pods are deployed on the same OpenShift node, everything works fine since they can discover each other using the node IP. When the pods are deployed on different OpenShift nodes, they can't discover each other; I get "no route to host" if I try to point one pod at the other using the node IP. How can I fix this?
kubernetes, openshift, redhat
6
3,881
3
https://stackoverflow.com/questions/46738150/openshift-and-hostnetwork-true
10,683,834
why firefox won't start up under selenium 2 webdriver on redhat 5.6
I was wondering if anyone has any ideas on how I could find out why I can seem to get firefox running through selenium webdriver. What happens is when I run: self.driver=webdriver.Firefox() I get a blank dialogue on my desktop. I am running on Redhat 5.6 and my selenium version is 2.21.3. I debugged the code as far as i can go and from what i can determine the code freezes after bringing up the blank dialog on the following code within the firefox_binary module: Popen([self._start_cmd, "-slient"], stdout=PIPE, stderr=STDOUT, env=self._filefox_env).wait() I opened up a cmd prompt and manually ran the abovementioned command and no such blank dialog appears. This would make me think that its not a firefox error. I can not find where the error for this would appear. Any ideas? update I installed centos 6 and installed firefox 10.0.6 and selenium webdriver worked with that version update Aside from using centos 6 I need this problem to also be solved on redhat so here are more details and what I've found. I will put a bounty on this as it needs to be solved: I dug a little more on this and found that the problem is with selenium using a 32 bit lib. I have selenium version 2.25.0 on Redhat Enterprise Linux Server release 5.6 (x86_64) using Firefox ESR 10.0.6 (64 bit). I changed the _start_from_profile_path method in the firefoxBinary class to see where the problem lies: p=open("/tmp/ffoutput.txt", "w+") Popen([self._start_cmd, "-silent"], stdout=p, stderr=STDOUT, env=self._firefox_env).communicate() and I tailed /tmp/ffoutput.txt I found that selenium is trying to use a 32 bit lib: Failed to dlopen /usr/lib/libX11.so.6 dlerror says: /usr/lib/libX11.so.6: wrong ELF class: ELFCLASS32 This message occurs continuously and firefox hangs with a blank dialog showing. I googled this problem and found some people complaining but no solutions that worked (I tried softlinking the 64 bit lib to the 32 bit lib dir after moving the 32 bit lib but this caused geko to crash, I tried sending the continuous errors to /dev/null but this solved nothing).
why firefox won&#39;t start up under selenium 2 webdriver on redhat 5.6 I was wondering if anyone has any ideas on how I could find out why I can seem to get firefox running through selenium webdriver. What happens is when I run: self.driver=webdriver.Firefox() I get a blank dialogue on my desktop. I am running on Redhat 5.6 and my selenium version is 2.21.3. I debugged the code as far as i can go and from what i can determine the code freezes after bringing up the blank dialog on the following code within the firefox_binary module: Popen([self._start_cmd, "-slient"], stdout=PIPE, stderr=STDOUT, env=self._filefox_env).wait() I opened up a cmd prompt and manually ran the abovementioned command and no such blank dialog appears. This would make me think that its not a firefox error. I can not find where the error for this would appear. Any ideas? update I installed centos 6 and installed firefox 10.0.6 and selenium webdriver worked with that version update Aside from using centos 6 I need this problem to also be solved on redhat so here are more details and what I've found. I will put a bounty on this as it needs to be solved: I dug a little more on this and found that the problem is with selenium using a 32 bit lib. I have selenium version 2.25.0 on Redhat Enterprise Linux Server release 5.6 (x86_64) using Firefox ESR 10.0.6 (64 bit). I changed the _start_from_profile_path method in the firefoxBinary class to see where the problem lies: p=open("/tmp/ffoutput.txt", "w+") Popen([self._start_cmd, "-silent"], stdout=p, stderr=STDOUT, env=self._firefox_env).communicate() and I tailed /tmp/ffoutput.txt I found that selenium is trying to use a 32 bit lib: Failed to dlopen /usr/lib/libX11.so.6 dlerror says: /usr/lib/libX11.so.6: wrong ELF class: ELFCLASS32 This message occurs continuously and firefox hangs with a blank dialog showing. I googled this problem and found some people complaining but no solutions that worked (I tried softlinking the 64 bit lib to the 32 bit lib dir after moving the 32 bit lib but this caused geko to crash, I tried sending the continuous errors to /dev/null but this solved nothing).
firefox, selenium, webdriver, redhat
6
1,623
1
https://stackoverflow.com/questions/10683834/why-firefox-wont-start-up-under-selenium-2-webdriver-on-redhat-5-6
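Two things help when debugging this class of failure: pointing WebDriver at an explicit Firefox binary (so the 64-bit build under /usr/lib64 is used rather than whatever is first on PATH) and keeping Firefox's own output in a log file instead of losing it, so messages like the libX11 ELFCLASS32 errors can be read afterwards. The sketch below uses the selenium 2.x FirefoxBinary API for both; the paths are placeholders for this particular machine.

    from selenium import webdriver
    from selenium.webdriver.firefox.firefox_binary import FirefoxBinary

    # Placeholders: adjust to the actual 64-bit Firefox location on the host.
    log = open("/tmp/firefox_webdriver.log", "w")
    binary = FirefoxBinary(firefox_path="/usr/lib64/firefox/firefox", log_file=log)

    driver = webdriver.Firefox(firefox_binary=binary)   # selenium 2.x style
    driver.get("http://example.com")
    print(driver.title)
    driver.quit()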
47,672,776
How do I clear a thinpool device for docker
I am running docker on a Redhat system with devicemapper and thinpool device just as recommended for production systems. Now when I want to reinstall docker I need two steps: 1) remove docker directory (in my case /area51/docker) 2) clear thinpool device The docker documentation states that when using devicemapper with dm.metadev and dm.datadev options, the easiest way of cleaning devicemapper would be: If setting up a new metadata pool it is required to be valid. This can be achieved by zeroing the first 4k to indicate empty metadata, like this: $ dd if=/dev/zero of=$metadata_dev bs=4096 count=1 Unfortunately, according to the documentation, the dm.metadatadev is deprecated, it says to use dm.thinpooldev instead. My thinpool has been created along the lines of this docker instruction So, my setup now looks like this: cat /etc/docker/daemon.json { "storage-driver": "devicemapper", "storage-opts": [ "dm.thinpooldev=/dev/mapper/thinpool_VG_38401-thinpool", "dm.basesize=18G" ] } Under the devicemapper directory i see the following thinpool devices ls -l /dev/mapper/thinpool_VG_38401-thinpool* lrwxrwxrwx 1 root root 7 Dec 6 08:31 /dev/mapper/thinpool_VG_38401-thinpool -> ../dm-8 lrwxrwxrwx 1 root root 7 Dec 6 08:31 /dev/mapper/thinpool_VG_38401-thinpool_tdata -> ../dm-7 lrwxrwxrwx 1 root root 7 Dec 6 08:31 /dev/mapper/thinpool_VG_38401-thinpool_tmeta -> ../dm-6 So, after running docker successfully I tried to reinstall as described above and clear the thinpool by writing 4K zeroes into the tmeta device and restart docker: dd if=/dev/zero of=/dev/mapper/thinpool_VG_38401-thinpool_tmeta bs=4096 count=1 systemctl start docker And endet up with docker.service - Docker Application Container Engine Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled) Active: failed (Result: exit-code) since Wed 2017-12-06 10:28:46 UTC; 10s ago Docs: [URL] Process: 1566 ExecStart=/usr/bin/dockerd -G uwsgi --data-root=/area51/docker -H unix:///var/run/docker.sock (code=exited, status=1/FAILURE) Main PID: 1566 (code=exited, status=1/FAILURE) Memory: 236.0K CGroup: /system.slice/docker.service Dec 06 10:28:45 yoda3 systemd[1]: Starting Docker Application Container Engine... Dec 06 10:28:45 yoda3 dockerd[1566]: time="2017-12-06T10:28:45.816049000Z" level=info msg="libcontainerd: new containerd process, pid: 1577" Dec 06 10:28:46 yoda3 dockerd[1566]: time="2017-12-06T10:28:46.816966000Z" level=warning msg="failed to rename /area51/docker/tmp for background deletion: renam...chronously" Dec 06 10:28:46 yoda3 dockerd[1566]: Error starting daemon: error initializing graphdriver: devmapper: Unable to take ownership of thin-pool (thinpool_VG_38401-...data blocks Dec 06 10:28:46 yoda3 systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE Dec 06 10:28:46 yoda3 systemd[1]: Failed to start Docker Application Container Engine. Dec 06 10:28:46 yoda3 systemd[1]: Unit docker.service entered failed state. Dec 06 10:28:46 yoda3 systemd[1]: docker.service failed. I assumed I could get around the 'unable to take ownership of thin-pool' by doing a reboot. 
But after reboot and trying to start docker again I got the following error: systemctl status docker ● docker.service - Docker Application Container Engine Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled) Active: failed (Result: exit-code) since Wed 2017-12-06 10:30:37 UTC; 2min 29s ago Docs: [URL] Process: 3180 ExecStart=/usr/bin/dockerd -G uwsgi --data-root=/area51/docker -H unix:///var/run/docker.sock (code=exited, status=1/FAILURE) Main PID: 3180 (code=exited, status=1/FAILURE) Memory: 37.9M CGroup: /system.slice/docker.service Dec 06 10:30:36 yoda3 systemd[1]: Starting Docker Application Container Engine... Dec 06 10:30:36 yoda3 dockerd[3180]: time="2017-12-06T10:30:36.893777000Z" level=warning msg="libcontainerd: makeUpgradeProof could not open /var/run/docker/lib...containerd" Dec 06 10:30:36 yoda3 dockerd[3180]: time="2017-12-06T10:30:36.901958000Z" level=info msg="libcontainerd: new containerd process, pid: 3224" Dec 06 10:30:37 yoda3 dockerd[3180]: Error starting daemon: error initializing graphdriver: devicemapper: Non existing device thinpool_VG_38401-thinpool Dec 06 10:30:37 yoda3 systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE Dec 06 10:30:37 yoda3 systemd[1]: Failed to start Docker Application Container Engine. Dec 06 10:30:37 yoda3 systemd[1]: Unit docker.service entered failed state. Dec 06 10:30:37 yoda3 systemd[1]: docker.service failed. So, obviously writing zeroes into the thinpool_meta device is not the right thing to do, it seems to destroy my thinpool device. Anyone here that can tell me the right steps to clear the thin-pool device? Preferably the solution should not require a reboot.
How do I clear a thinpool device for docker I am running docker on a Redhat system with devicemapper and thinpool device just as recommended for production systems. Now when I want to reinstall docker I need two steps: 1) remove docker directory (in my case /area51/docker) 2) clear thinpool device The docker documentation states that when using devicemapper with dm.metadev and dm.datadev options, the easiest way of cleaning devicemapper would be: If setting up a new metadata pool it is required to be valid. This can be achieved by zeroing the first 4k to indicate empty metadata, like this: $ dd if=/dev/zero of=$metadata_dev bs=4096 count=1 Unfortunately, according to the documentation, the dm.metadatadev is deprecated, it says to use dm.thinpooldev instead. My thinpool has been created along the lines of this docker instruction So, my setup now looks like this: cat /etc/docker/daemon.json { "storage-driver": "devicemapper", "storage-opts": [ "dm.thinpooldev=/dev/mapper/thinpool_VG_38401-thinpool", "dm.basesize=18G" ] } Under the devicemapper directory i see the following thinpool devices ls -l /dev/mapper/thinpool_VG_38401-thinpool* lrwxrwxrwx 1 root root 7 Dec 6 08:31 /dev/mapper/thinpool_VG_38401-thinpool -> ../dm-8 lrwxrwxrwx 1 root root 7 Dec 6 08:31 /dev/mapper/thinpool_VG_38401-thinpool_tdata -> ../dm-7 lrwxrwxrwx 1 root root 7 Dec 6 08:31 /dev/mapper/thinpool_VG_38401-thinpool_tmeta -> ../dm-6 So, after running docker successfully I tried to reinstall as described above and clear the thinpool by writing 4K zeroes into the tmeta device and restart docker: dd if=/dev/zero of=/dev/mapper/thinpool_VG_38401-thinpool_tmeta bs=4096 count=1 systemctl start docker And endet up with docker.service - Docker Application Container Engine Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled) Active: failed (Result: exit-code) since Wed 2017-12-06 10:28:46 UTC; 10s ago Docs: [URL] Process: 1566 ExecStart=/usr/bin/dockerd -G uwsgi --data-root=/area51/docker -H unix:///var/run/docker.sock (code=exited, status=1/FAILURE) Main PID: 1566 (code=exited, status=1/FAILURE) Memory: 236.0K CGroup: /system.slice/docker.service Dec 06 10:28:45 yoda3 systemd[1]: Starting Docker Application Container Engine... Dec 06 10:28:45 yoda3 dockerd[1566]: time="2017-12-06T10:28:45.816049000Z" level=info msg="libcontainerd: new containerd process, pid: 1577" Dec 06 10:28:46 yoda3 dockerd[1566]: time="2017-12-06T10:28:46.816966000Z" level=warning msg="failed to rename /area51/docker/tmp for background deletion: renam...chronously" Dec 06 10:28:46 yoda3 dockerd[1566]: Error starting daemon: error initializing graphdriver: devmapper: Unable to take ownership of thin-pool (thinpool_VG_38401-...data blocks Dec 06 10:28:46 yoda3 systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE Dec 06 10:28:46 yoda3 systemd[1]: Failed to start Docker Application Container Engine. Dec 06 10:28:46 yoda3 systemd[1]: Unit docker.service entered failed state. Dec 06 10:28:46 yoda3 systemd[1]: docker.service failed. I assumed I could get around the 'unable to take ownership of thin-pool' by doing a reboot. 
But after reboot and trying to start docker again I got the following error: systemctl status docker ● docker.service - Docker Application Container Engine Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled) Active: failed (Result: exit-code) since Wed 2017-12-06 10:30:37 UTC; 2min 29s ago Docs: [URL] Process: 3180 ExecStart=/usr/bin/dockerd -G uwsgi --data-root=/area51/docker -H unix:///var/run/docker.sock (code=exited, status=1/FAILURE) Main PID: 3180 (code=exited, status=1/FAILURE) Memory: 37.9M CGroup: /system.slice/docker.service Dec 06 10:30:36 yoda3 systemd[1]: Starting Docker Application Container Engine... Dec 06 10:30:36 yoda3 dockerd[3180]: time="2017-12-06T10:30:36.893777000Z" level=warning msg="libcontainerd: makeUpgradeProof could not open /var/run/docker/lib...containerd" Dec 06 10:30:36 yoda3 dockerd[3180]: time="2017-12-06T10:30:36.901958000Z" level=info msg="libcontainerd: new containerd process, pid: 3224" Dec 06 10:30:37 yoda3 dockerd[3180]: Error starting daemon: error initializing graphdriver: devicemapper: Non existing device thinpool_VG_38401-thinpool Dec 06 10:30:37 yoda3 systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE Dec 06 10:30:37 yoda3 systemd[1]: Failed to start Docker Application Container Engine. Dec 06 10:30:37 yoda3 systemd[1]: Unit docker.service entered failed state. Dec 06 10:30:37 yoda3 systemd[1]: docker.service failed. So, obviously writing zeroes into the thinpool_meta device is not the right thing to do, it seems to destroy my thinpool device. Anyone here that can tell me the right steps to clear the thin-pool device? Preferably the solution should not require a reboot.
docker, redhat, device-mapper
6
1,923
0
https://stackoverflow.com/questions/47672776/how-do-i-clear-a-thinpool-device-for-docker
46,489,134
Keycloak logout endpoint not deleting session
Hello fellow programmes, I am stuck on the issue with keycloak. I am trying to send from node.js express framework request towards keycloak to logout the user. Config.keycloakClient = my_realm Config.keycloakURL = keycloak URL request.get({ //url: join(Config.keycloakURL, '/auth/realms/'+ Config.keycloakClient+ '/protocol/openid-connect/logout?' + 'id_token_hint='+req.headers.oidc_access_token), <--- tried this url: join(Config.keycloakURL, '/auth/realms/'+ Config.keycloakClient+ '/protocol/openid-connect/logout'), // <-- i also tried this headers: { Authorization: "Bearer " + req.headers.oidc_access_token, // <-- also tried Authorization: req.headers.oidc_access_token } Result - 200 OK, but i can still see active session in active sessions in admin interface request.post({ //url: join(Config.keycloakURL, '/auth/realms/'+ Config.keycloakClient+ '/protocol/openid-connect/logout?' + 'id_token_hint='+req.headers.oidc_access_token), <--- tried this url: join(Config.keycloakURL, '/auth/realms/'+ Config.keycloakClient+ '/protocol/openid-connect/logout'), // <-- i also tried this headers: { Authorization: "Bearer " + req.headers.oidc_access_token, // <-- also tried Authorization: req.headers.oidc_access_token } Result - 302 redirect, but i can still see active session in active sessions in admin interface I was trying to find the refresh token, but when accessing Config.keycloakURL/auth/realms/{realm} i could not get the refesh token-> it redirects me to the login page. In session storage / cookies i can not see anything strange via chrome dev tools. So what is the proper way to logout with endpoint? Which endpoint and what parameters should i use please? And how am i to obtain refresh token? Thanks for the help! Best regards
Keycloak logout endpoint not deleting session Hello fellow programmes, I am stuck on the issue with keycloak. I am trying to send from node.js express framework request towards keycloak to logout the user. Config.keycloakClient = my_realm Config.keycloakURL = keycloak URL request.get({ //url: join(Config.keycloakURL, '/auth/realms/'+ Config.keycloakClient+ '/protocol/openid-connect/logout?' + 'id_token_hint='+req.headers.oidc_access_token), <--- tried this url: join(Config.keycloakURL, '/auth/realms/'+ Config.keycloakClient+ '/protocol/openid-connect/logout'), // <-- i also tried this headers: { Authorization: "Bearer " + req.headers.oidc_access_token, // <-- also tried Authorization: req.headers.oidc_access_token } Result - 200 OK, but i can still see active session in active sessions in admin interface request.post({ //url: join(Config.keycloakURL, '/auth/realms/'+ Config.keycloakClient+ '/protocol/openid-connect/logout?' + 'id_token_hint='+req.headers.oidc_access_token), <--- tried this url: join(Config.keycloakURL, '/auth/realms/'+ Config.keycloakClient+ '/protocol/openid-connect/logout'), // <-- i also tried this headers: { Authorization: "Bearer " + req.headers.oidc_access_token, // <-- also tried Authorization: req.headers.oidc_access_token } Result - 302 redirect, but i can still see active session in active sessions in admin interface I was trying to find the refresh token, but when accessing Config.keycloakURL/auth/realms/{realm} i could not get the refesh token-> it redirects me to the login page. In session storage / cookies i can not see anything strange via chrome dev tools. So what is the proper way to logout with endpoint? Which endpoint and what parameters should i use please? And how am i to obtain refresh token? Thanks for the help! Best regards
session, jboss, redhat, keycloak, refresh-token
6
5,742
0
https://stackoverflow.com/questions/46489134/keycloak-logout-endpoint-not-deleting-session
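Hitting the logout endpoint with only the access token in an Authorization header does not end the SSO session; Keycloak expects a POST with the refresh_token (plus client_id, and client_secret for confidential clients) as form parameters. The refresh token is the one returned alongside the access token by .../protocol/openid-connect/token at login, so the application has to keep it. The sketch below shows the call with the Python requests library; the base URL, realm and client values are placeholders.

    import requests   # third-party: pip install requests

    BASE = "https://keycloak.example.com"   # placeholder for Config.keycloakURL
    REALM = "my_realm"
    CLIENT_ID = "my-client"                 # placeholder client
    CLIENT_SECRET = "..."                   # only needed for confidential clients

    def logout(refresh_token):
        """End the user's Keycloak session; the refresh token identifies the session."""
        resp = requests.post(
            BASE + "/auth/realms/" + REALM + "/protocol/openid-connect/logout",
            data={
                "client_id": CLIENT_ID,
                "client_secret": CLIENT_SECRET,
                "refresh_token": refresh_token,
            },
        )
        resp.raise_for_status()   # a 204 response means the session was removed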
38,992,252
Out of memory exception on java with linux-redhat
I am facing Outof memory issue on linux/redhat, and same program works on my windows machine. My linux machine configuration is 15Gb RAM. import java.awt.image.BufferedImage; import java.io.File; import java.io.FileOutputStream; import java.io.InputStream; import java.io.OutputStream; import java.net.URL; import java.sql.ResultSet; import javax.imageio.ImageIO; import javax.swing.ImageIcon; /** * * @author ndoshi */ public class Dwnld { BufferedImage bi8 = null, bi16 = null; ImageIcon ii = null; ResultSet rs, rsDwnld; String OG = "ogImage\\"; String CROP8 = "Crop8\\"; String CROP16 = "Crop16\\"; String TIME = "", ErrorLog = "", ErrorLogPro = ""; int hashInc8 = 0; int hashInc16 = 0; int totalround = 0; int countProcess = 0; boolean download_new = false; private int row = 0; int Dwnld = 0, NotDwnld = 0; final String OP_Log = "Log", OP_Error = "ErrorLog", OP_ErrorPro = "ErrorLogProcess"; int r, g, b, k, ih, j; int sr = 0, sg = 0, sb = 0, sk = 0; int rg, gg, bg, kg; String s = "", s1 = "", hash16, hash8; /** * @param args the command line arguments */ public static void main(String[] args) { new Dwnld(); } public Dwnld(){ try { BufferedImage image = null; InputStream is = null; OutputStream os = null; URL url = new URL("[URL] is = url.openStream(); os = new FileOutputStream(OG + "1.jpg"); byte[] b = new byte[2048]; int length; while ((length = is.read(b)) != -1) { os.write(b, 0, length); } image = ImageIO.read(new File(OG + "1.jpg")); is.close(); os.close(); System.out.println("Hash 16 = "+hash16); System.out.println("Hash 8 = "+hash8); } catch (Exception ex) { System.out.println(ex.getMessage()); } } } I am running the sam eby increasing the memory with XMS & XMX as java -Xms2048m -Xmx6096m Dwnld Error am getting : Exception in thread "main" java.lang.OutOfMemoryError: unable to create new native thread at java.lang.Thread.start0(Native Method) at java.lang.Thread.start(Thread.java:714) at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1056) at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1332) at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1359) at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1343) at sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:563) at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185) at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1301) at sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:254) at java.net.URL.openStream(URL.java:1037) at Dwnld.<init>(Dwnld.java:53) at Dwnld.main(Dwnld.java:43)
Out of memory exception on java with linux-redhat I am facing Outof memory issue on linux/redhat, and same program works on my windows machine. My linux machine configuration is 15Gb RAM. import java.awt.image.BufferedImage; import java.io.File; import java.io.FileOutputStream; import java.io.InputStream; import java.io.OutputStream; import java.net.URL; import java.sql.ResultSet; import javax.imageio.ImageIO; import javax.swing.ImageIcon; /** * * @author ndoshi */ public class Dwnld { BufferedImage bi8 = null, bi16 = null; ImageIcon ii = null; ResultSet rs, rsDwnld; String OG = "ogImage\\"; String CROP8 = "Crop8\\"; String CROP16 = "Crop16\\"; String TIME = "", ErrorLog = "", ErrorLogPro = ""; int hashInc8 = 0; int hashInc16 = 0; int totalround = 0; int countProcess = 0; boolean download_new = false; private int row = 0; int Dwnld = 0, NotDwnld = 0; final String OP_Log = "Log", OP_Error = "ErrorLog", OP_ErrorPro = "ErrorLogProcess"; int r, g, b, k, ih, j; int sr = 0, sg = 0, sb = 0, sk = 0; int rg, gg, bg, kg; String s = "", s1 = "", hash16, hash8; /** * @param args the command line arguments */ public static void main(String[] args) { new Dwnld(); } public Dwnld(){ try { BufferedImage image = null; InputStream is = null; OutputStream os = null; URL url = new URL("[URL] is = url.openStream(); os = new FileOutputStream(OG + "1.jpg"); byte[] b = new byte[2048]; int length; while ((length = is.read(b)) != -1) { os.write(b, 0, length); } image = ImageIO.read(new File(OG + "1.jpg")); is.close(); os.close(); System.out.println("Hash 16 = "+hash16); System.out.println("Hash 8 = "+hash8); } catch (Exception ex) { System.out.println(ex.getMessage()); } } } I am running the sam eby increasing the memory with XMS & XMX as java -Xms2048m -Xmx6096m Dwnld Error am getting : Exception in thread "main" java.lang.OutOfMemoryError: unable to create new native thread at java.lang.Thread.start0(Native Method) at java.lang.Thread.start(Thread.java:714) at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1056) at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1332) at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1359) at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1343) at sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:563) at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185) at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1301) at sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:254) at java.net.URL.openStream(URL.java:1037) at Dwnld.<init>(Dwnld.java:53) at Dwnld.main(Dwnld.java:43)
java, linux, out-of-memory, redhat
6
728
1
https://stackoverflow.com/questions/38992252/out-of-memory-exception-on-java-with-linux-redhat
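"unable to create new native thread" is not a heap-exhaustion error, so raising -Xmx does not help (it can even hurt, because each thread also needs native stack memory outside the heap). What usually differs between the Windows box and the RHEL box is the per-user process/thread limit (ulimit -u, RLIMIT_NPROC) or available native memory. The sketch below only prints those limits for the current process as a diagnostic; the actual fix is typically raising the limit in /etc/security/limits.conf or creating fewer threads.

    import os
    import resource

    soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
    print("max user processes/threads (soft, hard):", soft, hard)

    # How many threads does this process already have?
    print("threads in this process:", len(os.listdir("/proc/self/task")))

    # Stack size per thread also consumes native memory:
    print("stack size limit:", resource.getrlimit(resource.RLIMIT_STACK))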
11,436,537
Curl &amp; Wget returning response, Browser times out
I am sending requests to a specific server on a cloud: wget --header="Host: example.com" [URL] curl -i -H"Host: example.com" [URL] And it returns exactly as expected (a simple static file). However, when I try and access it in a browser, the request times out. I can't imagine it would be a user agent header issue, but then again, I don't really know what else it would be. It isn't going to a load balancer or anything, should be going directly to the site. Any ideas on why this might be happening? I have my hosts file set to go to that specific IP address. Thanks
Curl &amp; Wget returning response, Browser times out I am sending requests to a specific server on a cloud: wget --header="Host: example.com" [URL] curl -i -H"Host: example.com" [URL] And it returns exactly as expected (a simple static file). However, when I try and access it in a browser, the request times out. I can't imagine it would be a user agent header issue, but then again, I don't really know what else it would be. It isn't going to a load balancer or anything, should be going directly to the site. Any ideas on why this might be happening? I have my hosts file set to go to that specific IP address. Thanks
linux, browser, timeout, wget, redhat
6
1,892
1
https://stackoverflow.com/questions/11436537/curl-wget-returning-response-browser-times-out
33,241,045
Address Out of bounds error when reading xml
I am getting a weird segfault when using libxml to parse a file. This code worked previously when I compiled it as a 32bit application. I changed it to a 64 bit application and it stops working. The seg fault comes in at "if (xmlStrcmp(cur->name, (const xmlChar *) "servers"))" cur->name is a const xmlChar * and it points to an address that says its out out bounds. But when I debug and go to that memory location, that data is correct. int XmlGetServers() { xmlDocPtr doc; xmlNodePtr cur; doc = xmlParseFile("Pin.xml"); if (doc == NULL) { std::cout << "\n Pin.xml not parsed successfully." << std::endl; return -1; } cur = xmlDocGetRootElement(doc); if (cur == NULL) { std::cout << "\n Pin.xml is empty document." << std::endl; xmlFreeDoc(doc); return -1; } if (xmlStrcmp(cur->name, (const xmlChar *) "servers")) { std::cout << "\n ERROR: Pin.xml of the wrong type, root node != servers." << std::endl; xmlFreeDoc(doc); return -1; } } Before cur is initialized the name parameter is Name : name Details:0xed11f72000007fff <Address 0xed11f72000007fff out of bounds> After cur is initialized the name parameter is Name : name Details:0x64c43000000000 <Address 0x64c43000000000 out of bounds> Referenced XML file <?xml version="1.0"?> <servers> <server_info> <server_name>Server1</server_name> <server_ip>127.0.0.1</server_ip> <server_data_port>9000</server_data_port> </server_info> <server_info> <server_name>Server2</server_name> <server_ip>127.0.0.1</server_ip> <server_data_port>9001</server_data_port> </server_info> </servers> System: OS: Redhat Enterprise Linux 6.4 64-bit GCC: 4.4.7-3 packages: libxml2-2.7.6-8.el6_3.4.x86_64
Address Out of bounds error when reading xml I am getting a weird segfault when using libxml to parse a file. This code worked previously when I compiled it as a 32bit application. I changed it to a 64 bit application and it stops working. The seg fault comes in at "if (xmlStrcmp(cur->name, (const xmlChar *) "servers"))" cur->name is a const xmlChar * and it points to an address that says its out out bounds. But when I debug and go to that memory location, that data is correct. int XmlGetServers() { xmlDocPtr doc; xmlNodePtr cur; doc = xmlParseFile("Pin.xml"); if (doc == NULL) { std::cout << "\n Pin.xml not parsed successfully." << std::endl; return -1; } cur = xmlDocGetRootElement(doc); if (cur == NULL) { std::cout << "\n Pin.xml is empty document." << std::endl; xmlFreeDoc(doc); return -1; } if (xmlStrcmp(cur->name, (const xmlChar *) "servers")) { std::cout << "\n ERROR: Pin.xml of the wrong type, root node != servers." << std::endl; xmlFreeDoc(doc); return -1; } } Before cur is initialized the name parameter is Name : name Details:0xed11f72000007fff <Address 0xed11f72000007fff out of bounds> After cur is initialized the name parameter is Name : name Details:0x64c43000000000 <Address 0x64c43000000000 out of bounds> Referenced XML file <?xml version="1.0"?> <servers> <server_info> <server_name>Server1</server_name> <server_ip>127.0.0.1</server_ip> <server_data_port>9000</server_data_port> </server_info> <server_info> <server_name>Server2</server_name> <server_ip>127.0.0.1</server_ip> <server_data_port>9001</server_data_port> </server_info> </servers> System: OS: Redhat Enterprise Linux 6.4 64-bit GCC: 4.4.7-3 packages: libxml2-2.7.6-8.el6_3.4.x86_64
c++, xml, linux, xml-parsing, redhat
5
601
2
https://stackoverflow.com/questions/33241045/address-out-of-bounds-error-when-reading-xml
46,711,597
Exception using stix-fonts with openjdk?
Problem happens while i try to create SXSSFWorkbook . Exception stacktrace : java.lang.ArrayIndexOutOfBoundsException: 0 at sun.font.CompositeFont.getSlotFont(CompositeFont.java:351) at sun.font.CompositeGlyphMapper.initMapper(CompositeGlyphMapper.java:81) at sun.font.CompositeGlyphMapper.<init>(CompositeGlyphMapper.java:62) at sun.font.CompositeFont.getMapper(CompositeFont.java:409) at sun.font.CompositeFont.canDisplay(CompositeFont.java:435) at java.awt.Font.canDisplayUpTo(Font.java:2063) at java.awt.font.TextLayout.singleFont(TextLayout.java:470) at java.awt.font.TextLayout.<init>(TextLayout.java:531) at FontTest.main(FontTest.java:15) gradle file : compile 'org.apache.poi:poi:3.14' compile 'org.apache.poi:poi-ooxml:3.14' Environment : openjdk version "1.8.0_141" RedHat 7.4 wildfly 10.0.0
Exception using stix-fonts with openjdk? Problem happens while i try to create SXSSFWorkbook . Exception stacktrace : java.lang.ArrayIndexOutOfBoundsException: 0 at sun.font.CompositeFont.getSlotFont(CompositeFont.java:351) at sun.font.CompositeGlyphMapper.initMapper(CompositeGlyphMapper.java:81) at sun.font.CompositeGlyphMapper.<init>(CompositeGlyphMapper.java:62) at sun.font.CompositeFont.getMapper(CompositeFont.java:409) at sun.font.CompositeFont.canDisplay(CompositeFont.java:435) at java.awt.Font.canDisplayUpTo(Font.java:2063) at java.awt.font.TextLayout.singleFont(TextLayout.java:470) at java.awt.font.TextLayout.<init>(TextLayout.java:531) at FontTest.main(FontTest.java:15) gradle file : compile 'org.apache.poi:poi:3.14' compile 'org.apache.poi:poi-ooxml:3.14' Environment : openjdk version "1.8.0_141" RedHat 7.4 wildfly 10.0.0
java, apache-poi, redhat
5
4,337
1
https://stackoverflow.com/questions/46711597/exception-using-stix-fonts-with-openjdk
14,013,514
How to extract a ZIP file that has a password using only PHP?
I have seen only one question on here but it does not answer my question. I am running a typical LAMP server that has the most up to date PHP 5 and MYSQL 5 with Redhat Linux. I need to find a PHP only solution because my host does not allow me to use shell. Here is my code that extracts ZIPs that are not passworded from vBulletin uploads to another directory: if ($_GET['add'] == TRUE){ $zip = new ZipArchive; $res = $zip->open($SOURCE FOLDER); if ($res === TRUE) { $zip->extractTo('$DESTINATION FOLDER/'); $zip->close(); echo 'File has been added to the library successfuly'; //Add a flag to that file to indicate it has already been added to the library. mysql_query("UPDATE attachment SET library = 1 WHERE filedataid='$fileid'"); } else { echo 'A uncompression or file error has occured'; }} There must be some way to do this using just PHP, surely! Thank you. UPDATE: My host informs me that gzip is installed on the server but not 7-Zip. I am looking into shell access too.
How to extract a ZIP file that has a password using only PHP? I have seen only one question on here but it does not answer my question. I am running a typical LAMP server that has the most up to date PHP 5 and MYSQL 5 with Redhat Linux. I need to find a PHP only solution because my host does not allow me to use shell. Here is my code that extracts ZIPs that are not passworded from vBulletin uploads to another directory: if ($_GET['add'] == TRUE){ $zip = new ZipArchive; $res = $zip->open($SOURCE FOLDER); if ($res === TRUE) { $zip->extractTo('$DESTINATION FOLDER/'); $zip->close(); echo 'File has been added to the library successfuly'; //Add a flag to that file to indicate it has already been added to the library. mysql_query("UPDATE attachment SET library = 1 WHERE filedataid='$fileid'"); } else { echo 'A uncompression or file error has occured'; }} There must be some way to do this using just PHP, surely! Thank you. UPDATE: My host informs me that gzip is installed on the server but not 7-Zip. I am looking into shell access too.
php, passwords, redhat, unzip
5
14,446
2
https://stackoverflow.com/questions/14013514/how-to-extract-a-zip-file-that-has-a-password-using-only-php
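ZipArchive grew a setPassword() method (around PHP 5.6) that supplies the password before extractTo(), which would keep everything inside PHP if the host's build supports it; whether it does is an assumption to verify against the server's PHP version. For comparison, the sketch below shows the same "library only, no shell" idea with Python's standard zipfile module, which handles the traditional ZipCrypto scheme (not AES-encrypted archives); the paths and password are placeholders.

    import zipfile

    ARCHIVE = "upload.zip"          # placeholder source archive
    DEST = "/var/www/library"       # placeholder destination folder
    PASSWORD = b"secret"            # zipfile expects bytes

    # Extract a password-protected (ZipCrypto) archive without shelling out.
    with zipfile.ZipFile(ARCHIVE) as zf:
        zf.extractall(path=DEST, pwd=PASSWORD)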
41,314,978
Can we git clone the redhat kernel source code and see the changes made by them?
I read in an article that Red Hat takes the kernel from kernel.org for its releases, makes some changes to it according to its requirements, and then embeds that kernel in its upcoming releases. My question is: can we git clone the Red Hat kernel source code and see the changes they made?
Can we git clone the redhat kernel source code and see the changes made by them? I read in an article that Red Hat takes the kernel from kernel.org for its releases, makes some changes to it according to its requirements, and then embeds that kernel in its upcoming releases. My question is: can we git clone the Red Hat kernel source code and see the changes they made?
linux, linux-kernel, linux-device-driver, redhat, rhel
5
11,874
2
https://stackoverflow.com/questions/41314978/can-we-git-clone-the-redhat-kernel-source-code-and-see-the-changes-made-by-them
70,798,261
How is Podman different from Docker?
I know that Docker and Podman solve the same problem. Most users can simply alias Docker to Podman (alias docker=podman) without any problems. So what is the difference between them?
How is Podman different from Docker? I know that Docker and Podman solve the same problem. Most users can simply alias Docker to Podman (alias docker=podman) without any problems. So what is the difference between them?
docker, cloud, devops, redhat, podman
5
5,410
2
https://stackoverflow.com/questions/70798261/how-is-podman-different-from-docker
39,107,168
How to get syslog file in Redhat
I have installed collectd on my Red Hat Enterprise Linux 7.2 server. I have also installed it on an Ubuntu 14.04 server. On Ubuntu, when I run the collectd service and hit any error, I can easily go to /var/log/syslog to get the error message and its reason. But when I get an error message on my Red Hat server and go to /var/log, there is no syslog file. As I don't have much experience with Red Hat, can somebody tell me where to find the syslog file on a Red Hat server so I can troubleshoot my errors? Thank you.
How to get syslog file in Redhat I have installed collectd on my Red Hat Enterprise Linux 7.2 server. I have also installed it on an Ubuntu 14.04 server. On Ubuntu, when I run the collectd service and hit any error, I can easily go to /var/log/syslog to get the error message and its reason. But when I get an error message on my Red Hat server and go to /var/log, there is no syslog file. As I don't have much experience with Red Hat, can somebody tell me where to find the syslog file on a Red Hat server so I can troubleshoot my errors? Thank you.
redhat
5
50,852
4
https://stackoverflow.com/questions/39107168/how-to-get-syslog-file-in-redhat
47,228,549
Log HAProxy custom header
I'm looking to get HAProxy to both set a custom header and log it. Below is a paired down example of my haproxy.cfg (I've left out some SSL details and multiple backends that I believe are not relevant to my problem) global log 127.0.0.1 local0 debug defaults log global stats enable option httplog frontend httpFrontendi mode http bind *:80 http-request add-header Foo Bar capture request header Foo len 64 log-format Foo\ %[capture.req.hdr(0)]\ %hr\ %hrl\ %hs\ %hsl default_backend backend_api redirect scheme https code 301 if !{ ssl_fc } backend backend_api mode http balance roundrobin option httpchk HEAD /api/test_db HTTP/1.0 server backend_api1 ip:80 check inter 5s rise 2 fall 3 I call the proxy with: curl 127.0.0.1 I was then expecting to see the custom header in the log, but it does not show: Nov 10 17:49:36 localhost haproxy[22355]: Foo - {} - The hardcoded "Foo" appears, so the log-format command is clearly working. But everything else renders as empty... are custom headers set after logging? How can one log a custom header? I am new to HAProxy so I think this may be some understanding I'm missing. (I start HAProxy with cmd sudo haproxy -f /etc/haproxy/haproxy.cfg and observe log with sudo tail -f /var/log/haproxy/haproxy.log . This is on HA-Proxy version 1.6.2)
Log HAProxy custom header I'm looking to get HAProxy to both set a custom header and log it. Below is a paired down example of my haproxy.cfg (I've left out some SSL details and multiple backends that I believe are not relevant to my problem) global log 127.0.0.1 local0 debug defaults log global stats enable option httplog frontend httpFrontendi mode http bind *:80 http-request add-header Foo Bar capture request header Foo len 64 log-format Foo\ %[capture.req.hdr(0)]\ %hr\ %hrl\ %hs\ %hsl default_backend backend_api redirect scheme https code 301 if !{ ssl_fc } backend backend_api mode http balance roundrobin option httpchk HEAD /api/test_db HTTP/1.0 server backend_api1 ip:80 check inter 5s rise 2 fall 3 I call the proxy with: curl 127.0.0.1 I was then expecting to see the custom header in the log, but it does not show: Nov 10 17:49:36 localhost haproxy[22355]: Foo - {} - The hardcoded "Foo" appears, so the log-format command is clearly working. But everything else renders as empty... are custom headers set after logging? How can one log a custom header? I am new to HAProxy so I think this may be some understanding I'm missing. (I start HAProxy with cmd sudo haproxy -f /etc/haproxy/haproxy.cfg and observe log with sudo tail -f /var/log/haproxy/haproxy.log . This is on HA-Proxy version 1.6.2)
logging, redhat, haproxy
5
13,398
1
https://stackoverflow.com/questions/47228549/log-haproxy-custom-header
39,365,386
Numeric keyboard. Dot instead of comma
In layman terms my goal is to change how the "dot" button on numeric keyboard behave. Now once tapped it produces a "comma". I need it to produce a "dot". After research I started toting with locale. Apparently my locale is set to en_US: [xxx@xxx ~]$ locale LANG=en_US.UTF-8 LC_CTYPE="en_US.UTF-8" LC_NUMERIC="en_US.UTF-8" LC_TIME="en_US.UTF-8" LC_COLLATE="en_US.UTF-8" LC_MONETARY="en_US.UTF-8" LC_MESSAGES="en_US.UTF-8" LC_PAPER="en_US.UTF-8" LC_NAME="en_US.UTF-8" LC_ADDRESS="en_US.UTF-8" LC_TELEPHONE="en_US.UTF-8" LC_MEASUREMENT="en_US.UTF-8" LC_IDENTIFICATION="en_US.UTF-8" LC_ALL= I've looked into what i presume is a proper config file for this particular locale: /usr/share/i18n/locales/en_US and looked for anything that might be related to "dot", "decimal separator" etc. Found LC_MONETARY and LC_NUMERIC, however mon_decimal_point for monetary and decimal_point for numeric were already set to - which I'm quite sure is a "dot". Just for giggles I also changed mon_thousands_sep and thousands_sep to and restarted. No help here. My machine: RHEL xxxx@xxxxx ~]$ uname -a Linux xxxxxx 2.6.32-642.4.2.el6.x86_64 #1 SMP Mon Aug 15 02:06:41 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux Now - this is a corporate computer with some strict security policies in place, so it would not be possible for me to just yum -install some_magic_keyboard_mapping_app I need to change it the old style. I have a virtual machine set up, so I can mess it up as much as i want prior to changing things on my work laptop.
Numeric keyboard. Dot instead of comma In layman's terms, my goal is to change how the "dot" key on the numeric keypad behaves. Right now, once tapped it produces a "comma"; I need it to produce a "dot". After some research I started toying with the locale. Apparently my locale is set to en_US: [xxx@xxx ~]$ locale LANG=en_US.UTF-8 LC_CTYPE="en_US.UTF-8" LC_NUMERIC="en_US.UTF-8" LC_TIME="en_US.UTF-8" LC_COLLATE="en_US.UTF-8" LC_MONETARY="en_US.UTF-8" LC_MESSAGES="en_US.UTF-8" LC_PAPER="en_US.UTF-8" LC_NAME="en_US.UTF-8" LC_ADDRESS="en_US.UTF-8" LC_TELEPHONE="en_US.UTF-8" LC_MEASUREMENT="en_US.UTF-8" LC_IDENTIFICATION="en_US.UTF-8" LC_ALL= I've looked into what I presume is the proper config file for this particular locale, /usr/share/i18n/locales/en_US, and looked for anything that might be related to "dot", "decimal separator", etc. I found LC_MONETARY and LC_NUMERIC; however, mon_decimal_point for monetary and decimal_point for numeric were already set to what I'm quite sure is a "dot". Just for giggles I also changed mon_thousands_sep and thousands_sep and restarted. No help there. My machine: RHEL xxxx@xxxxx ~]$ uname -a Linux xxxxxx 2.6.32-642.4.2.el6.x86_64 #1 SMP Mon Aug 15 02:06:41 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux Now - this is a corporate computer with some strict security policies in place, so it would not be possible for me to just yum install some_magic_keyboard_mapping_app; I need to change it the old-fashioned way. I have a virtual machine set up, so I can mess it up as much as I want prior to changing things on my work laptop.
linux, locale, redhat
5
5,684
2
https://stackoverflow.com/questions/39365386/numeric-keyboard-dot-instead-of-comma
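Two checks that may help separate locale settings from keyboard layout for the keypad question above; both commands are standard, but whether the keypad key is governed by XKB rather than LC_NUMERIC on that particular RHEL 6 desktop is an assumption.

# 1) Ask glibc what the active locale really uses as the decimal separator.
#    If this prints ".", LC_NUMERIC is not the culprit.
locale decimal_point
# 2) Inspect the X keyboard configuration; the keypad "Del/." key is mapped
#    by the layout/variant/options shown here, not by the locale files.
setxkbmap -query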
32,561,962
Install nodejs 4 on redhat
Node.js version 4 has been released and is installed on my Windows machine. I'm trying to install the package through yum on Red Hat but I'm not getting the latest version. I tried: sudo yum install -y nodejs but the latest 4.0 version is not installed. How do I install Node.js 4.0 on a Red Hat machine?
Install nodejs 4 on redhat Node.js version 4 has been released and is installed on my Windows machine. I'm trying to install the package through yum on Red Hat but I'm not getting the latest version. I tried: sudo yum install -y nodejs but the latest 4.0 version is not installed. How do I install Node.js 4.0 on a Red Hat machine?
linux, node.js, redhat
5
5,006
5
https://stackoverflow.com/questions/32561962/install-nodejs-4-on-redhat
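The stock RHEL/EPEL repositories of that era did not carry Node.js 4, so a common route was the NodeSource repository. The setup URL below is the one NodeSource published for the 4.x line at the time; treat it as an assumption and review the script before piping it to a shell.

# Rough sketch: add the NodeSource 4.x repo, then install from it.
curl -sL https://rpm.nodesource.com/setup_4.x | sudo bash -
sudo yum install -y nodejs
node --version   # should now report a 4.x release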
62,432,847
Is there a way to catch stack overflow in a process? C++ Linux
I have this following code which goes into infinite recursion and triggers a seg fault when it exhausts the stack limit allocated to it. I am trying to capture this segmentation fault and exit gracefully. However, I was not able to catch this segmentation fault in any of the signal numbers. (A customer is facing this issue and wants a solution for such a use-case. Increasing the stack size by something like "limit stacksize 128M" makes his test pass. However, he is asking for a graceful exit rather than a seg fault. The following code simply reproduces the actual issue not what the actual algorithm does). Any help is appreciated. If something is incorrect in the way I am trying to catch the signal please let me know that too. To compile: g++ test.cc -std=c++0x #include <iostream> #include <signal.h> #include <stdio.h> #include <stdlib.h> #include <string> #include <string.h> int recurse_and_crash (int val) { // Print rough call stack depth at intervals. if ((val %1000) == 0) { std::cout << "\nval: " << val; } return val + recurse_and_crash (val+1); } void signal_handler(int signal, siginfo_t * si, void * arg) { std::cout << "Caught segfault\n"; exit(0); } int main(int argc, char ** argv) { int signal = 11; // SIGSEGV if (argc == 2) { signal = std::stoi(std::string(argv[1])); } struct sigaction sa; memset(&sa, 0, sizeof(struct sigaction)); sigemptyset(&sa.sa_mask); sa.sa_sigaction = signal_handler; sa.sa_flags = SA_SIGINFO; sigaction(signal, &sa, NULL); recurse_and_crash (1); }
Is there a way to catch stack overflow in a process? C++ Linux I have this following code which goes into infinite recursion and triggers a seg fault when it exhausts the stack limit allocated to it. I am trying to capture this segmentation fault and exit gracefully. However, I was not able to catch this segmentation fault in any of the signal numbers. (A customer is facing this issue and wants a solution for such a use-case. Increasing the stack size by something like "limit stacksize 128M" makes his test pass. However, he is asking for a graceful exit rather than a seg fault. The following code simply reproduces the actual issue not what the actual algorithm does). Any help is appreciated. If something is incorrect in the way I am trying to catch the signal please let me know that too. To compile: g++ test.cc -std=c++0x #include <iostream> #include <signal.h> #include <stdio.h> #include <stdlib.h> #include <string> #include <string.h> int recurse_and_crash (int val) { // Print rough call stack depth at intervals. if ((val %1000) == 0) { std::cout << "\nval: " << val; } return val + recurse_and_crash (val+1); } void signal_handler(int signal, siginfo_t * si, void * arg) { std::cout << "Caught segfault\n"; exit(0); } int main(int argc, char ** argv) { int signal = 11; // SIGSEGV if (argc == 2) { signal = std::stoi(std::string(argv[1])); } struct sigaction sa; memset(&sa, 0, sizeof(struct sigaction)); sigemptyset(&sa.sa_mask); sa.sa_sigaction = signal_handler; sa.sa_flags = SA_SIGINFO; sigaction(signal, &sa, NULL); recurse_and_crash (1); }
c++, linux, recursion, stack, redhat
5
1,887
1
https://stackoverflow.com/questions/62432847/is-there-a-way-to-catch-stack-overflow-in-a-process-c-linux
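Before touching the code, it can help to confirm that the crash really is stack exhaustion. The snippet below uses only standard bash and ulimit; the binary name a.out is a placeholder for the compiled test program. (Catching the overflow itself in C++ additionally requires an alternate signal stack via sigaltstack and SA_ONSTACK, since the handler cannot run on the stack that just overflowed.)

# Show the current soft stack limit (in KiB).
ulimit -s
# Re-run the test with a much larger stack in a subshell; if the segfault
# now happens at a much deeper recursion depth, the stack really is the problem.
g++ test.cc -std=c++0x -o a.out
( ulimit -s 131072; ./a.out )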
15,435,751
Installing 32 bit libraries (glibc) on 64 bit RHEL without using yum
I'm trying to get a 32-bit application to run on 64 bit RHEL 6.1, and the machine does not have access to the internet. Is there any way to install 32 bit glibc on 64 bit RHEL without using yum, i.e. just using RPM installs? I grabbed the glibc-*i686.rpm and many of its dependencies from the RHEL 6.1 ISO including nss-softokn-freebl*i686.rpm, but I still can't get it to install without ignoring dependencies (rpm --nodeps).
Installing 32 bit libraries (glibc) on 64 bit RHEL without using yum I'm trying to get a 32-bit application to run on 64 bit RHEL 6.1, and the machine does not have access to the internet. Is there any way to install 32 bit glibc on 64 bit RHEL without using yum, i.e. just using RPM installs? I grabbed the glibc-*i686.rpm and many of its dependencies from the RHEL 6.1 ISO including nss-softokn-freebl*i686.rpm, but I still can't get it to install without ignoring dependencies (rpm --nodeps).
linux, redhat, rhel
5
65,585
1
https://stackoverflow.com/questions/15435751/installing-32-bit-libraries-glibc-on-64-bit-rhel-without-using-yum
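On RHEL 6 the 32-bit glibc and nss-softokn-freebl packages depend on each other, so installing them one at a time fails; feeding them to rpm in a single transaction usually avoids the --nodeps workaround. The file names below are illustrative — use the exact ones copied from the RHEL 6.1 ISO.

# Install the mutually dependent i686 packages in one rpm transaction.
rpm -ivh glibc-*.i686.rpm nss-softokn-freebl-*.i686.rpm
# If further i686 dependencies surface, add them to the same command line
# rather than installing them one by one.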
55,820,850
MySQL FAIL on upgrade from v5.1 to v8.* -- How to recover data
I am currently running a Redhat Server: Distributor ID: RedHatEnterpriseServer Description: Red Hat Enterprise Linux Server release 6.10 (Santiago) Release: 6.10 Codename: Santiago and previously had MySQL version 5.1 installed. I needed to upgrade MySQL to version > 5.6 so first I exported all the databases with: mysqldump [options] > dump.sql . I downloaded the rpm: mysql80-community-release-el6-2.noarch.rpm and ran: sudo rpm -Uvh mysql80-community-release-el6-2.noarch.rpm sudo yum -y update mysql* rpm -qa | grep mysql mysql-community-libs-8.0.15-1.el6.x86_64 mysql80-community-release-el6-2.noarch mysql-community-server-8.0.15-1.el6.x86_64 mysql-community-common-8.0.15-1.el6.x86_64 mysql-community-libs-compat-8.0.15-1.el6.x86_64 mysql-community-client-8.0.15-1.el6.x86_64 Now my problem arises when I try starting mysql: sudo service mysqld start Obviously, it won't start and here are the logs: 2019-04-23T22:04:09.953724Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.15) starting as process 16303 2019-04-23T22:04:10.072176Z 1 [ERROR] [MY-013090] [InnoDB] Unsupported redo log format (0). The redo log was created before MySQL 5.7.9 2019-04-23T22:04:10.072217Z 1 [ERROR] [MY-012930] [InnoDB] Plugin initialization aborted with error Generic error. 2019-04-23T22:04:10.672999Z 1 [ERROR] [MY-011013] [Server] Failed to initialize DD Storage Engine. 2019-04-23T22:04:10.673398Z 0 [ERROR] [MY-010020] [Server] Data Dictionary initialization failed. 2019-04-23T22:04:10.673567Z 0 [ERROR] [MY-010119] [Server] Aborting 2019-04-23T22:04:10.674496Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.15) MySQL Community Server - GPL. 2019-04-23T22:07:57.788396Z 0 [Warning] [MY-011070] [Server] 'Disabling symbolic links using --skip-symbolic-links (or equivalent) is the default. Consider not using this option as it' is deprecated and will be removed in a future release. 2019-04-23T22:07:57.788446Z 0 [Warning] [MY-011068] [Server] The syntax 'expire-logs-days' is deprecated and will be removed in a future release. Please use binlog_expire_logs_seconds instead. 2019-04-23T22:07:57.790578Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.15) starting as process 16750 2019-04-23T22:07:59.289318Z 1 [ERROR] [MY-013168] [InnoDB] Cannot upgrade server earlier than 5.7 to 8.0 2019-04-23T22:08:04.399358Z 1 [ERROR] [MY-011013] [Server] Failed to initialize DD Storage Engine. 2019-04-23T22:08:04.399712Z 0 [ERROR] [MY-010020] [Server] Data Dictionary initialization failed. 2019-04-23T22:08:04.400111Z 0 [ERROR] [MY-010119] [Server] Aborting 2019-04-23T22:08:04.401574Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.15) MySQL Community Server - GPL. 2019-04-23T22:23:36.368160Z 0 [Warning] [MY-011070] [Server] 'Disabling symbolic links using --skip-symbolic-links (or equivalent) is the default. Consider not using this option as it' is deprecated and will be removed in a future release. 2019-04-23T22:23:36.368200Z 0 [Warning] [MY-011068] [Server] The syntax 'expire-logs-days' is deprecated and will be removed in a future release. Please use binlog_expire_logs_seconds instead. 2019-04-23T22:23:36.370634Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.15) starting as process 17915 2019-04-23T22:23:36.772757Z 1 [ERROR] [MY-013168] [InnoDB] Cannot upgrade server earlier than 5.7 to 8.0 2019-04-23T22:23:41.882681Z 1 [ERROR] [MY-011013] [Server] Failed to initialize DD Storage Engine. 
2019-04-23T22:23:41.883054Z 0 [ERROR] [MY-010020] [Server] Data Dictionary initialization failed. 2019-04-23T22:23:41.883362Z 0 [ERROR] [MY-010119] [Server] Aborting 2019-04-23T22:23:41.884282Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.15) MySQL Community Server - GPL. 2019-04-23T23:01:40.642684Z 0 [Warning] [MY-011070] [Server] 'Disabling symbolic links using --skip-symbolic-links (or equivalent) is the default. Consider not using this option as it' is deprecated and will be removed in a future release. 2019-04-23T23:01:40.642724Z 0 [Warning] [MY-011068] [Server] The syntax 'expire-logs-days' is deprecated and will be removed in a future release. Please use binlog_expire_logs_seconds instead. 2019-04-23T23:01:40.646798Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.15) starting as process 20087 2019-04-23T23:01:41.612258Z 1 [ERROR] [MY-013168] [InnoDB] Cannot upgrade server earlier than 5.7 to 8.0 2019-04-23T23:01:46.720034Z 1 [ERROR] [MY-011013] [Server] Failed to initialize DD Storage Engine. 2019-04-23T23:01:46.720414Z 0 [ERROR] [MY-010020] [Server] Data Dictionary initialization failed. 2019-04-23T23:01:46.720645Z 0 [ERROR] [MY-010119] [Server] Aborting 2019-04-23T23:01:46.721479Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.15) MySQL Community Server - GPL. 2019-04-23T23:38:32.052191Z 0 [Warning] [MY-011070] [Server] 'Disabling symbolic links using --skip-symbolic-links (or equivalent) is the default. Consider not using this option as it' is deprecated and will be removed in a future release. 2019-04-23T23:38:32.052242Z 0 [Warning] [MY-011068] [Server] The syntax 'expire-logs-days' is deprecated and will be removed in a future release. Please use binlog_expire_logs_seconds instead. 2019-04-23T23:38:32.056515Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.15) starting as process 22326 2019-04-23T23:38:32.619209Z 1 [ERROR] [MY-013168] [InnoDB] Cannot upgrade server earlier than 5.7 to 8.0 2019-04-23T23:38:37.728572Z 1 [ERROR] [MY-011013] [Server] Failed to initialize DD Storage Engine. 2019-04-23T23:38:37.729004Z 0 [ERROR] [MY-010020] [Server] Data Dictionary initialization failed. 2019-04-23T23:38:37.729451Z 0 [ERROR] [MY-010119] [Server] Aborting 2019-04-23T23:38:37.730880Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.15) MySQL Community Server - GPL. 2019-04-23T23:38:32.619209Z 1 [ERROR] [MY-013168] [InnoDB] Cannot upgrade server earlier than 5.7 to 8.0 What steps are needed to be done to upgrade MySQL corretly while preserving the data in the databases? Right now, I only have the Schemas backed up. A fresh install would be the last resort, so is it possible at my current state to recover the database data and perform an upgrade correctly? Edit I uninstalled MySQL8.0 and installed V 5.6 but on start I am getting this error in the logs - I'm not sure what to do from here: 2019-04-24 10:41:07 14335 [Note] Plugin 'FEDERATED' is disabled. 
2019-04-24 10:41:07 14335 [Note] InnoDB: Using atomics to ref count buffer poolpages 2019-04-24 10:41:07 14335 [Note] InnoDB: The InnoDB memory heap is disabled 2019-04-24 10:41:07 14335 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins 2019-04-24 10:41:07 14335 [Note] InnoDB: Memory barrier is not used 2019-04-24 10:41:07 14335 [Note] InnoDB: Compressed tables use zlib 1.2.11 2019-04-24 10:41:07 14335 [Note] InnoDB: Using Linux native AIO 2019-04-24 10:41:07 14335 [Note] InnoDB: Using CPU crc32 instructions 2019-04-24 10:41:07 14335 [Note] InnoDB: Initializing buffer pool, size = 128.0M 2019-04-24 10:41:07 14335 [Note] InnoDB: Completed initialization of buffer pool 2019-04-24 10:41:07 14335 [ERROR] InnoDB: auto-extending data file ./ibdata1 is of a different size 640 pages (rounded down to MB) than specified in the .cnf file: initial 768 pages, max 0 (relevant if non-zero) pages! 2019-04-24 10:41:07 14335 [ERROR] InnoDB: Could not open or create the system tablespace. If you tried to add new data files to the system tablespace, and it failed here, you should now edit innodb_data_file_path in my.cnf back to what it was, and remove the new ibdata files InnoDB created in this failed attempt. InnoDB only wrote those files full of zeros, but did not yet use them in any way. But be careful: do not remove old data files which contain your precious data! 2019-04-24 10:41:07 14335 [ERROR] Plugin 'InnoDB' init function returned error. 2019-04-24 10:41:07 14335 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed. 2019-04-24 10:41:07 14335 [ERROR] Unknown/unsupported storage engine: InnoDB 2019-04-24 10:41:07 14335 [ERROR] Aborting 2019-04-24 10:41:07 14335 [Note] Binlog end 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'partition' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_SYS_DATAFILES' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_SYS_TABLESPACES' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN_COLS' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_SYS_FIELDS' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_SYS_COLUMNS' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_SYS_INDEXES' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_SYS_TABLESTATS' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_SYS_TABLES' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_FT_INDEX_TABLE' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_FT_INDEX_CACHE' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_FT_CONFIG' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_FT_BEING_DELETED' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_FT_DELETED' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_FT_DEFAULT_STOPWOR D' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_METRICS' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_BUFFER_POOL_STATS' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE_LRU' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX_RESE T' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_CMPMEM_RESET' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_CMPMEM' 
2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_CMP_RESET' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_CMP' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_LOCK_WAITS' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_LOCKS' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_TRX' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'PERFORMANCE_SCHEMA' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'BLACKHOLE' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'ARCHIVE' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'MRG_MYISAM' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'MEMORY' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'CSV' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'MyISAM' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'sha256_password' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'mysql_old_password' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'mysql_native_password' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'binlog' 2019-04-24 10:41:07 14335 [Note] /usr/sbin/mysqld: Shutdown complete
MySQL FAIL on upgrade from v5.1 to v8.* -- How to recover data I am currently running a Redhat Server: Distributor ID: RedHatEnterpriseServer Description: Red Hat Enterprise Linux Server release 6.10 (Santiago) Release: 6.10 Codename: Santiago and previously had MySQL version 5.1 installed. I needed to upgrade MySQL to version > 5.6 so first I exported all the databases with: mysqldump [options] > dump.sql . I downloaded the rpm: mysql80-community-release-el6-2.noarch.rpm and ran: sudo rpm -Uvh mysql80-community-release-el6-2.noarch.rpm sudo yum -y update mysql* rpm -qa | grep mysql mysql-community-libs-8.0.15-1.el6.x86_64 mysql80-community-release-el6-2.noarch mysql-community-server-8.0.15-1.el6.x86_64 mysql-community-common-8.0.15-1.el6.x86_64 mysql-community-libs-compat-8.0.15-1.el6.x86_64 mysql-community-client-8.0.15-1.el6.x86_64 Now my problem arises when I try starting mysql: sudo service mysqld start Obviously, it won't start and here are the logs: 2019-04-23T22:04:09.953724Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.15) starting as process 16303 2019-04-23T22:04:10.072176Z 1 [ERROR] [MY-013090] [InnoDB] Unsupported redo log format (0). The redo log was created before MySQL 5.7.9 2019-04-23T22:04:10.072217Z 1 [ERROR] [MY-012930] [InnoDB] Plugin initialization aborted with error Generic error. 2019-04-23T22:04:10.672999Z 1 [ERROR] [MY-011013] [Server] Failed to initialize DD Storage Engine. 2019-04-23T22:04:10.673398Z 0 [ERROR] [MY-010020] [Server] Data Dictionary initialization failed. 2019-04-23T22:04:10.673567Z 0 [ERROR] [MY-010119] [Server] Aborting 2019-04-23T22:04:10.674496Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.15) MySQL Community Server - GPL. 2019-04-23T22:07:57.788396Z 0 [Warning] [MY-011070] [Server] 'Disabling symbolic links using --skip-symbolic-links (or equivalent) is the default. Consider not using this option as it' is deprecated and will be removed in a future release. 2019-04-23T22:07:57.788446Z 0 [Warning] [MY-011068] [Server] The syntax 'expire-logs-days' is deprecated and will be removed in a future release. Please use binlog_expire_logs_seconds instead. 2019-04-23T22:07:57.790578Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.15) starting as process 16750 2019-04-23T22:07:59.289318Z 1 [ERROR] [MY-013168] [InnoDB] Cannot upgrade server earlier than 5.7 to 8.0 2019-04-23T22:08:04.399358Z 1 [ERROR] [MY-011013] [Server] Failed to initialize DD Storage Engine. 2019-04-23T22:08:04.399712Z 0 [ERROR] [MY-010020] [Server] Data Dictionary initialization failed. 2019-04-23T22:08:04.400111Z 0 [ERROR] [MY-010119] [Server] Aborting 2019-04-23T22:08:04.401574Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.15) MySQL Community Server - GPL. 2019-04-23T22:23:36.368160Z 0 [Warning] [MY-011070] [Server] 'Disabling symbolic links using --skip-symbolic-links (or equivalent) is the default. Consider not using this option as it' is deprecated and will be removed in a future release. 2019-04-23T22:23:36.368200Z 0 [Warning] [MY-011068] [Server] The syntax 'expire-logs-days' is deprecated and will be removed in a future release. Please use binlog_expire_logs_seconds instead. 
2019-04-23T22:23:36.370634Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.15) starting as process 17915 2019-04-23T22:23:36.772757Z 1 [ERROR] [MY-013168] [InnoDB] Cannot upgrade server earlier than 5.7 to 8.0 2019-04-23T22:23:41.882681Z 1 [ERROR] [MY-011013] [Server] Failed to initialize DD Storage Engine. 2019-04-23T22:23:41.883054Z 0 [ERROR] [MY-010020] [Server] Data Dictionary initialization failed. 2019-04-23T22:23:41.883362Z 0 [ERROR] [MY-010119] [Server] Aborting 2019-04-23T22:23:41.884282Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.15) MySQL Community Server - GPL. 2019-04-23T23:01:40.642684Z 0 [Warning] [MY-011070] [Server] 'Disabling symbolic links using --skip-symbolic-links (or equivalent) is the default. Consider not using this option as it' is deprecated and will be removed in a future release. 2019-04-23T23:01:40.642724Z 0 [Warning] [MY-011068] [Server] The syntax 'expire-logs-days' is deprecated and will be removed in a future release. Please use binlog_expire_logs_seconds instead. 2019-04-23T23:01:40.646798Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.15) starting as process 20087 2019-04-23T23:01:41.612258Z 1 [ERROR] [MY-013168] [InnoDB] Cannot upgrade server earlier than 5.7 to 8.0 2019-04-23T23:01:46.720034Z 1 [ERROR] [MY-011013] [Server] Failed to initialize DD Storage Engine. 2019-04-23T23:01:46.720414Z 0 [ERROR] [MY-010020] [Server] Data Dictionary initialization failed. 2019-04-23T23:01:46.720645Z 0 [ERROR] [MY-010119] [Server] Aborting 2019-04-23T23:01:46.721479Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.15) MySQL Community Server - GPL. 2019-04-23T23:38:32.052191Z 0 [Warning] [MY-011070] [Server] 'Disabling symbolic links using --skip-symbolic-links (or equivalent) is the default. Consider not using this option as it' is deprecated and will be removed in a future release. 2019-04-23T23:38:32.052242Z 0 [Warning] [MY-011068] [Server] The syntax 'expire-logs-days' is deprecated and will be removed in a future release. Please use binlog_expire_logs_seconds instead. 2019-04-23T23:38:32.056515Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.15) starting as process 22326 2019-04-23T23:38:32.619209Z 1 [ERROR] [MY-013168] [InnoDB] Cannot upgrade server earlier than 5.7 to 8.0 2019-04-23T23:38:37.728572Z 1 [ERROR] [MY-011013] [Server] Failed to initialize DD Storage Engine. 2019-04-23T23:38:37.729004Z 0 [ERROR] [MY-010020] [Server] Data Dictionary initialization failed. 2019-04-23T23:38:37.729451Z 0 [ERROR] [MY-010119] [Server] Aborting 2019-04-23T23:38:37.730880Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.15) MySQL Community Server - GPL. 2019-04-23T23:38:32.619209Z 1 [ERROR] [MY-013168] [InnoDB] Cannot upgrade server earlier than 5.7 to 8.0 What steps are needed to be done to upgrade MySQL corretly while preserving the data in the databases? Right now, I only have the Schemas backed up. A fresh install would be the last resort, so is it possible at my current state to recover the database data and perform an upgrade correctly? Edit I uninstalled MySQL8.0 and installed V 5.6 but on start I am getting this error in the logs - I'm not sure what to do from here: 2019-04-24 10:41:07 14335 [Note] Plugin 'FEDERATED' is disabled. 
2019-04-24 10:41:07 14335 [Note] InnoDB: Using atomics to ref count buffer poolpages 2019-04-24 10:41:07 14335 [Note] InnoDB: The InnoDB memory heap is disabled 2019-04-24 10:41:07 14335 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins 2019-04-24 10:41:07 14335 [Note] InnoDB: Memory barrier is not used 2019-04-24 10:41:07 14335 [Note] InnoDB: Compressed tables use zlib 1.2.11 2019-04-24 10:41:07 14335 [Note] InnoDB: Using Linux native AIO 2019-04-24 10:41:07 14335 [Note] InnoDB: Using CPU crc32 instructions 2019-04-24 10:41:07 14335 [Note] InnoDB: Initializing buffer pool, size = 128.0M 2019-04-24 10:41:07 14335 [Note] InnoDB: Completed initialization of buffer pool 2019-04-24 10:41:07 14335 [ERROR] InnoDB: auto-extending data file ./ibdata1 is of a different size 640 pages (rounded down to MB) than specified in the .cnf file: initial 768 pages, max 0 (relevant if non-zero) pages! 2019-04-24 10:41:07 14335 [ERROR] InnoDB: Could not open or create the system tablespace. If you tried to add new data files to the system tablespace, and it failed here, you should now edit innodb_data_file_path in my.cnf back to what it was, and remove the new ibdata files InnoDB created in this failed attempt. InnoDB only wrote those files full of zeros, but did not yet use them in any way. But be careful: do not remove old data files which contain your precious data! 2019-04-24 10:41:07 14335 [ERROR] Plugin 'InnoDB' init function returned error. 2019-04-24 10:41:07 14335 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed. 2019-04-24 10:41:07 14335 [ERROR] Unknown/unsupported storage engine: InnoDB 2019-04-24 10:41:07 14335 [ERROR] Aborting 2019-04-24 10:41:07 14335 [Note] Binlog end 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'partition' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_SYS_DATAFILES' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_SYS_TABLESPACES' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN_COLS' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_SYS_FIELDS' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_SYS_COLUMNS' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_SYS_INDEXES' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_SYS_TABLESTATS' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_SYS_TABLES' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_FT_INDEX_TABLE' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_FT_INDEX_CACHE' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_FT_CONFIG' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_FT_BEING_DELETED' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_FT_DELETED' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_FT_DEFAULT_STOPWOR D' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_METRICS' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_BUFFER_POOL_STATS' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE_LRU' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX_RESE T' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_CMPMEM_RESET' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_CMPMEM' 
2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_CMP_RESET' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_CMP' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_LOCK_WAITS' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_LOCKS' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'INNODB_TRX' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'PERFORMANCE_SCHEMA' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'BLACKHOLE' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'ARCHIVE' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'MRG_MYISAM' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'MEMORY' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'CSV' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'MyISAM' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'sha256_password' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'mysql_old_password' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'mysql_native_password' 2019-04-24 10:41:07 14335 [Note] Shutting down plugin 'binlog' 2019-04-24 10:41:07 14335 [Note] /usr/sbin/mysqld: Shutdown complete
mysql, upgrade, redhat, rpm, yum
5
15,114
1
https://stackoverflow.com/questions/55820850/mysql-fail-on-upgrade-from-v5-1-to-v8-how-to-recover-data
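MySQL only supports in-place upgrades one major release at a time, so jumping from 5.1 straight to 8.0 over the old datadir cannot work. Since a full mysqldump exists, one hedged recovery path is to stop trying to reuse the old InnoDB files: move the old datadir aside, let the freshly installed server initialise an empty one, and reload the dump. The paths and service name below are common defaults and may differ on this box.

# Sketch, assuming dump.sql is a complete dump and /var/lib/mysql is the datadir.
sudo service mysqld stop
sudo mv /var/lib/mysql /var/lib/mysql.old-5.1   # keep the old files as a safety net
# Some versions need explicit initialisation of the fresh datadir first,
# e.g. mysql_install_db (5.6) or mysqld --initialize (5.7+); otherwise just:
sudo service mysqld start
mysql -u root -p < dump.sql                     # reload the logical backup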
45,709,068
docker - driver "devicemapper" failed to remove root filesystem after process in container killed
I am using Docker version 17.06.0-ce on Redhat with devicemapper storage. I am launching a container running a long-running service. The master process inside the container sometimes dies for whatever reason. I get the following error message. /bin/bash: line 1: 40 Killed python -u scripts/server.py start go I would like the container to exit and to be restarted by docker. However docker never exits. If I do it manually I get the following error: Error response from daemon: driver "devicemapper" failed to remove root filesystem. After googling, I tried a bunch of things: docker rm -f <container> rm -f <pth to mount> umount <pth to mount> All result in device is busy. The only remedy right now is to reboot the host system which is obviously not a long-term solution. Any ideas?
docker - driver "devicemapper" failed to remove root filesystem after process in container killed I am using Docker version 17.06.0-ce on Redhat with devicemapper storage. I am launching a container running a long-running service. The master process inside the container sometimes dies for whatever reason. I get the following error message. /bin/bash: line 1: 40 Killed python -u scripts/server.py start go I would like the container to exit and to be restarted by docker. However docker never exits. If I do it manually I get the following error: Error response from daemon: driver "devicemapper" failed to remove root filesystem. After googling, I tried a bunch of things: docker rm -f <container> rm -f <pth to mount> umount <pth to mount> All result in device is busy. The only remedy right now is to reboot the host system which is obviously not a long-term solution. Any ideas?
docker, redhat, device-mapper
5
6,526
1
https://stackoverflow.com/questions/45709068/docker-driver-devicemapper-failed-to-remove-root-filesystem-after-process-in
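The "device is busy" symptom is often caused by the container's mount leaking into another process's mount namespace, so the devicemapper device cannot be removed until that process lets go. A hedged way to look for the culprit without rebooting:

# Find processes whose mount namespace still references a docker/devicemapper
# mount (narrow the pattern with the container ID from the failing docker rm).
grep -l docker /proc/*/mounts | grep -v self
# Inspect or restart the offending PIDs (often daemons started before docker),
# then retry removing the container:
docker rm -f <container>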
18,570,177
getpwuid() returns NULL for LDAP user
I'm having issues retrieving current user information of Red Hat Enterprise 6 where the user is an LDAP user? I have some code (actually part of an installation tool) that needs to retrieve the user name, home directory and other details. It is using the getpwuid() call to do this based on the user id. A simplified breakdown: uid_t uid = getuid(); printf("UID = %d\n", uid); errno = 0; struct passwd* udetails = getpwuid(uid); if (udetails != NULL) { printf("User name = %s\n", udetails->pw_name); } else { printf("getpwuid returns NULL, errno=%d\n", errno); } This works without problems where the user is a local user (in that system's /etc/passwd). When the user is an LDAP-authenticated user, the call the getuid returns the user ID or the current user, but the call to getpwuid returns 0, with no error code set in errno. According to the documentation, this means that the user doesn't exist. Should this work? According to the getpwuid manpage: The getpwnam() function returns a pointer to a structure containing the broken-out fields of the record in the password database (e.g., the local password file /etc/passwd, NIS, and LDAP) that matches the username name. The getpwuid() function returns a pointer to a structure containing the broken-out fields of the record in the password database that matches the user ID uid. Is an alternative call required to get the details if the current user was authenticated by LDAP? Is it necessary to open the LDAP database in an application, or should the system call handle that? Additional: I have also now tried this on a RHEL 5 box authenticating against the same LDAP directory. Could this just be a configuration issue on the RHEL 6 box? Or a wider RHEL 6 issue? Additional: /etc/nsswitch.conf as requested by Basile Starynkevitch (commented lines removed): passwd: files sss shadow: files sss group: files sss hosts: files dns bootparams: nisplus [NOTFOUND=return] files ethers: files netmasks: files networks: files protocols: files rpc: files services: files sss netgroup: files sss publickey: nisplus automount: files ldap aliases: files nisplus I'm guessing that some of these should mention ldap at some point? In fact this suggests that it's not using LDAP at all....
getpwuid() returns NULL for LDAP user I'm having issues retrieving current user information of Red Hat Enterprise 6 where the user is an LDAP user? I have some code (actually part of an installation tool) that needs to retrieve the user name, home directory and other details. It is using the getpwuid() call to do this based on the user id. A simplified breakdown: uid_t uid = getuid(); printf("UID = %d\n", uid); errno = 0; struct passwd* udetails = getpwuid(uid); if (udetails != NULL) { printf("User name = %s\n", udetails->pw_name); } else { printf("getpwuid returns NULL, errno=%d\n", errno); } This works without problems where the user is a local user (in that system's /etc/passwd). When the user is an LDAP-authenticated user, the call the getuid returns the user ID or the current user, but the call to getpwuid returns 0, with no error code set in errno. According to the documentation, this means that the user doesn't exist. Should this work? According to the getpwuid manpage: The getpwnam() function returns a pointer to a structure containing the broken-out fields of the record in the password database (e.g., the local password file /etc/passwd, NIS, and LDAP) that matches the username name. The getpwuid() function returns a pointer to a structure containing the broken-out fields of the record in the password database that matches the user ID uid. Is an alternative call required to get the details if the current user was authenticated by LDAP? Is it necessary to open the LDAP database in an application, or should the system call handle that? Additional: I have also now tried this on a RHEL 5 box authenticating against the same LDAP directory. Could this just be a configuration issue on the RHEL 6 box? Or a wider RHEL 6 issue? Additional: /etc/nsswitch.conf as requested by Basile Starynkevitch (commented lines removed): passwd: files sss shadow: files sss group: files sss hosts: files dns bootparams: nisplus [NOTFOUND=return] files ethers: files netmasks: files networks: files protocols: files rpc: files services: files sss netgroup: files sss publickey: nisplus automount: files ldap aliases: files nisplus I'm guessing that some of these should mention ldap at some point? In fact this suggests that it's not using LDAP at all....
linux, ldap, redhat, getpwuid
5
8,631
3
https://stackoverflow.com/questions/18570177/getpwuid-returns-null-for-ldap-user
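getpwuid() goes through the same NSS stack as the getent tool, so a quick shell check tells you whether the problem is in the program or in the host's sssd/nsswitch configuration. Nothing beyond standard glibc tools is assumed here.

# If this prints the LDAP user's passwd entry, NSS is fine and the C code is suspect;
# if it prints nothing, the sssd/nsswitch.conf setup on this RHEL 6 box is the problem.
getent passwd "$(id -u)"
# sssd must be running for the "sss" entries in nsswitch.conf to resolve anything:
service sssd status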
12,104,243
WHOIS server daemon
Are there any WHOIS server daemons I can run on my server to serve my requests? Is it possible to deploy my own WHOIS server at the end of the WHOIS hierarchy, the way DNS servers are?
WHOIS server daemon Are there any WHOIS server daemons I can run on my server to serve my requests? Is it possible to deploy my own WHOIS server at the end of the WHOIS hierarchy, the way DNS servers are?
linux, daemon, redhat, whois
5
5,025
2
https://stackoverflow.com/questions/12104243/whois-server-daemon
11,758,570
Linux daemon start up
I wrote a service on Linux (Red Hat Server Edition 5.1) which is started by a shell script. Currently, when I start my application I manually start my service; now I want the service to start at boot time. To that end I put my script in the init.d folder, but my daemon is not invoked at boot. Does anyone have an idea how to start a daemon at boot time on Linux? This is my sample script, but it is not working: #!/bin/sh # # myservice This shell script takes care of starting and stopping # the <myservice> # # Source function library . /etc/rc.d/init.d/functions # Do preliminary checks here, if any #### START of preliminary checks ######### ##### END of preliminary checks ####### # Handle manual control parameters like start, stop, status, restart, etc. case "$1" in start) # Start daemons. echo -n $"Starting <myservice> daemon: " echo daemon <myservice> echo ;; stop) # Stop daemons. echo -n $"Shutting down <myservice>: " killproc <myservice> echo # Do clean-up works here like removing pid files from /var/run, etc. ;; status) status <myservice> ;; restart) $0 stop $0 start ;; *) echo $"Usage: $0 {start|stop|status|restart}" exit 1 esac exit 0
Linux daemon start up I wrote a service on Linux (Red Hat Server Edition 5.1) which is started by a shell script. Currently, when I start my application I manually start my service; now I want the service to start at boot time. To that end I put my script in the init.d folder, but my daemon is not invoked at boot. Does anyone have an idea how to start a daemon at boot time on Linux? This is my sample script, but it is not working: #!/bin/sh # # myservice This shell script takes care of starting and stopping # the <myservice> # # Source function library . /etc/rc.d/init.d/functions # Do preliminary checks here, if any #### START of preliminary checks ######### ##### END of preliminary checks ####### # Handle manual control parameters like start, stop, status, restart, etc. case "$1" in start) # Start daemons. echo -n $"Starting <myservice> daemon: " echo daemon <myservice> echo ;; stop) # Stop daemons. echo -n $"Shutting down <myservice>: " killproc <myservice> echo # Do clean-up works here like removing pid files from /var/run, etc. ;; status) status <myservice> ;; restart) $0 stop $0 start ;; *) echo $"Usage: $0 {start|stop|status|restart}" exit 1 esac exit 0
linux, linux-kernel, daemon, redhat
5
13,227
4
https://stackoverflow.com/questions/11758570/linux-daemon-start-up
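On RHEL 5, dropping a script into /etc/init.d is not enough by itself: the script needs a chkconfig header, and it must be registered so the rc?.d symlinks are created. A minimal sketch for the init script above (the runlevels and priorities are placeholders):

# Add these two lines near the top of /etc/init.d/myservice:
#   # chkconfig: 345 90 10
#   # description: starts and stops the myservice daemon
# Then register and enable the script:
chkconfig --add myservice
chkconfig myservice on
chkconfig --list myservice   # verify it is "on" for the desired runlevels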
9,009,776
Yum install of home-made RPM giving error
I am trying to install something using "yum install my.rpm". The problem is I am getting: TypeError: an integer is required error: python callback <bound method RPMTransaction.callback of <yum.rpmtrans.RPMTransaction instance at 0x013e3f8>> failed, aborting! What does this mean? I turned on verbosity for the yum install but can't figure anything out. This is RHEL 6.1. Thanks
Yum install of home-made RPM giving error I am trying to install something using "yum install my.rpm". The problem is I am getting: TypeError: an integer is required error: python callback <bound method RPMTransaction.callback of <yum.rpmtrans.RPMTransaction instance at 0x013e3f8>> failed, aborting! What does this mean? I turned on verbosity for the yum install but can't figure anything out. This is RHEL 6.1. Thanks
linux, redhat, rpm, yum, rhel
5
3,096
2
https://stackoverflow.com/questions/9009776/yum-install-of-home-made-rpm-giving-error
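That traceback comes from yum's RPM-transaction callback rather than from the package payload, which often points at something odd in the home-made RPM itself (for instance a malformed header or scriptlet). Two hedged checks that help narrow it down:

# Inspect the package header and any %pre/%post scriptlets for anything unusual.
rpm -qpi my.rpm
rpm -qp --scripts my.rpm
# Try installing with plain rpm; if this succeeds, the problem lies in yum's
# python callback handling rather than in the package contents.
sudo rpm -ivh my.rpm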
31,408,160
How to use a different .bashrc
I have the common .bashrc in my /home/ folder, but I also have another .bashrc (.bashrc1) containing a lot of aliases. I cannot copy the content from one to the other. So I want to know if there is a way to use .bashrc1 as the default, or if there is an additional command to execute the aliases that are in .bashrc1. Thanks
How to use a different .bashrc I have the common .bashrc in my /home/ folder, but I also have another .bashrc (.bashrc1) containing a lot of aliases. I cannot copy the content from one to the other. So I want to know if there is a way to use .bashrc1 as the default, or if there is an additional command to execute the aliases that are in .bashrc1. Thanks
linux, bash, redhat
5
3,053
1
https://stackoverflow.com/questions/31408160/how-to-use-a-different-bashrc
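Bash only reads ~/.bashrc automatically, but any of the following makes the aliases in .bashrc1 available; all three rely on standard bash behaviour, no extra tools assumed.

# Option 1: pull .bashrc1 in from the regular startup file (one line added to ~/.bashrc):
#   [ -f ~/.bashrc1 ] && . ~/.bashrc1
# Option 2: load it on demand in the current shell:
source ~/.bashrc1
# Option 3: start a new interactive shell that reads it instead of ~/.bashrc:
bash --rcfile ~/.bashrc1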
15,056,762
Change install script from Redhat to Ubuntu
An install script (for Microsoft® SQL Server® ODBC Driver 1.0 for Linux ) has been written for Redhat with RPM It uses this code to check if certain packages are installed req_libs=( glibc e2fsprogs krb5-libs openssl ) for lib in ${req_libs[@]} do local present=$(rpm -q -a $lib) >> $log_file 2>&1 if [ "$present" == "" ]; then log "The $lib library was not found installed in the RPM database." log "See README for which libraries are required for the $driver_name." return 1; fi done I have overcome this problem by knowing/trusting that the libraries are installed and simply removing the test, but I'd like to tidy this up now. How can I find which libraries to look for on Ubuntu. Is there a command or translation webpage for Redhat -> Ubuntu Is replacing rpm -q -a with dpkg -s correct?
Change install script from Redhat to Ubuntu An install script (for Microsoft® SQL Server® ODBC Driver 1.0 for Linux ) has been written for Redhat with RPM It uses this code to check if certain packages are installed req_libs=( glibc e2fsprogs krb5-libs openssl ) for lib in ${req_libs[@]} do local present=$(rpm -q -a $lib) >> $log_file 2>&1 if [ "$present" == "" ]; then log "The $lib library was not found installed in the RPM database." log "See README for which libraries are required for the $driver_name." return 1; fi done I have overcome this problem by knowing/trusting that the libraries are installed and simply removing the test, but I'd like to tidy this up now. How can I find which libraries to look for on Ubuntu. Is there a command or translation webpage for Redhat -> Ubuntu Is replacing rpm -q -a with dpkg -s correct?
ubuntu, redhat, rpm, dpkg
5
1,596
1
https://stackoverflow.com/questions/15056762/change-install-script-from-redhat-to-ubuntu
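On Debian/Ubuntu the presence test itself is straightforward with dpkg -s, but the package names do not map one-to-one; the Ubuntu names below are a best guess for the Red Hat list in the question and should be double-checked with apt-cache search. The loop assumes it still lives inside the same shell function as the original check (hence the return).

# Hypothetical Ubuntu equivalents of glibc, e2fsprogs, krb5-libs and openssl.
req_libs=( libc6 e2fsprogs libkrb5-3 openssl )
for lib in "${req_libs[@]}"
do
    if ! dpkg -s "$lib" >/dev/null 2>&1; then
        log "The $lib package was not found in the dpkg database."
        log "See README for which libraries are required for the $driver_name."
        return 1
    fi
done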
13,954,195
How to get OCI lib to work on red hat machine with R Oracle?
I need to get the OCI lib working on my RHEL 6.3 machine and I am experiencing some trouble with OCI header files that can't be found. I have installed (using yum install) oracle-instantclient11.2-basic-11.2.0.3.0-1.x86_64.rpm because, according to this official page, it's all I need to run OCI. To test the whole thing in general I've installed sqlplus64, which worked after I set export LD_LIBRARY_PATH=/usr/lib/oracle/11.2/client64/lib . Unfortunately the header files couldn't be found after setting LD_LIBRARY_PATH . Actually I am not surprised, because there is no include directory in any of these Oracle paths. So the question is: where do I get these missing header files from? Or are they actually already there and I just can't find them? Btw: I am doing this whole exercise because I want to use ROracle on my RStudio server and this R package depends on the OCI library. Once I am back in R territory the road gets much less bumpy for me. EDIT: this documentation helped me a little further. I think I have now found some header files in "/usr/include/oracle/11.2/client64". But which variable do I have to set to this location?
How to get OCI lib to work on red hat machine with R Oracle? I need to get the OCI lib working on my RHEL 6.3 machine and I am experiencing some trouble with OCI header files that can't be found. I have installed (using yum install) oracle-instantclient11.2-basic-11.2.0.3.0-1.x86_64.rpm because, according to this official page, it's all I need to run OCI. To test the whole thing in general I've installed sqlplus64, which worked after I set export LD_LIBRARY_PATH=/usr/lib/oracle/11.2/client64/lib . Unfortunately the header files couldn't be found after setting LD_LIBRARY_PATH . Actually I am not surprised, because there is no include directory in any of these Oracle paths. So the question is: where do I get these missing header files from? Or are they actually already there and I just can't find them? Btw: I am doing this whole exercise because I want to use ROracle on my RStudio server and this R package depends on the OCI library. Once I am back in R territory the road gets much less bumpy for me. EDIT: this documentation helped me a little further. I think I have now found some header files in "/usr/include/oracle/11.2/client64". But which variable do I have to set to this location?
oracle-database, r, redhat, oracle-call-interface, rstudio-server
5
10,085
2
https://stackoverflow.com/questions/13954195/how-to-get-oci-lib-to-work-on-red-hat-machine-with-r-oracle
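The OCI headers ship in the separate Instant Client -devel package, which is why the basic package has no include directory. The sketch below shows one plausible way to point ROracle at the libraries and headers; the variable names OCI_LIB and OCI_INC are assumptions on my part — confirm the exact names against ROracle's INSTALL notes before relying on them.

# The -devel package is what provides the OCI header files.
sudo yum install oracle-instantclient11.2-devel
# Assumed variable names (verify in ROracle's INSTALL file):
export OCI_LIB=/usr/lib/oracle/11.2/client64/lib
export OCI_INC=/usr/include/oracle/11.2/client64
R CMD INSTALL ROracle_*.tar.gz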
11,995,894
umask setting changes after cd
I've got something odd to report. On my newly configured RHEL5 server my shell is set to /bin/bash I have umask set to 002 in .bashrc. When I first log in, umask appears to work correctly: $ touch a $ ls -l a -rw-rw-r-- etc..... if I create another file it works: $ touch b $ ls -l b -rw-rw-r-- etc..... but... if I change directory (to any directory), then umask gets set back 022: $ cd /var/www/whatever $ touch c $ ls -l c -rw-r--r-- etc..... completely bizarre. Anybody seen anything like this? Can they think of anything to check? why would the umask setting change after cd'ing? Thanks, -Charlie
umask setting changes after cd I've got something odd to report. On my newly configured RHEL5 server my shell is set to /bin/bash I have umask set to 002 in .bashrc. When I first log in, umask appears to work correctly: $ touch a $ ls -l a -rw-rw-r-- etc..... if I create another file it works: $ touch b $ ls -l b -rw-rw-r-- etc..... but... if I change directory (to any directory), then umask gets set back 022: $ cd /var/www/whatever $ touch c $ ls -l c -rw-r--r-- etc..... completely bizarre. Anybody seen anything like this? Can they think of anything to check? why would the umask setting change after cd'ing? Thanks, -Charlie
linux, redhat, umask
5
1,581
2
https://stackoverflow.com/questions/11995894/umask-setting-changes-after-cd
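Since a bare cd should never touch umask, something that runs on every prompt or wraps cd is the usual suspect. These checks are plain bash and assume nothing about the box beyond what the question shows:

# Is cd really the builtin, or an alias/function that resets umask?
type cd
alias cd 2>/dev/null
# Does anything run before every prompt?
echo "$PROMPT_COMMAND"
# Who else sets umask besides ~/.bashrc?
grep -rn umask ~/.bashrc ~/.bash_profile /etc/bashrc /etc/profile /etc/profile.d 2>/dev/null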
9,171,983
any met python import paramiko and Crypto err like "Not using mpz_powm_sec."?
OS: redhat 5.2 i386 python: 2.7 err like: Python 2.7.2 (default, Feb 7 2012, 11:16:30) [GCC 4.1.2 20071124 (Red Hat 4.1.2-42)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import paramiko /home/master/local/lib/python2.7/site-packages/Crypto/Util/number.py:57: PowmInsecureWarning: Not using mpz_powm_sec. You should rebuild using libgmp >= 5 to avoid timing attack vulnerability. _warn("Not using mpz_powm_sec. You should rebuild using libgmp >= 5 to avoid timing attack vulnerability.", PowmInsecureWarning) it's my libgmp version: ldconfig -p |grep libgmp libgmpxx.so.3 (libc6, hwcap: 0x0000000004000000) => /usr/lib/sse2/libgmpxx.so.3 libgmpxx.so.3 (libc6) => /usr/lib/libgmpxx.so.3 libgmpxx.so (libc6) => /usr/lib/libgmpxx.so libgmp.so.3 (libc6, hwcap: 0x0000000004000000) => /usr/lib/sse2/libgmp.so.3 libgmp.so.3 (libc6) => /usr/lib/libgmp.so.3 libgmp.so (libc6) => /usr/lib/libgmp.so all above seems like related to libgmp,that confused me.PLZ show me some suggestion,thx!
any met python import paramiko and Crypto err like &quot;Not using mpz_powm_sec.&quot;? OS: redhat 5.2 i386 python: 2.7 err like: Python 2.7.2 (default, Feb 7 2012, 11:16:30) [GCC 4.1.2 20071124 (Red Hat 4.1.2-42)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import paramiko /home/master/local/lib/python2.7/site-packages/Crypto/Util/number.py:57: PowmInsecureWarning: Not using mpz_powm_sec. You should rebuild using libgmp >= 5 to avoid timing attack vulnerability. _warn("Not using mpz_powm_sec. You should rebuild using libgmp >= 5 to avoid timing attack vulnerability.", PowmInsecureWarning) it's my libgmp version: ldconfig -p |grep libgmp libgmpxx.so.3 (libc6, hwcap: 0x0000000004000000) => /usr/lib/sse2/libgmpxx.so.3 libgmpxx.so.3 (libc6) => /usr/lib/libgmpxx.so.3 libgmpxx.so (libc6) => /usr/lib/libgmpxx.so libgmp.so.3 (libc6, hwcap: 0x0000000004000000) => /usr/lib/sse2/libgmp.so.3 libgmp.so.3 (libc6) => /usr/lib/libgmp.so.3 libgmp.so (libc6) => /usr/lib/libgmp.so all above seems like related to libgmp,that confused me.PLZ show me some suggestion,thx!
python, linux, redhat
5
12,184
2
https://stackoverflow.com/questions/9171983/any-met-python-import-paramiko-and-crypto-err-like-not-using-mpz-powm-sec
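The message is a warning from PyCrypto, not an error — the import still works — and it goes away once PyCrypto is built against GMP 5 or newer. A rough sketch of that rebuild on an old RHEL 5 box, assuming pip is available; the GMP tarball name and the /usr/local prefix are placeholders:

# Build a modern GMP into /usr/local, then rebuild PyCrypto against it.
tar xf gmp-5.x.y.tar.bz2 && cd gmp-5.x.y
./configure --prefix=/usr/local && make && sudo make install
# Reinstall PyCrypto so its fast-math module links against the new libgmp.
sudo pip install --force-reinstall pycrypto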
77,411,286
I can't use the command podman compose in RHEL 9
It seems I can't use the command podman compose in RHEL 9. If I understand correctly, podman should provide a wrapper for docker-compose and podman-compose as stated here: [URL] However, when I try to execute the command podman compose I get an error: [root@homeserver ~]# podman compose Error: unrecognized command podman compose Try 'podman --help' for more information I installed podman-docker and docker-compose (as standalone) and I'm able to run compose files via docker-compose. I'm curious about that wrapper though. Am I doing something wrong?
I can't use the command podman compose in RHEL 9 It seems I can't use the command podman compose in RHEL 9. If I understand correctly, podman should provide a wrapper for docker-compose and podman-compose as stated here: [URL] However, when I try to execute the command podman compose I get an error: [root@homeserver ~]# podman compose Error: unrecognized command podman compose Try 'podman --help' for more information I installed podman-docker and docker-compose (as standalone) and I'm able to run compose files via docker-compose. I'm curious about that wrapper though. Am I doing something wrong?
docker-compose, redhat, podman, redhat-containers, podman-compose
5
12,525
3
https://stackoverflow.com/questions/77411286/i-cant-use-the-command-podman-compose-in-rhel-9
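The podman compose wrapper only exists in newer Podman releases and merely execs an external provider (podman-compose or docker-compose), so both halves have to be present. A hedged check/installation sequence — whether podman-compose is available from EPEL on this particular RHEL 9 host is an assumption:

# The wrapper subcommand only exists in sufficiently new Podman builds.
podman --version
# Install a compose provider for the wrapper to delegate to
# (packaged in EPEL and on PyPI; pick whichever suits the host):
sudo dnf install podman-compose     # or: pip3 install --user podman-compose
# With a provider installed and a new enough podman, this should now work:
podman compose up -d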
38,378,707
could not access directory "/usr/local/pgsql/data": Permission denied, while trying to set up database cluster
I am trying to set up Postgres in Redhat. I do the following steps: $ sudo yum install postgresql-server postgresql-contrib It is successfully installed. Then i try to set up a database cluster. $ initdb -D /usr/local/pgsql/data I get the error: initdb: could not access directory "/usr/local/pgsql/data": Permission denied New to linux. Not being able to move forward Figured out the solution. I went to the specified folder and changed its access permission. It worked after that.
could not access directory "/usr/local/pgsql/data": Permission denied, while trying to set up database cluster I am trying to set up Postgres in Redhat. I do the following steps: $ sudo yum install postgresql-server postgresql-contrib It is successfully installed. Then i try to set up a database cluster. $ initdb -D /usr/local/pgsql/data I get the error: initdb: could not access directory "/usr/local/pgsql/data": Permission denied New to linux. Not being able to move forward Figured out the solution. I went to the specified folder and changed its access permission. It worked after that.
linux, postgresql, centos, redhat
5
21,504
3
https://stackoverflow.com/questions/38378707/could-not-access-directory-usr-local-pgsql-data-permission-denied-while-try
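For completeness, the usual way to avoid the permission error hit above is to create the data directory as root, hand it over to the postgres account, and run initdb as that account rather than loosening permissions under /usr/local. The paths match the question; the service account name postgres is the packaged default.

sudo mkdir -p /usr/local/pgsql/data
sudo chown postgres:postgres /usr/local/pgsql/data
sudo -u postgres initdb -D /usr/local/pgsql/data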
15,595,879
How to set environment variables for R to use in Tomcat on RedHat Linux (RHEL6)
I'm trying to set up R and Tomcat on RHEL6 (6.4) I have installed R and can run it. I have installed Tomcat 7 and can host files file. I have packaged an application as a WAR file and deployed it using tomcat. The application runs fine in all aspects until it uses any R component. This is where it crashes out with the following error as seen in catalina.out: Cannot find JRI native library! Please make sure that the JRI native library is in a directory listed in java.li brary.path. java.lang.UnsatisfiedLinkError: /usr/local/lib64/R-2.15.3/library/rJava/jri/libj ri.so: libR.so: cannot open shared object file: Too many levels of symbolic link s at java.lang.ClassLoader$NativeLibrary.load(Native Method) at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1750) at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1675) at java.lang.Runtime.loadLibrary0(Runtime.java:840) at java.lang.System.loadLibrary(System.java:1047) at org.rosuda.JRI.Rengine.<clinit>(Rengine.java:19) I do have rJava installed under R: install.packages("rJava") It installed fine and I have rJava inside the R's library folder. I have defined the following in /etc/profile: export JAVA_HOME=/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre export R_HOME=/usr/local/lib64/R-2.15.3 PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:$R_HOME/bin export PATH export LD_LIBRARY_PATH=$R_HOME/lib/libR.so,$JAVA_HOME/lib/amd64/server/libjvm.so To my understanding, that should set JAVA_HOME, R_HOME, PATH, and LD_LIBRARY_PATH globally for all users on the server. I know Tomcat runs under root and I can confirm that root was able to see all the above paths as set above via " echo $JAVA_HOME ", " echo $R_HOME ", " echo $LD_LIBRARY_PATH ", " echo $PATH " So I'm not sure why it's complaining that it can't open those .so files. Also, when it crashes out, it shuts down Tomcat. Thanks!
How to set environment variables for R to use in Tomcat on RedHat Linux (RHEL6) I'm trying to set up R and Tomcat on RHEL6 (6.4) I have installed R and can run it. I have installed Tomcat 7 and can host files file. I have packaged an application as a WAR file and deployed it using tomcat. The application runs fine in all aspects until it uses any R component. This is where it crashes out with the following error as seen in catalina.out: Cannot find JRI native library! Please make sure that the JRI native library is in a directory listed in java.li brary.path. java.lang.UnsatisfiedLinkError: /usr/local/lib64/R-2.15.3/library/rJava/jri/libj ri.so: libR.so: cannot open shared object file: Too many levels of symbolic link s at java.lang.ClassLoader$NativeLibrary.load(Native Method) at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1750) at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1675) at java.lang.Runtime.loadLibrary0(Runtime.java:840) at java.lang.System.loadLibrary(System.java:1047) at org.rosuda.JRI.Rengine.<clinit>(Rengine.java:19) I do have rJava installed under R: install.packages("rJava") It installed fine and I have rJava inside the R's library folder. I have defined the following in /etc/profile: export JAVA_HOME=/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre export R_HOME=/usr/local/lib64/R-2.15.3 PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:$R_HOME/bin export PATH export LD_LIBRARY_PATH=$R_HOME/lib/libR.so,$JAVA_HOME/lib/amd64/server/libjvm.so To my understanding, that should set JAVA_HOME, R_HOME, PATH, and LD_LIBRARY_PATH globally for all users on the server. I know Tomcat runs under root and I can confirm that root was able to see all the above paths as set above via " echo $JAVA_HOME ", " echo $R_HOME ", " echo $LD_LIBRARY_PATH ", " echo $PATH " So I'm not sure why it's complaining that it can't open those .so files. Also, when it crashes out, it shuts down Tomcat. Thanks!
linux, redhat, rhel, rjava, jri
5
4,445
2
https://stackoverflow.com/questions/15595879/how-to-set-environment-variables-for-r-to-use-in-tomcat-on-redhat-linux-rhel6
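Two details in the question stand out: LD_LIBRARY_PATH must be a colon-separated list of directories (not a comma-separated list of .so files), and a Tomcat started as a service never reads /etc/profile, so the variables have to be handed to Tomcat itself. A hedged sketch using Tomcat's optional bin/setenv.sh hook (created if absent, picked up by catalina.sh); the R and JVM paths are copied from the question:

# /path/to/tomcat/bin/setenv.sh
export R_HOME=/usr/local/lib64/R-2.15.3
export LD_LIBRARY_PATH=$R_HOME/lib:/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre/lib/amd64/server:$LD_LIBRARY_PATH
# JRI also needs its own directory on java.library.path:
export CATALINA_OPTS="$CATALINA_OPTS -Djava.library.path=$R_HOME/library/rJava/jri"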
13,920,965
Cannot open display on RHEL
I am trying to ssh to a server (myserver) installed with RHEL 5.8 from a desktop client (mydesktop) with RHEL 6.2. I have group installed the "X Window" on the remote server, the DISPLAY variable on the remote server is also set to be localhost:0.0, but I still cannot get firefox started. The command to connect is $ ssh -X -l myname myserver The error message is $ firefox Error: cannot open display: localhost:0.0 I tried to execute the command on myserver below $ xhost +localhost but it gives me an error message xhost: unable to open display "localhost:0.0" There are three phenomena I want to mention another user of mydesktop is able to start firefox after logging into myserver. I was able to start firefox when I remotely logged into another server: myserver2. firefox is just an example. In general, I cannot launch any x window programs. I have no idea what is going on. Please help me. This is an update of my problem. The problem was solved "partially". What I did was to delete the "export DISPLAY==localhost:0.0" from my ".bashrc" file, logout and then login again and I can start firefox!!! However, this is not the end of the story. I have a new problem: $ sudo wireshark does not work. Here is the error message: [myself@myserver ~]$ sudo wireshark debug1: client_input_channel_open: ctype x11 rchan 2 win 65536 max 16384 debug1: client_request_x11: request from 127.0.0.1 46595 debug1: channel 1: new [x11] debug1: confirm x11 debug1: client_input_channel_open: ctype x11 rchan 3 win 65536 max 16384 debug1: client_request_x11: request from 127.0.0.1 46596 debug1: channel 2: new [x11] debug1: confirm x11 X11 connection rejected because of wrong authentication. debug1: channel 2: free: x11, nchannels 3 The application 'wireshark' lost its connection to the display localhost:10.0; most likely the X server was shut down or you killed/destroyed the application. debug1: channel 1: FORCE input drain Why can't I start x window under sudo?
Cannot open display on RHEL I am trying to ssh to a server (myserver) installed with RHEL 5.8 from a desktop client (mydesktop) with RHEL 6.2. I have group installed the "X Window" on the remote server, the DISPLAY variable on the remote server is also set to be localhost:0.0, but I still cannot get firefox started. The command to connect is $ ssh -X -l myname myserver The error message is $ firefox Error: cannot open display: localhost:0.0 I tried to execute the command on myserver below $ xhost +localhost but it gives me an error message xhost: unable to open display "localhost:0.0" There are three phenomena I want to mention another user of mydesktop is able to start firefox after logging into myserver. I was able to start firefox when I remotely logged into another server: myserver2. firefox is just an example. In general, I cannot launch any x window programs. I have no idea what is going on. Please help me. This is an update of my problem. The problem was solved "partially". What I did was to delete the "export DISPLAY==localhost:0.0" from my ".bashrc" file, logout and then login again and I can start firefox!!! However, this is not the end of the story. I have a new problem: $ sudo wireshark does not work. Here is the error message: [myself@myserver ~]$ sudo wireshark debug1: client_input_channel_open: ctype x11 rchan 2 win 65536 max 16384 debug1: client_request_x11: request from 127.0.0.1 46595 debug1: channel 1: new [x11] debug1: confirm x11 debug1: client_input_channel_open: ctype x11 rchan 3 win 65536 max 16384 debug1: client_request_x11: request from 127.0.0.1 46596 debug1: channel 2: new [x11] debug1: confirm x11 X11 connection rejected because of wrong authentication. debug1: channel 2: free: x11, nchannels 3 The application 'wireshark' lost its connection to the display localhost:10.0; most likely the X server was shut down or you killed/destroyed the application. debug1: channel 1: FORCE input drain Why can't I start x window under sudo?
linux, firefox, x11, redhat
5
51,789
1
https://stackoverflow.com/questions/13920965/cannot-open-display-on-rhel
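The "X11 connection rejected because of wrong authentication" part of the question above is usually just root not having the forwarded-session cookie that lives in the invoking user's ~/.Xauthority. A sketch of two common workarounds, run as the user who opened the ssh -X session (standard xauth/sudo usage; option 1 may need SETENV/env_keep permission in sudoers):

```sh
# option 1: keep your own DISPLAY and auth file when escalating
sudo DISPLAY=$DISPLAY XAUTHORITY=$HOME/.Xauthority wireshark

# option 2: copy the cookie for this display into root's xauth database
xauth extract - $DISPLAY | sudo xauth merge -
sudo wireshark
```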
2,999,347
Linux per-process resource limits - a deep Red Hat Mystery
I have my own multithreaded C program which scales in speed smoothly with the number of CPU cores.. I can run it with 1, 2, 3, etc threads and get linear speedup.. up to about 5.5x speed on a 6-core CPU on a Ubuntu Linux box. I had an opportunity to run the program on a very high end Sunfire x4450 with 4 quad-core Xeon processors, running Red Hat Enterprise Linux. I was eagerly anticipating seeing how fast the 16 cores could run my program with 16 threads.. But it runs at the same speed as just TWO threads! Much hair-pulling and debugging later, I see that my program really is creating all the threads, they really are running simultaneously, but the threads themselves are slower than they should be. 2 threads runs about 1.7x faster than 1, but 3, 4, 8, 10, 16 threads all run at just net 1.9x! I can see all the threads are running (not stalled or sleeping), they're just slow. To check that the HARDWARE wasn't at fault, I ran SIXTEEN copies of my program independently, simultaneously. They all ran at full speed. There really are 16 cores and they really do run at full speed and there really is enough RAM (in fact this machine has 64GB, and I only use 1GB per process). So, my question is if there's some OPERATING SYSTEM explanation, perhaps some per-process resource limit which automatically scales back thread scheduling to keep one process from hogging the machine. Clues are: My program does not access the disk or network. It's CPU limited. Its speed scales linearly on a single CPU box in Ubuntu Linux with a hexacore i7 for 1-6 threads. 6 threads is effectively 6x speedup. My program never runs faster than 2x speedup on this 16 core Sunfire Xeon box, for any number of threads from 2-16. Running 16 copies of my program single threaded runs perfectly, all 16 running at once at full speed. top shows 1600% of CPUs allocated. /proc/cpuinfo shows all 16 cores running at full 2.9GHz speed (not low frequency idle speed of 1.6GHz) There's 48GB of RAM free, it is not swapping. What's happening? Is there some process CPU limit policy? How could I measure it if so? What else could explain this behavior? Thanks for your ideas to solve this, the Great Xeon Slowdown Mystery of 2010!
Linux per-process resource limits - a deep Red Hat Mystery I have my own multithreaded C program which scales in speed smoothly with the number of CPU cores.. I can run it with 1, 2, 3, etc threads and get linear speedup.. up to about 5.5x speed on a 6-core CPU on a Ubuntu Linux box. I had an opportunity to run the program on a very high end Sunfire x4450 with 4 quad-core Xeon processors, running Red Hat Enterprise Linux. I was eagerly anticipating seeing how fast the 16 cores could run my program with 16 threads.. But it runs at the same speed as just TWO threads! Much hair-pulling and debugging later, I see that my program really is creating all the threads, they really are running simultaneously, but the threads themselves are slower than they should be. 2 threads runs about 1.7x faster than 1, but 3, 4, 8, 10, 16 threads all run at just net 1.9x! I can see all the threads are running (not stalled or sleeping), they're just slow. To check that the HARDWARE wasn't at fault, I ran SIXTEEN copies of my program independently, simultaneously. They all ran at full speed. There really are 16 cores and they really do run at full speed and there really is enough RAM (in fact this machine has 64GB, and I only use 1GB per process). So, my question is if there's some OPERATING SYSTEM explanation, perhaps some per-process resource limit which automatically scales back thread scheduling to keep one process from hogging the machine. Clues are: My program does not access the disk or network. It's CPU limited. Its speed scales linearly on a single CPU box in Ubuntu Linux with a hexacore i7 for 1-6 threads. 6 threads is effectively 6x speedup. My program never runs faster than 2x speedup on this 16 core Sunfire Xeon box, for any number of threads from 2-16. Running 16 copies of my program single threaded runs perfectly, all 16 running at once at full speed. top shows 1600% of CPUs allocated. /proc/cpuinfo shows all 16 cores running at full 2.9GHz speed (not low frequency idle speed of 1.6GHz) There's 48GB of RAM free, it is not swapping. What's happening? Is there some process CPU limit policy? How could I measure it if so? What else could explain this behavior? Thanks for your ideas to solve this, the Great Xeon Slowdown Mystery of 2010!
linux, redhat, ulimit, multithreading
5
3,327
3
https://stackoverflow.com/questions/2999347/linux-per-process-resource-limits-a-deep-red-hat-mystery
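Before suspecting a Red Hat scheduling policy, a few quick checks can rule the kernel in or out; when independent processes scale but threads in one process do not, the usual explanation is contention on something the threads share (a lock inside malloc, rand, or another library call) rather than an OS per-process CPU cap. A diagnostic sketch with standard tools — replace <pid> with the id of the running process:

```sh
# is the process confined to a subset of CPUs?
taskset -cp <pid>                       # affinity mask of the running process
grep Cpus_allowed_list /proc/<pid>/status
cat /proc/<pid>/cpuset 2>/dev/null      # cpusets can silently pin a process

# any resource limits applied to the launching shell?
ulimit -a
```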
30,957,994
VI editor - saving filename in :wq
After typing whatever I need in VI, I wanted to save it to a file with :wq filename. But if I have typed in the wrong filename, there seems to be no way for me to amend it. Moving the cursor to the yth character and typing x will replace the wrong character (yth) with x. Pressing backspace or delete will just move the cursor left instead of removing the wrong character. Pressing esc or the back arrow will save the file. How do I change or delete a wrong word/character in :wq wrongfilename ? e.g.: wq wrongfilename -- I want to remove filename, how do I do that?
VI editor - saving filename in :wq After typing whatever I need in VI, I wanted to save it to a file with :wq filename. But if I have typed in the wrong filename, there seems to be no way for me to amend it. Moving the cursor to the yth character and typing x will replace the wrong character (yth) with x. Pressing backspace or delete will just move the cursor left instead of removing the wrong character. Pressing esc or the back arrow will save the file. How do I change or delete a wrong word/character in :wq wrongfilename ? e.g.: wq wrongfilename -- I want to remove filename, how do I do that?
linux, unix, centos, redhat, vi
5
6,042
2
https://stackoverflow.com/questions/30957994/vi-editor-saving-filename-in-wq
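For what it's worth, in Vim (the vi most RHEL/CentOS installs actually provide) the : command line can be edited before Enter is pressed; classic vi generally honours the terminal's word-erase/kill characters for the same effect. A short keystroke sketch:

```
:wq wrongfilename     " command line typed but Enter not yet pressed
"   CTRL-W  deletes the word before the cursor (drops 'wrongfilename')
"   CTRL-U  clears the whole command line so it can be retyped
"   CTRL-C  abandons the command line without executing anything
```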
17,298,725
Selenium stalls at "Launching Firefox...", no errors or exceptions
Trying to run Selenium on our RedHat box remotely just stays at "Launching Firefox..." without any error messages to go on. I have a symlink from /usr/bin/firefox that goes to /usr/lib64/firefox/firefox. The RedHat machine has Firefox ESR 17.0.6 installed. I'm using Xming and running Firefox by just typing "firefox" in the terminal works fine. I tried running Selenium through Xvfb, but it hangs at the same place (Xvfb verified working generally with "firefox &" and taking a screenshot). The below is the terminal input and output (anonymized): [user@redhat selenium-test]$ java -jar selenium-server-standalone.jar -trustAllSSLCertificates -htmlSuite "*firefox" [URL] suite_FILE.html tmp_results-FILE.html -firefoxProfileTemplate "/home/user/.mozilla/firefox/wwjnyifu.Selenium" Jun 25, 2013 2:51:41 PM org.openqa.grid.selenium.GridLauncher main INFO: Launching a standalone server 14:51:41.817 INFO - Java: Sun Microsystems Inc. 20.12-b01 14:51:41.818 INFO - OS: Linux 2.6.32-279.el6.x86_64 amd64 14:51:41.836 INFO - v2.33.0, with Core v2.33.0. Built from revision 4e90c97 14:51:41.981 INFO - RemoteWebDriver instances should connect to: [URL] 14:51:41.982 INFO - Version Jetty/5.1.x 14:51:41.983 INFO - Started HttpContext[/selenium-server/driver,/selenium-server/driver] 14:51:41.983 INFO - Started HttpContext[/selenium-server,/selenium-server] 14:51:41.984 INFO - Started HttpContext[/,/] 14:51:52.538 INFO - Started org.openqa.jetty.jetty.servlet.ServletHandler@c0b76fa 14:51:52.538 INFO - Started HttpContext[/wd,/wd] 14:51:52.546 INFO - Started SocketListener on 0.0.0.0:4444 14:51:52.546 INFO - Started org.openqa.jetty.jetty.Server@b34bed0 jar:file:/home/user/selenium-test/selenium-server-standalone.jar!/customProfileDirCUSTFFCHROME 14:51:52.791 INFO - Preparing Firefox profile... 14:51:53.343 INFO - Launching Firefox... ^C15:03:18.657 INFO - Shutting down... I gave it almost 10 minutes before pressing CTRL+C. With debugging, not much more to go on: 08:40:37.183 INFO [10] org.openqa.grid.selenium.GridLauncher - Launching a standalone server 08:40:37.243 INFO [10] org.openqa.selenium.server.SeleniumServer - Writing debug logs to selenium.log 08:40:37.243 INFO [10] org.openqa.selenium.server.SeleniumServer - Java: Sun Microsystems Inc. 20.12-b01 08:40:37.243 INFO [10] org.openqa.selenium.server.SeleniumServer - OS: Linux 2.6.32-279.el6.x86_64 amd64 08:40:37.259 INFO [10] org.openqa.selenium.server.SeleniumServer - v2.33.0, with Core v2.33.0. Built from revision 4e90c97 08:40:37.420 INFO [10] org.openqa.selenium.server.SeleniumServer - RemoteWebDriver instances should connect to: [URL] 08:40:37.421 INFO [10] org.openqa.jetty.http.HttpServer - Version Jetty/5.1.x 08:40:37.422 INFO [10] org.openqa.jetty.util.Container - Started HttpContext[/selenium-server/driver,/selenium-server/driver] 08:40:37.423 INFO [10] org.openqa.jetty.util.Container - Started HttpContext[/selenium-server,/selenium-server] 08:40:37.423 INFO [10] org.openqa.jetty.util.Container - Started HttpContext[/,/] 08:40:37.439 INFO [10] org.openqa.jetty.util.Container - Started org.openqa.jetty.jetty.servlet.ServletHandler@851052d 08:40:37.439 INFO [10] org.openqa.jetty.util.Container - Started HttpContext[/wd,/wd] 08:40:37.444 INFO [10] org.openqa.jetty.http.SocketListener - Started SocketListener on 0.0.0.0:4444 08:40:37.445 INFO [10] org.openqa.jetty.util.Container - Started org.openqa.jetty.jetty.Server@252f0999 08:40:37.737 INFO [10] org.openqa.selenium.server.browserlaunchers.FirefoxChromeLauncher - Preparing Firefox profile... 
08:40:38.289 INFO [10] org.openqa.selenium.server.browserlaunchers.FirefoxChromeLauncher - Launching Firefox... 08:42:56.271 INFO [10] org.openqa.grid.selenium.GridLauncher - Launching a standalone server 08:42:56.335 INFO [10] org.openqa.selenium.server.SeleniumServer - Writing debug logs to selenium.log 08:42:56.336 INFO [10] org.openqa.selenium.server.SeleniumServer - Java: Sun Microsystems Inc. 20.12-b01 08:42:56.336 INFO [10] org.openqa.selenium.server.SeleniumServer - OS: Linux 2.6.32-279.el6.x86_64 amd64 08:42:56.356 INFO [10] org.openqa.selenium.server.SeleniumServer - v2.33.0, with Core v2.33.0. Built from revision 4e90c97 08:42:56.357 INFO [10] org.openqa.selenium.server.SeleniumServer - Selenium server running in debug mode. 08:42:56.376 DEBUG [10] org.openqa.jetty.util.Container - add component: SocketListener0@0.0.0.0:4444 08:42:56.397 DEBUG [10] org.openqa.jetty.util.Container - add component: org.openqa.jetty.http.ResourceCache@39617189 08:42:56.401 DEBUG [10] org.openqa.jetty.util.Container - add component: org.openqa.selenium.server.ProxyHandler in HttpContext[/,/] 08:42:56.401 DEBUG [10] org.openqa.jetty.util.Container - add component: HttpContext[/,/] 08:42:56.402 DEBUG [10] org.openqa.jetty.http.HttpServer - Added HttpContext[/,/] for host * 08:42:56.403 DEBUG [10] org.openqa.jetty.util.Container - add component: org.openqa.jetty.http.ResourceCache@2d20cc56 08:42:56.404 DEBUG [10] org.openqa.jetty.http.HttpContext - added SC{BASIC,null,user,CONFIDENTIAL} at /org/openqa/selenium/tests/html/basicAuth/* 08:42:56.412 DEBUG [10] org.openqa.jetty.util.Container - add component: org.openqa.jetty.http.handler.SecurityHandler in HttpContext[/selenium-server,/selenium-server] 08:42:56.415 DEBUG [10] org.openqa.jetty.util.Container - add component: org.openqa.selenium.server.StaticContentHandler in HttpContext[/selenium-server,/selenium-server] 08:42:56.416 DEBUG [10] org.openqa.jetty.util.Container - add component: org.openqa.selenium.server.SessionExtensionJsHandler in HttpContext[/selenium-server,/selenium-server] 08:42:56.416 DEBUG [10] org.openqa.jetty.util.Container - add component: org.openqa.selenium.server.htmlrunner.SingleTestSuiteResourceHandler in HttpContext[/selenium-server,/selenium-server] 08:42:56.417 DEBUG [10] org.openqa.jetty.util.Container - add component: org.openqa.selenium.server.htmlrunner.SeleniumHTMLRunnerResultsHandler@56406199 08:42:56.417 DEBUG [10] org.openqa.jetty.util.Container - add component: HttpContext[/selenium-server,/selenium-server] 08:42:56.418 DEBUG [10] org.openqa.jetty.http.HttpServer - Added HttpContext[/selenium-server,/selenium-server] for host * 08:42:56.471 DEBUG [10] org.openqa.jetty.util.Container - add component: org.openqa.jetty.http.ResourceCache@1d10c424 08:42:56.487 DEBUG [10] org.openqa.jetty.util.Container - add component: org.openqa.selenium.server.SeleniumDriverResourceHandler in HttpContext[/selenium-server,/selenium-server] 08:42:56.488 DEBUG [10] org.openqa.jetty.util.Container - add component: HttpContext[/selenium-server/driver,/selenium-server/driver] 08:42:56.488 DEBUG [10] org.openqa.jetty.http.HttpServer - Added HttpContext[/selenium-server/driver,/selenium-server/driver] for host * 08:42:56.488 DEBUG [10] org.openqa.jetty.util.Container - add component: org.openqa.jetty.http.ResourceCache@5b40c281 08:42:56.501 DEBUG [10] org.openqa.jetty.util.Container - add component: WebDriver remote server 08:42:56.506 DEBUG [10] org.openqa.jetty.util.Container - add component: 
org.openqa.jetty.jetty.servlet.HashSessionManager@7df17e77 08:42:56.506 DEBUG [10] org.openqa.jetty.util.Container - add component: org.openqa.jetty.jetty.servlet.ServletHandler@79a5f739 08:42:56.507 INFO [10] org.openqa.selenium.server.SeleniumServer - RemoteWebDriver instances should connect to: [URL] 08:42:56.507 DEBUG [10] org.openqa.jetty.util.Container - add component: HttpContext[/wd,/wd] 08:42:56.508 DEBUG [10] org.openqa.jetty.http.HttpServer - Added HttpContext[/wd,/wd] for host * 08:42:56.508 DEBUG [10] org.openqa.jetty.util.Container - Starting org.openqa.jetty.jetty.Server@252f0999 08:42:56.509 INFO [10] org.openqa.jetty.http.HttpServer - Version Jetty/5.1.x 08:42:56.509 DEBUG [10] org.openqa.jetty.http.HttpServer - LISTENERS: [SocketListener0@0.0.0.0:4444] 08:42:56.509 DEBUG [10] org.openqa.jetty.http.HttpServer - HANDLER: {null={/selenium-server/driver/*=[HttpContext[/selenium-server/driver,/selenium-server/driver]], /selenium-server/*=[HttpContext[/selenium-server,/selenium-server]], /=[HttpContext[/,/]], /wd/*=[HttpContext[/wd,/wd]]}} 08:42:56.510 DEBUG [10] org.openqa.jetty.util.Container - Starting HttpContext[/selenium-server/driver,/selenium-server/driver] 08:42:56.510 DEBUG [10] org.openqa.jetty.http.HttpContext - Init classloader from null, sun.misc.Launcher$AppClassLoader@4aad3ba4 for HttpContext[/selenium-server/driver,/selenium-server/driver] 08:42:56.510 INFO [10] org.openqa.jetty.util.Container - Started HttpContext[/selenium-server/driver,/selenium-server/driver] 08:42:56.510 DEBUG [10] org.openqa.jetty.util.Container - Starting HttpContext[/selenium-server,/selenium-server] 08:42:56.510 DEBUG [10] org.openqa.jetty.http.HttpContext - Init classloader from null, sun.misc.Launcher$AppClassLoader@4aad3ba4 for HttpContext[/selenium-server,/selenium-server] 08:42:56.511 DEBUG [10] org.openqa.jetty.http.handler.AbstractHttpHandler - Started org.openqa.jetty.http.handler.SecurityHandler in HttpContext[/selenium-server,/selenium-server] 08:42:56.511 DEBUG [10] org.openqa.jetty.http.handler.AbstractHttpHandler - Started org.openqa.selenium.server.StaticContentHandler in HttpContext[/selenium-server,/selenium-server] 08:42:56.511 DEBUG [10] org.openqa.jetty.http.handler.AbstractHttpHandler - Started org.openqa.selenium.server.SessionExtensionJsHandler in HttpContext[/selenium-server,/selenium-server] 08:42:56.511 DEBUG [10] org.openqa.jetty.http.handler.AbstractHttpHandler - Started org.openqa.selenium.server.htmlrunner.SingleTestSuiteResourceHandler in HttpContext[/selenium-server,/selenium-server] 08:42:56.512 DEBUG [10] org.openqa.jetty.http.handler.AbstractHttpHandler - Started org.openqa.selenium.server.SeleniumDriverResourceHandler in HttpContext[/selenium-server,/selenium-server] 08:42:56.512 INFO [10] org.openqa.jetty.util.Container - Started HttpContext[/selenium-server,/selenium-server] 08:42:56.520 DEBUG [10] org.openqa.jetty.util.Container - Starting HttpContext[/,/] 08:42:56.520 DEBUG [10] org.openqa.jetty.http.HttpContext - Init classloader from null, sun.misc.Launcher$AppClassLoader@4aad3ba4 for HttpContext[/,/] 08:42:56.520 DEBUG [10] org.openqa.jetty.http.handler.AbstractHttpHandler - Started org.openqa.selenium.server.ProxyHandler in HttpContext[/,/] 08:42:56.521 INFO [10] org.openqa.jetty.util.Container - Started HttpContext[/,/] 08:42:56.521 DEBUG [10] org.openqa.jetty.util.Container - Starting HttpContext[/wd,/wd] 08:42:56.521 DEBUG [10] org.openqa.jetty.http.HttpContext - Init classloader from null, sun.misc.Launcher$AppClassLoader@4aad3ba4 for 
HttpContext[/wd,/wd] 08:42:56.521 DEBUG [10] org.openqa.jetty.util.Container - Starting org.openqa.jetty.jetty.servlet.ServletHandler@79a5f739 08:42:56.521 DEBUG [10] org.openqa.jetty.jetty.servlet.AbstractSessionManager - New random session seed 08:43:07.962 DEBUG [10] org.openqa.jetty.jetty.servlet.Holder - Started holder of class org.openqa.selenium.remote.server.DriverServlet 08:43:07.962 DEBUG [11] org.openqa.jetty.jetty.servlet.AbstractSessionManager - Session scavenger period = 30s 08:43:07.962 INFO [10] org.openqa.jetty.util.Container - Started org.openqa.jetty.jetty.servlet.ServletHandler@79a5f739 08:43:07.962 INFO [10] org.openqa.jetty.util.Container - Started HttpContext[/wd,/wd] 08:43:07.970 INFO [10] org.openqa.jetty.http.SocketListener - Started SocketListener on 0.0.0.0:4444 08:43:07.970 INFO [10] org.openqa.jetty.util.Container - Started org.openqa.jetty.jetty.Server@252f0999 08:43:07.983 DEBUG [10] org.openqa.selenium.server.browserlaunchers.BrowserLauncherFactory - Requested browser string '*firefox' matches *firefox 08:43:07.984 DEBUG [10] org.openqa.selenium.browserlaunchers.locators.CombinedFirefoxLocator - Discovering Firefox 2... 08:43:07.990 DEBUG [10] org.openqa.selenium.browserlaunchers.locators.BrowserLocator - Discovering Firefox 2... 08:43:07.990 DEBUG [10] org.openqa.selenium.browserlaunchers.locators.BrowserLocator - Checking whether Firefox 2 launcher at :'/Applications/Minefield.app/Contents/MacOS/firefox-bin' is valid... 08:43:07.990 DEBUG [10] org.openqa.selenium.browserlaunchers.locators.BrowserLocator - Checking whether Firefox 2 launcher at :'/Applications/Firefox-2.app/Contents/MacOS/firefox-bin' is valid... 08:43:07.990 DEBUG [10] org.openqa.selenium.browserlaunchers.locators.BrowserLocator - Checking whether Firefox 2 launcher at :'/Applications/Firefox.app/Contents/MacOS/firefox-bin' is valid... 08:43:07.990 DEBUG [10] org.openqa.selenium.browserlaunchers.locators.BrowserLocator - Checking whether Firefox 2 launcher at :'/usr/lib/firefox/firefox-bin' is valid... 08:43:08.008 DEBUG [10] org.openqa.selenium.browserlaunchers.locators.BrowserLocator - Checking whether Firefox 2 launcher at :'/usr/bin/firefox-bin' is valid... 08:43:08.010 DEBUG [10] org.openqa.selenium.browserlaunchers.locators.BrowserLocator - Discovered valid Firefox 2 launcher : '/usr/bin/firefox-bin' 08:43:08.351 DEBUG [10] org.openqa.selenium.server.browserlaunchers.ResourceExtractor - Extracting /customProfileDirCUSTFFCHROME to /tmp/customProfileDir987977 08:43:08.432 INFO [10] org.openqa.selenium.server.browserlaunchers.FirefoxChromeLauncher - Preparing Firefox profile... 08:43:08.984 INFO [10] org.openqa.selenium.server.browserlaunchers.FirefoxChromeLauncher - Launching Firefox... 08:43:09.988 INFO [12] org.openqa.selenium.server.SeleniumServer - Shutting down... Any ideas on where to start looking, or any fixes?
Selenium stalls at &quot;Launching Firefox...&quot;, no errors or exceptions Trying to run Selenium on our RedHat box remotely just stays at "Launching Firefox..." without any error messages to go on. I have a symlink from /usr/bin/firefox that goes to /usr/lib64/firefox/firefox. The RedHat machine has Firefox ESR 17.0.6 installed. I'm using Xming and running Firefox by just typing "firefox" in the terminal works fine. I tried running Selenium through Xvfb, but it hangs at the same place (Xvfb verified working generally with "firefox &" and taking a screenshot). The below is the terminal input and output (anonymized): [user@redhat selenium-test]$ java -jar selenium-server-standalone.jar -trustAllSSLCertificates -htmlSuite "*firefox" [URL] suite_FILE.html tmp_results-FILE.html -firefoxProfileTemplate "/home/user/.mozilla/firefox/wwjnyifu.Selenium" Jun 25, 2013 2:51:41 PM org.openqa.grid.selenium.GridLauncher main INFO: Launching a standalone server 14:51:41.817 INFO - Java: Sun Microsystems Inc. 20.12-b01 14:51:41.818 INFO - OS: Linux 2.6.32-279.el6.x86_64 amd64 14:51:41.836 INFO - v2.33.0, with Core v2.33.0. Built from revision 4e90c97 14:51:41.981 INFO - RemoteWebDriver instances should connect to: [URL] 14:51:41.982 INFO - Version Jetty/5.1.x 14:51:41.983 INFO - Started HttpContext[/selenium-server/driver,/selenium-server/driver] 14:51:41.983 INFO - Started HttpContext[/selenium-server,/selenium-server] 14:51:41.984 INFO - Started HttpContext[/,/] 14:51:52.538 INFO - Started org.openqa.jetty.jetty.servlet.ServletHandler@c0b76fa 14:51:52.538 INFO - Started HttpContext[/wd,/wd] 14:51:52.546 INFO - Started SocketListener on 0.0.0.0:4444 14:51:52.546 INFO - Started org.openqa.jetty.jetty.Server@b34bed0 jar:file:/home/user/selenium-test/selenium-server-standalone.jar!/customProfileDirCUSTFFCHROME 14:51:52.791 INFO - Preparing Firefox profile... 14:51:53.343 INFO - Launching Firefox... ^C15:03:18.657 INFO - Shutting down... I gave it almost 10 minutes before pressing CTRL+C. With debugging, not much more to go on: 08:40:37.183 INFO [10] org.openqa.grid.selenium.GridLauncher - Launching a standalone server 08:40:37.243 INFO [10] org.openqa.selenium.server.SeleniumServer - Writing debug logs to selenium.log 08:40:37.243 INFO [10] org.openqa.selenium.server.SeleniumServer - Java: Sun Microsystems Inc. 20.12-b01 08:40:37.243 INFO [10] org.openqa.selenium.server.SeleniumServer - OS: Linux 2.6.32-279.el6.x86_64 amd64 08:40:37.259 INFO [10] org.openqa.selenium.server.SeleniumServer - v2.33.0, with Core v2.33.0. 
Built from revision 4e90c97 08:40:37.420 INFO [10] org.openqa.selenium.server.SeleniumServer - RemoteWebDriver instances should connect to: [URL] 08:40:37.421 INFO [10] org.openqa.jetty.http.HttpServer - Version Jetty/5.1.x 08:40:37.422 INFO [10] org.openqa.jetty.util.Container - Started HttpContext[/selenium-server/driver,/selenium-server/driver] 08:40:37.423 INFO [10] org.openqa.jetty.util.Container - Started HttpContext[/selenium-server,/selenium-server] 08:40:37.423 INFO [10] org.openqa.jetty.util.Container - Started HttpContext[/,/] 08:40:37.439 INFO [10] org.openqa.jetty.util.Container - Started org.openqa.jetty.jetty.servlet.ServletHandler@851052d 08:40:37.439 INFO [10] org.openqa.jetty.util.Container - Started HttpContext[/wd,/wd] 08:40:37.444 INFO [10] org.openqa.jetty.http.SocketListener - Started SocketListener on 0.0.0.0:4444 08:40:37.445 INFO [10] org.openqa.jetty.util.Container - Started org.openqa.jetty.jetty.Server@252f0999 08:40:37.737 INFO [10] org.openqa.selenium.server.browserlaunchers.FirefoxChromeLauncher - Preparing Firefox profile... 08:40:38.289 INFO [10] org.openqa.selenium.server.browserlaunchers.FirefoxChromeLauncher - Launching Firefox... 08:42:56.271 INFO [10] org.openqa.grid.selenium.GridLauncher - Launching a standalone server 08:42:56.335 INFO [10] org.openqa.selenium.server.SeleniumServer - Writing debug logs to selenium.log 08:42:56.336 INFO [10] org.openqa.selenium.server.SeleniumServer - Java: Sun Microsystems Inc. 20.12-b01 08:42:56.336 INFO [10] org.openqa.selenium.server.SeleniumServer - OS: Linux 2.6.32-279.el6.x86_64 amd64 08:42:56.356 INFO [10] org.openqa.selenium.server.SeleniumServer - v2.33.0, with Core v2.33.0. Built from revision 4e90c97 08:42:56.357 INFO [10] org.openqa.selenium.server.SeleniumServer - Selenium server running in debug mode. 
08:42:56.376 DEBUG [10] org.openqa.jetty.util.Container - add component: SocketListener0@0.0.0.0:4444 08:42:56.397 DEBUG [10] org.openqa.jetty.util.Container - add component: org.openqa.jetty.http.ResourceCache@39617189 08:42:56.401 DEBUG [10] org.openqa.jetty.util.Container - add component: org.openqa.selenium.server.ProxyHandler in HttpContext[/,/] 08:42:56.401 DEBUG [10] org.openqa.jetty.util.Container - add component: HttpContext[/,/] 08:42:56.402 DEBUG [10] org.openqa.jetty.http.HttpServer - Added HttpContext[/,/] for host * 08:42:56.403 DEBUG [10] org.openqa.jetty.util.Container - add component: org.openqa.jetty.http.ResourceCache@2d20cc56 08:42:56.404 DEBUG [10] org.openqa.jetty.http.HttpContext - added SC{BASIC,null,user,CONFIDENTIAL} at /org/openqa/selenium/tests/html/basicAuth/* 08:42:56.412 DEBUG [10] org.openqa.jetty.util.Container - add component: org.openqa.jetty.http.handler.SecurityHandler in HttpContext[/selenium-server,/selenium-server] 08:42:56.415 DEBUG [10] org.openqa.jetty.util.Container - add component: org.openqa.selenium.server.StaticContentHandler in HttpContext[/selenium-server,/selenium-server] 08:42:56.416 DEBUG [10] org.openqa.jetty.util.Container - add component: org.openqa.selenium.server.SessionExtensionJsHandler in HttpContext[/selenium-server,/selenium-server] 08:42:56.416 DEBUG [10] org.openqa.jetty.util.Container - add component: org.openqa.selenium.server.htmlrunner.SingleTestSuiteResourceHandler in HttpContext[/selenium-server,/selenium-server] 08:42:56.417 DEBUG [10] org.openqa.jetty.util.Container - add component: org.openqa.selenium.server.htmlrunner.SeleniumHTMLRunnerResultsHandler@56406199 08:42:56.417 DEBUG [10] org.openqa.jetty.util.Container - add component: HttpContext[/selenium-server,/selenium-server] 08:42:56.418 DEBUG [10] org.openqa.jetty.http.HttpServer - Added HttpContext[/selenium-server,/selenium-server] for host * 08:42:56.471 DEBUG [10] org.openqa.jetty.util.Container - add component: org.openqa.jetty.http.ResourceCache@1d10c424 08:42:56.487 DEBUG [10] org.openqa.jetty.util.Container - add component: org.openqa.selenium.server.SeleniumDriverResourceHandler in HttpContext[/selenium-server,/selenium-server] 08:42:56.488 DEBUG [10] org.openqa.jetty.util.Container - add component: HttpContext[/selenium-server/driver,/selenium-server/driver] 08:42:56.488 DEBUG [10] org.openqa.jetty.http.HttpServer - Added HttpContext[/selenium-server/driver,/selenium-server/driver] for host * 08:42:56.488 DEBUG [10] org.openqa.jetty.util.Container - add component: org.openqa.jetty.http.ResourceCache@5b40c281 08:42:56.501 DEBUG [10] org.openqa.jetty.util.Container - add component: WebDriver remote server 08:42:56.506 DEBUG [10] org.openqa.jetty.util.Container - add component: org.openqa.jetty.jetty.servlet.HashSessionManager@7df17e77 08:42:56.506 DEBUG [10] org.openqa.jetty.util.Container - add component: org.openqa.jetty.jetty.servlet.ServletHandler@79a5f739 08:42:56.507 INFO [10] org.openqa.selenium.server.SeleniumServer - RemoteWebDriver instances should connect to: [URL] 08:42:56.507 DEBUG [10] org.openqa.jetty.util.Container - add component: HttpContext[/wd,/wd] 08:42:56.508 DEBUG [10] org.openqa.jetty.http.HttpServer - Added HttpContext[/wd,/wd] for host * 08:42:56.508 DEBUG [10] org.openqa.jetty.util.Container - Starting org.openqa.jetty.jetty.Server@252f0999 08:42:56.509 INFO [10] org.openqa.jetty.http.HttpServer - Version Jetty/5.1.x 08:42:56.509 DEBUG [10] org.openqa.jetty.http.HttpServer - LISTENERS: [SocketListener0@0.0.0.0:4444] 08:42:56.509 
DEBUG [10] org.openqa.jetty.http.HttpServer - HANDLER: {null={/selenium-server/driver/*=[HttpContext[/selenium-server/driver,/selenium-server/driver]], /selenium-server/*=[HttpContext[/selenium-server,/selenium-server]], /=[HttpContext[/,/]], /wd/*=[HttpContext[/wd,/wd]]}} 08:42:56.510 DEBUG [10] org.openqa.jetty.util.Container - Starting HttpContext[/selenium-server/driver,/selenium-server/driver] 08:42:56.510 DEBUG [10] org.openqa.jetty.http.HttpContext - Init classloader from null, sun.misc.Launcher$AppClassLoader@4aad3ba4 for HttpContext[/selenium-server/driver,/selenium-server/driver] 08:42:56.510 INFO [10] org.openqa.jetty.util.Container - Started HttpContext[/selenium-server/driver,/selenium-server/driver] 08:42:56.510 DEBUG [10] org.openqa.jetty.util.Container - Starting HttpContext[/selenium-server,/selenium-server] 08:42:56.510 DEBUG [10] org.openqa.jetty.http.HttpContext - Init classloader from null, sun.misc.Launcher$AppClassLoader@4aad3ba4 for HttpContext[/selenium-server,/selenium-server] 08:42:56.511 DEBUG [10] org.openqa.jetty.http.handler.AbstractHttpHandler - Started org.openqa.jetty.http.handler.SecurityHandler in HttpContext[/selenium-server,/selenium-server] 08:42:56.511 DEBUG [10] org.openqa.jetty.http.handler.AbstractHttpHandler - Started org.openqa.selenium.server.StaticContentHandler in HttpContext[/selenium-server,/selenium-server] 08:42:56.511 DEBUG [10] org.openqa.jetty.http.handler.AbstractHttpHandler - Started org.openqa.selenium.server.SessionExtensionJsHandler in HttpContext[/selenium-server,/selenium-server] 08:42:56.511 DEBUG [10] org.openqa.jetty.http.handler.AbstractHttpHandler - Started org.openqa.selenium.server.htmlrunner.SingleTestSuiteResourceHandler in HttpContext[/selenium-server,/selenium-server] 08:42:56.512 DEBUG [10] org.openqa.jetty.http.handler.AbstractHttpHandler - Started org.openqa.selenium.server.SeleniumDriverResourceHandler in HttpContext[/selenium-server,/selenium-server] 08:42:56.512 INFO [10] org.openqa.jetty.util.Container - Started HttpContext[/selenium-server,/selenium-server] 08:42:56.520 DEBUG [10] org.openqa.jetty.util.Container - Starting HttpContext[/,/] 08:42:56.520 DEBUG [10] org.openqa.jetty.http.HttpContext - Init classloader from null, sun.misc.Launcher$AppClassLoader@4aad3ba4 for HttpContext[/,/] 08:42:56.520 DEBUG [10] org.openqa.jetty.http.handler.AbstractHttpHandler - Started org.openqa.selenium.server.ProxyHandler in HttpContext[/,/] 08:42:56.521 INFO [10] org.openqa.jetty.util.Container - Started HttpContext[/,/] 08:42:56.521 DEBUG [10] org.openqa.jetty.util.Container - Starting HttpContext[/wd,/wd] 08:42:56.521 DEBUG [10] org.openqa.jetty.http.HttpContext - Init classloader from null, sun.misc.Launcher$AppClassLoader@4aad3ba4 for HttpContext[/wd,/wd] 08:42:56.521 DEBUG [10] org.openqa.jetty.util.Container - Starting org.openqa.jetty.jetty.servlet.ServletHandler@79a5f739 08:42:56.521 DEBUG [10] org.openqa.jetty.jetty.servlet.AbstractSessionManager - New random session seed 08:43:07.962 DEBUG [10] org.openqa.jetty.jetty.servlet.Holder - Started holder of class org.openqa.selenium.remote.server.DriverServlet 08:43:07.962 DEBUG [11] org.openqa.jetty.jetty.servlet.AbstractSessionManager - Session scavenger period = 30s 08:43:07.962 INFO [10] org.openqa.jetty.util.Container - Started org.openqa.jetty.jetty.servlet.ServletHandler@79a5f739 08:43:07.962 INFO [10] org.openqa.jetty.util.Container - Started HttpContext[/wd,/wd] 08:43:07.970 INFO [10] org.openqa.jetty.http.SocketListener - Started SocketListener on 
0.0.0.0:4444 08:43:07.970 INFO [10] org.openqa.jetty.util.Container - Started org.openqa.jetty.jetty.Server@252f0999 08:43:07.983 DEBUG [10] org.openqa.selenium.server.browserlaunchers.BrowserLauncherFactory - Requested browser string '*firefox' matches *firefox 08:43:07.984 DEBUG [10] org.openqa.selenium.browserlaunchers.locators.CombinedFirefoxLocator - Discovering Firefox 2... 08:43:07.990 DEBUG [10] org.openqa.selenium.browserlaunchers.locators.BrowserLocator - Discovering Firefox 2... 08:43:07.990 DEBUG [10] org.openqa.selenium.browserlaunchers.locators.BrowserLocator - Checking whether Firefox 2 launcher at :'/Applications/Minefield.app/Contents/MacOS/firefox-bin' is valid... 08:43:07.990 DEBUG [10] org.openqa.selenium.browserlaunchers.locators.BrowserLocator - Checking whether Firefox 2 launcher at :'/Applications/Firefox-2.app/Contents/MacOS/firefox-bin' is valid... 08:43:07.990 DEBUG [10] org.openqa.selenium.browserlaunchers.locators.BrowserLocator - Checking whether Firefox 2 launcher at :'/Applications/Firefox.app/Contents/MacOS/firefox-bin' is valid... 08:43:07.990 DEBUG [10] org.openqa.selenium.browserlaunchers.locators.BrowserLocator - Checking whether Firefox 2 launcher at :'/usr/lib/firefox/firefox-bin' is valid... 08:43:08.008 DEBUG [10] org.openqa.selenium.browserlaunchers.locators.BrowserLocator - Checking whether Firefox 2 launcher at :'/usr/bin/firefox-bin' is valid... 08:43:08.010 DEBUG [10] org.openqa.selenium.browserlaunchers.locators.BrowserLocator - Discovered valid Firefox 2 launcher : '/usr/bin/firefox-bin' 08:43:08.351 DEBUG [10] org.openqa.selenium.server.browserlaunchers.ResourceExtractor - Extracting /customProfileDirCUSTFFCHROME to /tmp/customProfileDir987977 08:43:08.432 INFO [10] org.openqa.selenium.server.browserlaunchers.FirefoxChromeLauncher - Preparing Firefox profile... 08:43:08.984 INFO [10] org.openqa.selenium.server.browserlaunchers.FirefoxChromeLauncher - Launching Firefox... 08:43:09.988 INFO [12] org.openqa.selenium.server.SeleniumServer - Shutting down... Any ideas on where to start looking, or any fixes?
firefox, selenium, redhat
5
2,842
2
https://stackoverflow.com/questions/17298725/selenium-stalls-at-launching-firefox-no-errors-or-exceptions
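Since the launcher in the log above found /usr/bin/firefox-bin and then went quiet, two things worth trying first are giving the server an unambiguous display and launching the generated profile by hand so any Firefox-side error becomes visible. A hedged sketch — the suite/result file names and target URL are placeholders, while /tmp/customProfileDir987977 is the directory from the debug log:

```sh
# run the whole server under a virtual display
Xvfb :99 -screen 0 1280x1024x24 &
export DISPLAY=:99
java -jar selenium-server-standalone.jar -trustAllSSLCertificates \
     -htmlSuite "*firefox /usr/bin/firefox" "http://example.test" suite.html results.html

# if it still hangs, start Firefox manually with the profile Selenium prepared
# and watch the terminal for XPCOM / missing-library errors
/usr/bin/firefox -no-remote -profile /tmp/customProfileDir987977
```

It is also worth checking the Selenium 2.33 release notes for which Firefox ESR versions the old *firefox RC launcher actually supports.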
28,258,135
Manually patching for Ghost vulnerability on legacy server
I have a legacy Redhat ES 3.x server (that I cannot put a later distro on due to limitations in an ancient, unsupported application) and I am trying to manually patch glibc on it for the Ghost vulnerability. Based on the analysis by Qualys ( [URL] ), it appears that it should be easy to modify the glib source to handle the stack/heap overflow issue. But I would like to have a few more eyes on my procedure to see if I missed something, etc. Here is what I have done. First I built & prepped the glib source tree from the SRPM: rpm -ivh glibc-2.3.2-95.50.src.rpm rpmbuild -bp /usr/src/redhat/SPECS/glibc.spec cd /usr/src/redhat/BUILD cp -av glibc-2.3.2-200309260658 glibc-org cd glibc-2.3.2-200309260658 Next, I edited nss/digits_dots.c mainly based on this paragraph from the Qalys article above: Lines 121-125 prepare pointers to store four (4) distinct entities in buffer: host_addr, h_addr_ptrs, h_alias_ptr, and hostname. The sizeof (*h_alias_ptr) -- the size of a char pointer -- is missing from the computation of size_needed. vi nss/digits_dots.c I edited these two statements: 105: size_needed = (sizeof (*host_addr) + sizeof (*h_addr_ptrs) + strlen (name) + 1); 277: size_needed = (sizeof (*host_addr) + sizeof (*h_addr_ptrs) + strlen (name) + 1); to this: 105: size_needed = (sizeof (*host_addr) + sizeof (*h_addr_ptrs) + strlen (name) + sizeof (*h_alias_ptr) + 1); 277: size_needed = (sizeof (*host_addr) + sizeof (*h_addr_ptrs) + strlen (name) + sizeof (*h_alias_ptr) + 1); Next, I created a patch file + updated the spec file to include my patch + built binaries: cd /usr/src/redhat/BUILD diff -Npru glibc-org glibc-2.3.2-200309260658 > glibc-digit_dots-ghost.patch cp glibc-digit_dots-ghost.patch ../SOURCES/ cd /usr/src/redhat/SPECS vi glibc.spec rpmbuild -ba glibc.spec Lastly, I updated glibc using the new binaries (RPM): cd /usr/src/redhat/RPMS/i386 rpm -Uvh --nodeps glibc-2.3.2-95.51.i386.rpm glibc-devel-2.3.2-95.51.i386.rpm glibc-profile-2.3.2-95.51.i386.rpm glibc-utils-2.3.2-95.51.i386.rpm glibc-common-2.3.2-95.51.i386.rpm glibc-headers-2.3.2-95.51.i386.rpm After restarting the server, I re-ran the ghost tester ( [URL] ). This time I got "should not happen" instead of "vulnerable", which I guess is good. But I had expected to get "not vulnerable" Did I miss something, or is it just that my fix is different from the official fix in the supported distros?
Manually patching for Ghost vulnerability on legacy server I have a legacy Redhat ES 3.x server (that I cannot put a later distro on due to limitations in an ancient, unsupported application) and I am trying to manually patch glibc on it for the Ghost vulnerability. Based on the analysis by Qualys ( [URL] ), it appears that it should be easy to modify the glib source to handle the stack/heap overflow issue. But I would like to have a few more eyes on my procedure to see if I missed something, etc. Here is what I have done. First I built & prepped the glib source tree from the SRPM: rpm -ivh glibc-2.3.2-95.50.src.rpm rpmbuild -bp /usr/src/redhat/SPECS/glibc.spec cd /usr/src/redhat/BUILD cp -av glibc-2.3.2-200309260658 glibc-org cd glibc-2.3.2-200309260658 Next, I edited nss/digits_dots.c mainly based on this paragraph from the Qalys article above: Lines 121-125 prepare pointers to store four (4) distinct entities in buffer: host_addr, h_addr_ptrs, h_alias_ptr, and hostname. The sizeof (*h_alias_ptr) -- the size of a char pointer -- is missing from the computation of size_needed. vi nss/digits_dots.c I edited these two statements: 105: size_needed = (sizeof (*host_addr) + sizeof (*h_addr_ptrs) + strlen (name) + 1); 277: size_needed = (sizeof (*host_addr) + sizeof (*h_addr_ptrs) + strlen (name) + 1); to this: 105: size_needed = (sizeof (*host_addr) + sizeof (*h_addr_ptrs) + strlen (name) + sizeof (*h_alias_ptr) + 1); 277: size_needed = (sizeof (*host_addr) + sizeof (*h_addr_ptrs) + strlen (name) + sizeof (*h_alias_ptr) + 1); Next, I created a patch file + updated the spec file to include my patch + built binaries: cd /usr/src/redhat/BUILD diff -Npru glibc-org glibc-2.3.2-200309260658 > glibc-digit_dots-ghost.patch cp glibc-digit_dots-ghost.patch ../SOURCES/ cd /usr/src/redhat/SPECS vi glibc.spec rpmbuild -ba glibc.spec Lastly, I updated glibc using the new binaries (RPM): cd /usr/src/redhat/RPMS/i386 rpm -Uvh --nodeps glibc-2.3.2-95.51.i386.rpm glibc-devel-2.3.2-95.51.i386.rpm glibc-profile-2.3.2-95.51.i386.rpm glibc-utils-2.3.2-95.51.i386.rpm glibc-common-2.3.2-95.51.i386.rpm glibc-headers-2.3.2-95.51.i386.rpm After restarting the server, I re-ran the ghost tester ( [URL] ). This time I got "should not happen" instead of "vulnerable", which I guess is good. But I had expected to get "not vulnerable" Did I miss something, or is it just that my fix is different from the official fix in the supported distros?
c, linux, security, centos, redhat
5
1,351
1
https://stackoverflow.com/questions/28258135/manually-patching-for-ghost-vulnerability-on-legacy-server
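Independent of whether the one-line change matches the official patch, it is worth confirming that the rebuilt glibc is the one actually in use — long-running daemons keep the old library mapped until restarted — and, if memory serves, the Qualys GHOST.c prints "should not happen" when gethostbyname_r neither clobbers its canary nor returns ERANGE, which a slightly different size calculation could plausibly cause. A verification sketch (GHOST.c is the Qualys tester referenced in the question):

```sh
rpm -q glibc                              # should report your 2.3.2-95.51 build
rpm -q --changelog glibc | head           # confirm your patch entry is present

# processes still mapping the deleted old libc need a restart
lsof 2>/dev/null | grep libc | grep -Ei 'DEL|deleted'

gcc GHOST.c -o GHOST && ./GHOST           # re-run the tester against the new library
```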
20,740,021
File listed twice in rpm spec file
The files section of my spec-file looks like this: %files %{prefix}/htdocs/ %config %{prefix}/htdocs/share/settings/config.inc.php Now, since the config file is already included in the %{prefix}/htdocs/ line I get the warning 'File listed twice'. One way around would be, to list every single file within %{prefix}/htdocs/ , except the config file. But my question is: Is there a better way around this issue, than listing all files?
File listed twice in rpm spec file The files section of my spec-file looks like this: %files %{prefix}/htdocs/ %config %{prefix}/htdocs/share/settings/config.inc.php Now, since the config file is already included in the %{prefix}/htdocs/ line I get the warning 'File listed twice'. One way around would be, to list every single file within %{prefix}/htdocs/ , except the config file. But my question is: Is there a better way around this issue, than listing all files?
linux, redhat, rpm, rpm-spec
5
4,274
1
https://stackoverflow.com/questions/20740021/file-listed-twice-in-rpm-spec-file
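In my experience the "File listed twice" message is only a build-time warning and the attributes from the later entry (here %config) are the ones that stick, so rather than enumerating every file it can be enough to verify the built package. A sketch using standard rpm queries — the package file name below is a placeholder:

```sh
# which files ended up flagged as %config in the built package?
rpm -qpc myapp-1.0-1.noarch.rpm

# full per-file dump: the trailing flag columns show config/doc attributes
rpm -qp --dump myapp-1.0-1.noarch.rpm | grep config.inc.php
```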
12,095,016
Jenkins is getting permission denied errors when running maven
I'm running Jenkins on a redhat linux box. My build is a maven 2.2.1 project that contains selenium tests. I've got the same setup on a ubuntu box which works fine, but when I attempt to invoke the same top-level maven command on my redhat VM I get the following error. org.apache.maven.surefire.booter.SurefireExecutionException: Unable to create file for report: /var/lib/jenkins/jobs/selenium/workspace/target/surefire-reports/com.MyComp.bio.PreferencesTest.txt (Permission denied); nested exception is java.io.FileNotFoundException: /var/lib/jenkins/jobs/selenium/workspace/target/surefire-reports/com.MyComp.bio.PreferencesTest.txt (Permission denied); nested exception is org.apache.maven.surefire.report.ReporterException: Unable to create file for report: /var/lib/jenkins/jobs/selenium/workspace/target/surefire-reports/com.MyComp.bio.PreferencesTest.txt (Permission denied); nested exception is java.io.FileNotFoundException: /var/lib/jenkins/jobs/selenium/workspace/target/surefire-reports/com.MyComp.bio.PreferencesTest.txt (Permission denied) org.apache.maven.surefire.report.ReporterException: Unable to create file for report: /var/lib/jenkins/jobs/selenium/workspace/target/surefire-reports/com.MyComp.bio.PreferencesTest.txt (Permission denied); nested exception is java.io.FileNotFoundException: /var/lib/jenkins/jobs/selenium/workspace/target/surefire-reports/com.MyComp.bio.PreferencesTest.txt (Permission denied) java.io.FileNotFoundException: /var/lib/jenkins/jobs/selenium/workspace/target/surefire-reports/com.MyComp.bio.PreferencesTest.txt (Permission denied) at java.io.FileOutputStream.open(Native Method) at java.io.FileOutputStream.<init>(FileOutputStream.java:212) at java.io.FileOutputStream.<init>(FileOutputStream.java:165) at java.io.FileWriter.<init>(FileWriter.java:90) at org.apache.maven.surefire.report.AbstractFileReporter.testSetStarting(AbstractFileReporter.java:57) at org.apache.maven.surefire.report.ReporterManager.testSetStarting(ReporterManager.java:219) at org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.executeTestSet(AbstractDirectoryTestSuite.java:138) at org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.execute(AbstractDirectoryTestSuite.java:127) at org.apache.maven.surefire.Surefire.run(Surefire.java:177) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) at org.apache.maven.surefire.booter.SurefireBooter.runSuitesInProcess(SurefireBooter.java:345) at org.apache.maven.surefire.booter.SurefireBooter.main(SurefireBooter.java:1009) [INFO] ------------------------------------------------------------------------ [ERROR] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] There are test failures. In attempting to solve this problem I've restart Jenkins sudo service jenkins restart but it persists. Anyone run into this before?
Jenkins is getting permission denied errors when running maven I'm running Jenkins on a redhat linux box. My build is a maven 2.2.1 project that contains selenium tests. I've got the same setup on a ubuntu box which works fine, but when I attempt to invoke the same top-level maven command on my redhat VM I get the following error. org.apache.maven.surefire.booter.SurefireExecutionException: Unable to create file for report: /var/lib/jenkins/jobs/selenium/workspace/target/surefire-reports/com.MyComp.bio.PreferencesTest.txt (Permission denied); nested exception is java.io.FileNotFoundException: /var/lib/jenkins/jobs/selenium/workspace/target/surefire-reports/com.MyComp.bio.PreferencesTest.txt (Permission denied); nested exception is org.apache.maven.surefire.report.ReporterException: Unable to create file for report: /var/lib/jenkins/jobs/selenium/workspace/target/surefire-reports/com.MyComp.bio.PreferencesTest.txt (Permission denied); nested exception is java.io.FileNotFoundException: /var/lib/jenkins/jobs/selenium/workspace/target/surefire-reports/com.MyComp.bio.PreferencesTest.txt (Permission denied) org.apache.maven.surefire.report.ReporterException: Unable to create file for report: /var/lib/jenkins/jobs/selenium/workspace/target/surefire-reports/com.MyComp.bio.PreferencesTest.txt (Permission denied); nested exception is java.io.FileNotFoundException: /var/lib/jenkins/jobs/selenium/workspace/target/surefire-reports/com.MyComp.bio.PreferencesTest.txt (Permission denied) java.io.FileNotFoundException: /var/lib/jenkins/jobs/selenium/workspace/target/surefire-reports/com.MyComp.bio.PreferencesTest.txt (Permission denied) at java.io.FileOutputStream.open(Native Method) at java.io.FileOutputStream.<init>(FileOutputStream.java:212) at java.io.FileOutputStream.<init>(FileOutputStream.java:165) at java.io.FileWriter.<init>(FileWriter.java:90) at org.apache.maven.surefire.report.AbstractFileReporter.testSetStarting(AbstractFileReporter.java:57) at org.apache.maven.surefire.report.ReporterManager.testSetStarting(ReporterManager.java:219) at org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.executeTestSet(AbstractDirectoryTestSuite.java:138) at org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.execute(AbstractDirectoryTestSuite.java:127) at org.apache.maven.surefire.Surefire.run(Surefire.java:177) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) at org.apache.maven.surefire.booter.SurefireBooter.runSuitesInProcess(SurefireBooter.java:345) at org.apache.maven.surefire.booter.SurefireBooter.main(SurefireBooter.java:1009) [INFO] ------------------------------------------------------------------------ [ERROR] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] There are test failures. In attempting to solve this problem I've restart Jenkins sudo service jenkins restart but it persists. Anyone run into this before?
tomcat, maven-2, jenkins, redhat, selenium-webdriver
5
16,662
1
https://stackoverflow.com/questions/12095016/jenkins-is-getting-permission-denied-errors-when-running-maven
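Given that the same job works on Ubuntu, the usual RedHat-specific suspects are workspace files owned by a different user (for example created while testing the build as root) or SELinux. A quick diagnostic sketch — the paths come from the stack trace above, while the jenkins user/group name is an assumption about how the RPM set the service up:

```sh
ps -o user= -p "$(pgrep -f jenkins.war | head -1)"   # who is Jenkins actually running as?
ls -ld /var/lib/jenkins/jobs/selenium/workspace/target \
       /var/lib/jenkins/jobs/selenium/workspace/target/surefire-reports

# reclaim the workspace if parts of it belong to another user
chown -R jenkins:jenkins /var/lib/jenkins/jobs/selenium/workspace

# rule out SELinux denials, which an Ubuntu box would never show
getenforce
ausearch -m avc -ts recent 2>/dev/null | tail
```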
65,527,456
Add Custom Header to HTTP request in Load Balancer
I have a containerized application/service deployed in OpenShift Container Platform with Istio service mesh. In the Istio virtual service YAML, I want to validate whether the HTTP request has a header (for example: version) with value v1. I have added the config below in the virtual service YAML, which validates the header. But I am looking for available options to inject this header into the HTTP request using a load balancer/ingress/OpenShift route etc. As my istio-ingressgateway service is deployed with ClusterIP, I have used an OpenShift route to send the external traffic to the ingress gateway. Please share the possible ways to add headers to the HTTP request. http: - match: - headers: # Match header version: # header that we decided for dark release exact: v1 # exact match
Add Custom Header to HTTP request in Load Balancer I have a containerized application/service deployed in OpenShift Container Platform with Istio service mesh. In the Istio virtual service YAML, I want to validate whether the HTTP request has a header (for example: version) with value v1. I have added the config below in the virtual service YAML, which validates the header. But I am looking for available options to inject this header into the HTTP request using a load balancer/ingress/OpenShift route etc. As my istio-ingressgateway service is deployed with ClusterIP, I have used an OpenShift route to send the external traffic to the ingress gateway. Please share the possible ways to add headers to the HTTP request. http: - match: - headers: # Match header version: # header that we decided for dark release exact: v1 # exact match
routes, openshift, redhat, istio, servicemesh
5
4,210
3
https://stackoverflow.com/questions/65527456/add-custom-header-to-http-request-in-load-balancer
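One option that stays inside the mesh (so no load-balancer or route support is needed) is to have the gateway-bound VirtualService set the header before routing; Istio's HTTPRoute supports headers.request.set. A sketch with placeholder names — the host, gateway and destination must match your own resources:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp-from-gateway          # placeholder
spec:
  hosts:
  - myapp.example.com               # placeholder external host
  gateways:
  - myapp-gateway                   # placeholder Gateway name
  http:
  - headers:
      request:
        set:
          version: v1               # header injected before the request reaches the app
    route:
    - destination:
        host: myapp.mynamespace.svc.cluster.local   # placeholder service
```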
54,062,291
How to change listen_addresses to * from localhost in postgres?
I want to change the listen_addresses of my postgres database to * I am using the command ALTER SYSTEM SET listen_addresses TO '*'; But the value of the parameter is not changing to * . Am I doing anything wrong with the command? Currently my configuration in postgresql.conf is overwritten by postgresql.auto.conf .
How to change listen_addresses to * from localhost in postgres? I want to change the listen_addresses of my postgres database to * I am using the command ALTER SYSTEM SET listen_addresses TO '*'; But the value of the parameter is not changing to * . Am I doing anything wrong with the command? Currently my configuration in postgresql.conf is overwritten by postgresql.auto.conf .
postgresql, redhat
5
7,885
1
https://stackoverflow.com/questions/54062291/how-to-change-listen-addresses-to-from-localhost-in-postgres
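ALTER SYSTEM does write the value (into postgresql.auto.conf, which overrides postgresql.conf), but listen_addresses is a postmaster-level parameter: it only takes effect after a full server restart, and SHOW keeps reporting the old value until then. A sketch to confirm the pending change (pg_settings.pending_restart exists on 9.5 and later):

```sql
ALTER SYSTEM SET listen_addresses TO '*';
SELECT pg_reload_conf();              -- re-read config; not enough by itself for this parameter

-- pending_restart = true means the new value is waiting for a server restart
SELECT name, setting, pending_restart
FROM pg_settings
WHERE name = 'listen_addresses';
```

After that, restart the service (on RHEL something like `systemctl restart postgresql`; the exact unit name depends on the packaging) and re-check with SHOW listen_addresses;.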
28,775,371
unexpected std::ios_base::failure exception
Take this simple program: #include <fstream> int main() { std::ifstream in("."); int x; if (in) in >> x; } On Redhat 6, gcc 4.4.7, this runs without error. On Ubuntu 14.04 LTS, gcc 4.8.2, this runs without error. On Redhat 7, gcc 4.8.2, I get: terminate called after throwing an instance of 'std::ios_base::failure' what(): basic_filebuf::underflow error reading the file Aborted (core dumped) I think this is related to: [URL] However, then I don't understand why it works on Ubuntu. Ideas?
unexpected std::ios_base::failure exception Take this simple program: #include <fstream> int main() { std::ifstream in("."); int x; if (in) in >> x; } On Redhat 6, gcc 4.4.7, this runs without error. On Ubuntu 14.04 LTS, gcc 4.8.2, this runs without error. On Redhat 7, gcc 4.8.2, I get: terminate called after throwing an instance of 'std::ios_base::failure' what(): basic_filebuf::underflow error reading the file Aborted (core dumped) I think this is related to: [URL] However, then I don't understand why it works on Ubuntu. Ideas?
c++, gcc, redhat
5
1,243
2
https://stackoverflow.com/questions/28775371/unexpected-stdio-basefailure-exception
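Opening a directory with ifstream can "succeed" on Linux (the open itself works), and, as the question itself demonstrates, whether the subsequent failed read surfaces as a badbit or as a thrown ios_base::failure from underflow varies between libstdc++ builds. A defensive sketch that avoids relying on either behaviour — purely illustrative, not the vendor fix:

```cpp
#include <fstream>
#include <iostream>
#include <sys/stat.h>

int main() {
    const char* path = ".";
    struct stat sb;
    // refuse anything that is not a regular file before handing it to ifstream
    if (stat(path, &sb) != 0 || !S_ISREG(sb.st_mode)) {
        std::cerr << path << " is not a regular file\n";
        return 1;
    }
    std::ifstream in(path);
    int x;
    try {                       // tolerate libstdc++ builds that throw from underflow
        if (in >> x) std::cout << x << '\n';
    } catch (const std::ios_base::failure& e) {
        std::cerr << "read failed: " << e.what() << '\n';
    }
    return 0;
}
```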
62,063,840
Docker base image having telnet and ping
I have to debug network issues at a docker container. The container was built using "FROM registry.access.redhat.com/ubi7/ubi-minimal" It has no "telnet" or "ping" like a normal shell has. That was by design in order to save space. I tried to install them through yum within docker container shell – yum is not available They used something called “microdnf” which is like yum Tried “bash-4.2# microdnf install iputils” - No package matches 'iputils'. Similar result for telnet Tried running it inside the dockerfile, where the image is created. It seem to be getting installed – but image creation explodes" “The command '/bin/sh -c yum install iputils' returned a non-zero code: 1” I changed the image base from “FROM registry.access.redhat.com/ubi7/ubi-minimal” to “FROM registry.access.redhat.com/ubi7/ubi” This has yum available. “yum install iputils” from container shell, and from docker file failed the same way. Is there an image (preferably redhat) that will let me use "ping" and will process my Dockerfile correctly? FROM registry.access.redhat.com/ubi7/ubi-minimal RUN microdnf update -y && rm -rf /var/cache/yum RUN microdnf clean all RUN microdnf install shadow-utils # Create a group and user RUN groupadd -r myapp && useradd -r myapp -g myapp RUN useradd -r aspisc -g myapp RUN mkdir -p /opt/smyapp/config RUN mkdir -p /opt/smyapp/logs RUN chown -R myapp:smyapp /opt/myapp RUN mkdir -p /opt/myapp/bin && mkdir -p /opt/myapp/libs RUN mkdir -p /opt/jre/ ENV JAVA_LIBS_CP /opt/myapp/libs ENV LD_LIBRARY_PATH=/lib64 RUN echo JAVA_LIBS_CP=${JAVA_LIBS_CP} EXPOSE 9500 EXPOSE 9501 ENTRYPOINT ["sh", "-c", "/opt/jre/bin/java $JAVA_OPTS -cp /opt/smyapp/bin/*:$JAVA_LIBS_CP/*...."]
Docker base image having telnet and ping I have to debug network issues at a docker container. The container was built using "FROM registry.access.redhat.com/ubi7/ubi-minimal" It has no "telnet" or "ping" like a normal shell has. That was by design in order to save space. I tried to install them through yum within docker container shell – yum is not available They used something called “microdnf” which is like yum Tried “bash-4.2# microdnf install iputils” - No package matches 'iputils'. Similar result for telnet Tried running it inside the dockerfile, where the image is created. It seem to be getting installed – but image creation explodes" “The command '/bin/sh -c yum install iputils' returned a non-zero code: 1” I changed the image base from “FROM registry.access.redhat.com/ubi7/ubi-minimal” to “FROM registry.access.redhat.com/ubi7/ubi” This has yum available. “yum install iputils” from container shell, and from docker file failed the same way. Is there an image (preferably redhat) that will let me use "ping" and will process my Dockerfile correctly? FROM registry.access.redhat.com/ubi7/ubi-minimal RUN microdnf update -y && rm -rf /var/cache/yum RUN microdnf clean all RUN microdnf install shadow-utils # Create a group and user RUN groupadd -r myapp && useradd -r myapp -g myapp RUN useradd -r aspisc -g myapp RUN mkdir -p /opt/smyapp/config RUN mkdir -p /opt/smyapp/logs RUN chown -R myapp:smyapp /opt/myapp RUN mkdir -p /opt/myapp/bin && mkdir -p /opt/myapp/libs RUN mkdir -p /opt/jre/ ENV JAVA_LIBS_CP /opt/myapp/libs ENV LD_LIBRARY_PATH=/lib64 RUN echo JAVA_LIBS_CP=${JAVA_LIBS_CP} EXPOSE 9500 EXPOSE 9501 ENTRYPOINT ["sh", "-c", "/opt/jre/bin/java $JAVA_OPTS -cp /opt/smyapp/bin/*:$JAVA_LIBS_CP/*...."]
docker, redhat, ping, telnet
5
18,452
3
https://stackoverflow.com/questions/62063840/docker-base-image-having-telnet-and-ping
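Two separate issues seem to be in play in the question above: `RUN yum install iputils` without -y exits non-zero in a non-interactive docker build because yum stops at the confirmation prompt, and the UBI repositories only carry a subset of RHEL packages, so a tool can legitimately be absent unless the build host is entitled to the full RHEL repos. A sketch on the full ubi7 image — whether iputils/telnet are present in the UBI repo for a given release is an assumption worth verifying with `yum search`:

```dockerfile
FROM registry.access.redhat.com/ubi7/ubi

# -y is mandatory in a build: without it yum prompts and the step fails with exit code 1
RUN yum install -y iputils telnet && yum clean all
```

On ubi-minimal the equivalent would be `microdnf install -y <pkg>`, again only for packages the UBI repo actually ships.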
45,177,416
Running RStudio Server on Openshift Online
Openshift Online does not allow containers running processes as root for security reasons (see the corresponding question in their FAQ section). RStudio Server , on the other hand, requires root privileges for installation and certain operations. According to the RStudio Server admin guide : RStudio Server runs as the system root user during startup and then drops this privilege and runs as a more restricted user. RStudio Server then re-assumes root privilege for a brief instant when creating R sessions on behalf of users (the server needs to call setresuid when creating the R session, and this call requires root privilege). Under these circumstances, is it somehow possible to get an RStudio Server docker container running on Openshift Online?
Running RStudio Server on Openshift Online Openshift Online does not allow containers running processes as root for security reasons (see the corresponding question in their FAQ section). RStudio Server , on the other hand, requires root privileges for installation and certain operations. According to the RStudio Server admin guide : RStudio Server runs as the system root user during startup and then drops this privilege and runs as a more restricted user. RStudio Server then re-assumes root privilege for a brief instant when creating R sessions on behalf of users (the server needs to call setresuid when creating the R session, and this call requires root privilege). Under these circumstances, is it somehow possible to get an RStudio Server docker container running on Openshift Online?
docker, openshift, redhat, rstudio-server
5
745
1
https://stackoverflow.com/questions/45177416/running-rstudio-server-on-openshift-online
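On the RStudio Server question above: OpenShift Online itself does not let you lift the non-root restriction, which is the core of the problem. On a cluster where you (or an admin) hold cluster-admin rights, the usual escape hatch is to grant the pod's service account the anyuid security context constraint, sketched below. The project name and service account are placeholders, and this is offered only as the standard workaround on self-managed clusters, not as something OpenShift Online permits.

# Requires cluster-admin; "default" is the service account the pod uses and
# "myproject" is a placeholder project name.
oc adm policy add-scc-to-user anyuid -z default -n myproject
# Pods created afterwards in that project may run as root, which lets
# RStudio Server do its start-as-root-then-drop-privileges startup.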
24,566,492
Redhat trying to use pip ImportError: No module named pip
I am trying to use pip on my Redhat system. I installed pip following the instructions here, but when I try to use it, for example pip install, I get the following error: Traceback (most recent call last): File "/usr/local/bin/pip", line 7, in ? from pip import main ImportError: No module named pip
Redhat trying to use pip ImportError: No module named pip I am trying to use pip on my Redhat system. I installed pip following the instructions here, but when I try to use it, for example pip install, I get the following error: Traceback (most recent call last): File "/usr/local/bin/pip", line 7, in ? from pip import main ImportError: No module named pip
python, pip, redhat
5
7,046
3
https://stackoverflow.com/questions/24566492/redhat-trying-to-use-pip-importerror-no-module-named-pip
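For the pip traceback above, the wrapper script /usr/local/bin/pip imports the pip module using whatever interpreter its shebang names, so the error usually means pip was installed for a different Python than the one the script invokes. A rough diagnostic and repair sequence is sketched below; the interpreter path and the get-pip.py bootstrap URL are assumptions to adapt (very old Pythons on RHEL may need a version-specific get-pip.py).

# Which interpreter does the pip wrapper actually run?
head -1 /usr/local/bin/pip

# Can that interpreter import the pip module at all?
/usr/bin/python -c "import pip; print(pip.__version__)"

# If not, reinstall pip explicitly for the interpreter you want to use.
curl -O https://bootstrap.pypa.io/get-pip.py
/usr/bin/python get-pip.py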
11,273,712
Is it safe to use the write function in GNU C from multiple threads
Are write() calls from multiple threads to the same socket safe? Do we need to add synchronization among them? Will it cause problems such as delays in data moving between the network layer and the application layer? We are using the GNU C++ libraries, GCC 4, on Linux Red Hat. This is a server-side process where there is only one socket connection between server and client, the server and client are on two different machines, and data is sent both from server to client and from client to server. Problem 1: when the server sends data to the client (multiple threads write data to the client through the same single socket), data written by some of the threads never reaches the client side; it does not even reach the network layer of the same machine (tcpdump does not show that data). Problem 2: when the client sends data to the server, the data sent by the client shows up in the server's tcpdump but is never received by the server application, which reads from the socket in a single thread using "read" and "select" in a loop. We were unable to identify a pattern for when these problems occur. We think this happens when many threads are writing to the same socket. We are not synchronizing the write calls, hoping that the OS handles the synchronization.
Is it safe to use the write function in GNU C from multiple threads Are write() calls from multiple threads to the same socket safe? Do we need to add synchronization among them? Will it cause problems such as delays in data moving between the network layer and the application layer? We are using the GNU C++ libraries, GCC 4, on Linux Red Hat. This is a server-side process where there is only one socket connection between server and client, the server and client are on two different machines, and data is sent both from server to client and from client to server. Problem 1: when the server sends data to the client (multiple threads write data to the client through the same single socket), data written by some of the threads never reaches the client side; it does not even reach the network layer of the same machine (tcpdump does not show that data). Problem 2: when the client sends data to the server, the data sent by the client shows up in the server's tcpdump but is never received by the server application, which reads from the socket in a single thread using "read" and "select" in a loop. We were unable to identify a pattern for when these problems occur. We think this happens when many threads are writing to the same socket. We are not synchronizing the write calls, hoping that the OS handles the synchronization.
c++, multithreading, sockets, gcc, redhat
5
220
3
https://stackoverflow.com/questions/11273712/is-it-safe-to-use-write-function-in-gnu-c-using-multiple-threads
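On the multithreaded write question above: even if the kernel keeps the bytes of one write() call together, write() on a socket can return a short count, and the retry that follows can then interleave with data from other threads, so the byte stream the peer sees gets corrupted and messages appear to disappear. The usual remedy is to serialize sends on the shared socket and loop until the full buffer is written. The sketch below is illustrative only (not the poster's code); it uses pthreads directly since GCC 4.x may predate std::mutex, and the function name is made up.

// send_all: write an entire buffer to a socket shared by several threads.
// A single process-wide mutex keeps messages from interleaving; with more
// than one socket you would want one mutex per socket instead.
#include <cerrno>
#include <cstddef>
#include <pthread.h>
#include <sys/types.h>
#include <unistd.h>

static pthread_mutex_t g_send_mutex = PTHREAD_MUTEX_INITIALIZER;

// Returns true when every byte was written, false on a real error (see errno).
bool send_all(int fd, const char* buf, size_t len)
{
    pthread_mutex_lock(&g_send_mutex);
    size_t sent = 0;
    bool ok = true;
    while (sent < len) {
        ssize_t n = write(fd, buf + sent, len - sent);
        if (n > 0) {
            sent += (size_t)n;        // partial write: keep going
        } else if (n == -1 && errno == EINTR) {
            continue;                 // interrupted by a signal: retry
        } else {
            ok = false;               // genuine error: give up
            break;
        }
    }
    pthread_mutex_unlock(&g_send_mutex);
    return ok;
}

If the writes are serialized this way, the single-threaded read/select loop described in the question normally needs no change, since one reader thread on a socket is already safe.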