| question_id | title_clean | body_clean | tags | score | view_count | answer_count | link |
|---|---|---|---|---|---|---|---|
32,350,448
|
Is there an Ansible Playbook for provisioning an OS using ESX/i?
|
Is there a way to provision an OS (CentOS/RedHat) on a licensed VMware vSphere server (with ESX/i) using Ansible?
|
centos, vmware, redhat, ansible
| 2
| 6,236
| 2
|
https://stackoverflow.com/questions/32350448/is-there-an-ansible-playbook-for-provisioning-an-os-using-esx-i
|
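There is no single blessed playbook for this, but as a hedged sketch of what VM provisioning against vSphere can look like: the module name and every hostname, credential, and object name below are assumptions (the present-day community.vmware collection; in 2015-era Ansible the rough equivalent was the vsphere_guest module).

```bash
# Hypothetical example: clone a CentOS template through vCenter.
# All hostnames, credentials, and object names are placeholders.
cat > provision.yaml <<'EOF'
---
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Clone a CentOS template on vSphere
      community.vmware.vmware_guest:
        hostname: vcenter.example.com
        username: administrator@vsphere.local
        password: "{{ vcenter_password }}"
        validate_certs: false
        datacenter: DC1
        template: centos7-template
        name: new-centos-vm
        state: poweredon
EOF
ansible-playbook provision.yaml
```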
31,454,513
|
how to define OR logic for an RPM dependency
|
I'm creating an RPM and I need to check that a version of Java 8 is installed on the machine. The problem is that Oracle provides version-tied RPMs with names like jdk1.8.0_45 and RedHat provides RPMs with names like java-oracle-8. I don't really care which one is installed, as long as one of them is, so how can I define an OR condition on the Java 8 dependency? (Note this is for a RHEL5 or RHEL6 target, so newfangled features can't be used.)
|
redhat, rpm
| 2
| 432
| 2
|
https://stackoverflow.com/questions/31454513/how-to-define-or-logic-for-an-rpm-dependency
|
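RPM on RHEL5/6 predates boolean dependencies (the Requires: (A or B) syntax arrived only in much later rpm releases), so one common workaround is to enforce the either/or check in a scriptlet instead. A minimal sketch of a %pre body, assuming the two package names from the question:

```bash
# Sketch of a %pre scriptlet: abort the install unless one of the
# two Java 8 package names is already present. Exiting non-zero
# from %pre makes rpm refuse to install the package.
if ! rpm -q jdk1.8.0_45 >/dev/null 2>&1 && \
   ! rpm -q java-oracle-8 >/dev/null 2>&1; then
    echo "Java 8 is required (jdk1.8.0_45 or java-oracle-8)" >&2
    exit 1
fi
```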
28,740,780
|
Checking root integrity via a script
|
Below is my script to check root PATH integrity, to ensure there is no vulnerability in the PATH variable. #! /bin/bash if [ "`echo $PATH | /bin/grep :: `" != "" ]; then echo "Empty Directory in PATH (::)" fi if [ "`echo $PATH | /bin/grep :$`" != "" ]; then echo "Trailing : in PATH" fi p=`echo $PATH | /bin/sed -e 's/::/:/' -e 's/:$//' -e 's/:/ /g'` set -- $p while [ "$1" != "" ]; do if [ "$1" = "." ]; then echo "PATH contains ." shift continue fi if [ -d $1 ]; then dirperm=`/bin/ls -ldH $1 | /bin/cut -f1 -d" "` if [ `echo $dirperm | /bin/cut -c6` != "-" ]; then echo "Group Write permission set on directory $1" fi if [ `echo $dirperm | /bin/cut -c9` != "-" ]; then echo "Other Write permission set on directory $1" fi dirown=`ls -ldH $1 | awk '{print $3}'` if [ "$dirown" != "root" ] ; then echo $1 is not owned by root fi else echo $1 is not a directory fi shift done The script works fine for me, and shows all vulnerable paths defined in the PATH variable. I also want to automate the process of correctly setting the PATH variable based on the above result. Is there any quick method to do that? For example, on my Linux box the script gives this output: /usr/bin/X11 is not a directory /root/bin is not a directory whereas my PATH variable has these defined, and so I want a delete mechanism to remove them from root's PATH variable. Lots of lengthy ideas come to mind, but I am searching for a quick and "not so complex" method, please.
|
linux, bash, scripting, redhat
| 2
| 5,653
| 4
|
https://stackoverflow.com/questions/28740780/checking-root-integrity-via-a-script
|
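One quick method, sketched under the assumption that dropping every entry that is not an existing directory (plus "." and empty entries) is acceptable: rebuild PATH from its own components, keeping only entries that pass the check the script already reports on.

```bash
# Minimal sketch: keep only PATH entries that exist as directories,
# silently dropping "", ".", and dangling entries like /root/bin.
newpath=""
oldifs=$IFS
IFS=:
for dir in $PATH; do
    [ -d "$dir" ] && [ "$dir" != "." ] && newpath="${newpath:+$newpath:}$dir"
done
IFS=$oldifs
PATH=$newpath
export PATH
```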
14,022,542
|
Uninstall a REPO [yum]
|
I am using a RedHat 6.3 system. I had an issue installing php-mcrypt, hence I updated my EPEL version to 6.5. yum update said (error: try check your path and try again); there was a firewall, so I disabled it. I wanted to reinstall the repo, so I deleted epel.repo and epel-testing.repo and tried to install it again; the following message shows up: Setting up Install Process Examining epel-release-6-5.noarch.rpm: epel-release-6-5.noarch epel-release-6-5.noarch.rpm: does not update installed package. Is there something I am missing? Also, when I try installing the repo via rpm: rpm -i epel-release-6-5.noarch.rpm warning: epel-release-6-5.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID 0608b895: NOKEY package epel-release-6-5.noarch is already installed
|
redhat, yum
| 2
| 6,692
| 1
|
https://stackoverflow.com/questions/14022542/uninstall-a-repo-yum
|
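The "already installed" warning is the key detail: deleting the .repo files does not unregister the epel-release package from the RPM database. A hedged sketch of a clean reinstall, using the package name visible in the question's output:

```bash
# Remove the still-registered release package, clear yum metadata,
# then install the repo RPM again from scratch.
rpm -e epel-release
yum clean all
rpm -ivh epel-release-6-5.noarch.rpm
```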
8,932,035
|
How to install Mercurial on my Red Hat Linux machine without using "yum" command?
|
I have been trying to install the Mercurial binary packages on my Red Hat Linux machine. According to the install instructions, I should type the command "yum install mercurial". However, in order to use the "yum" command, I need to first register my system with RHN (Red Hat Network). I tried a couple of times but still failed to do so. Besides waiting for Red Hat representatives to help me register, I am wondering if there is some way for me to use the "yum" command without registering with RHN. Or even better, can someone tell me how to install Mercurial without using the "yum" command? I have tried only the "install" command to install it, but it didn't work. It is really frustrating. Thank you very much!
|
linux, mercurial, installation, redhat, yum
| 2
| 6,081
| 2
|
https://stackoverflow.com/questions/8932035/how-to-install-mercurial-on-my-red-hat-linux-machine-without-using-yum-command
|
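One yum-free route is to fetch a package or tarball directly and install it by hand; both URLs below are illustrative placeholders, not verified download paths.

```bash
# Option 1: install a downloaded RPM directly (placeholder URL).
wget http://example.com/mercurial-1.x.x86_64.rpm
rpm -ivh mercurial-1.x.x86_64.rpm

# Option 2: build from source (placeholder version); this needs
# python and gcc on the box but no yum at all.
wget https://www.mercurial-scm.org/release/mercurial-1.9.tar.gz
tar xzf mercurial-1.9.tar.gz
cd mercurial-1.9 && make install PREFIX=/usr/local
```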
8,791,385
|
What's the default install prefix on a linux distro?
|
Software packages with a configure script can specify this manually by adding --prefix: ./configure --prefix=/usr/local My question is, how do I find out what the system's default prefix is? Is there some command, even on just Redhat that will get you the default install prefix if none is specified?
|
linux, redhat
| 2
| 5,111
| 2
|
https://stackoverflow.com/questions/8791385/whats-the-default-install-prefix-on-a-linux-distro
|
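For autoconf-generated configure scripts the compiled-in default is /usr/local unless the package overrides it; there is no system-wide setting to query, but the script will report its own default:

```bash
# configure --help lists the installation directories along with
# the default prefix, typically shown as [/usr/local].
./configure --help | grep -i -A 2 'installation directories'
```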
3,564,137
|
analysis of core file
|
I'm using Linux RedHat 3. Can someone explain how it is possible that I am able to analyze, with gdb, a core dump generated on Linux RedHat 5? Not that I'm complaining :) but I need to be sure this will always work... EDIT: the shared libraries are the same version, so no worries about that; they are placed on shared storage so they can be accessed from both Linux 5 and Linux 3. Thanks.
|
c++, c, linux, gdb, redhat
| 2
| 2,236
| 5
|
https://stackoverflow.com/questions/3564137/analysis-of-core-file
|
216,102
|
Installing GCC 3.4.6 in RHEL4
|
I do the following on the command line: 1) wget ftp://mirrors.kernel.org/gnu/gcc/gcc-3.4.6/gcc-3.4.6.tar.bz2 2) tar -jxf gcc-3.4.6.tar.bz2 3) cd gcc-3.4.6 4) cd libstdc++-v3 5) ./configure And I get the following error: configure: error: cannot find install-sh or install.sh in ./../.. There is actually an "install-sh" file in the gcc-3.4.6 directory, but that's one directory up from the current one, not two. Should the configure script look for install-sh in "./.." instead of "./../.."? What's wrong?
|
linux, gcc, installation, redhat, rhel
| 2
| 14,500
| 4
|
https://stackoverflow.com/questions/216102/installing-gcc-3-4-6-in-rhel4
|
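The error is a symptom of configuring from the wrong place: GCC is meant to be configured via its top-level configure script, and from a separate object directory rather than inside the source tree. A sketch of the documented sequence for this release:

```bash
# Build GCC from an object directory; running configure inside
# libstdc++-v3 directly is what produces the install-sh path error.
tar -jxf gcc-3.4.6.tar.bz2
mkdir gcc-build
cd gcc-build
../gcc-3.4.6/configure --enable-languages=c,c++
make bootstrap
make install
```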
78,129,045
|
Puppet Unknown variable: 'osfamily' on Rocky 9
|
I am seeing the following error... Error: Evaluation Error: Unknown variable: 'osfamily'. (file: /etc/puppetlabs/code/environments/production/manifests/print_text.pp, line: 1, column: 4) on node mgmtserver.mydomain.com ...when attempting to run a manifest on the Puppet Server... puppet apply /etc/puppetlabs/code/environments/production/manifests/print_text.pp The manifest is: if $osfamily == 'RedHat' { notice("A test message") } The following returns results... puppet facts osfamily { "osfamily": "RedHat" } ...as does the following: facter osfamily RedHat The Puppet Server is: OS: Rocky version 9 Family: RedHat Puppet version: 8.5.0 puppetserver.service (master) is running on the host but puppet.service (agent) is not. Any suggestions?
|
puppet, redhat, rocky-os
| 2
| 1,273
| 2
|
https://stackoverflow.com/questions/78129045/puppet-unknown-variable-osfamily-on-rocky-9
|
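A plausible explanation, hedged: Puppet 8 disables legacy top-scope facts such as $osfamily by default, while the facter and puppet facts CLIs still resolve the old names, which matches the observed discrepancy. The structured $facts hash works in manifests:

```bash
# The $facts hash replaces legacy fact variables; testing via apply:
puppet apply -e "if \$facts['os']['family'] == 'RedHat' { notice('A test message') }"
```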
68,215,364
|
sudo yum update throws Requires ... Removing ... Obsoleted By
|
I'm on an Amazon Linux box: # this command returns Amazon Linux AMI release 2018.03 cat /etc/system-release When I try to sudo yum update , I get this output . I can't post the full output on SO but here is the part that bombs out: --> Finished Dependency Resolution Error: Package: iproute-4.4.0-3.23.amzn1.x86_64 (installed) Requires: libdb-4.7.so()(64bit) Removing: db4-4.7.25-18.11.amzn1.x86_64 (installed) libdb-4.7.so()(64bit) Obsoleted By: libdb4-4.8.30-13.el7.x86_64 (epel) ~libdb-4.8.so()(64bit) Error: libdb4 conflicts with filesystem-2.4.30-3.8.amzn1.x86_64 Error: Package: rpm-4.11.3-40.78.amzn1.x86_64 (amzn-updates) Requires: /usr/bin/db_stat Removing: db4-utils-4.7.25-18.11.amzn1.x86_64 (installed) Not found Obsoleted By: libdb4-utils-4.8.30-13.el7.x86_64 (epel) Not found Error: Package: rpm-build-4.11.3-40.78.amzn1.x86_64 (amzn-updates) Requires: libdb-4.7.so()(64bit) Removing: db4-4.7.25-18.11.amzn1.x86_64 (installed) libdb-4.7.so()(64bit) Obsoleted By: libdb4-4.8.30-13.el7.x86_64 (epel) ~libdb-4.8.so()(64bit) Error: Package: pam-1.1.8-12.33.amzn1.x86_64 (installed) Requires: libdb-4.7.so()(64bit) Removing: db4-4.7.25-18.11.amzn1.x86_64 (installed) libdb-4.7.so()(64bit) Obsoleted By: libdb4-4.8.30-13.el7.x86_64 (epel) ~libdb-4.8.so()(64bit) Error: Package: libserf-1.3.7-1.7.amzn1.x86_64 (@amzn-main) Requires: libdb-4.7.so()(64bit) Removing: db4-4.7.25-18.11.amzn1.x86_64 (installed) libdb-4.7.so()(64bit) Obsoleted By: libdb4-4.8.30-13.el7.x86_64 (epel) ~libdb-4.8.so()(64bit) Error: Package: rpm-build-libs-4.11.3-40.78.amzn1.x86_64 (amzn-updates) Requires: libdb-4.7.so()(64bit) Removing: db4-4.7.25-18.11.amzn1.x86_64 (installed) libdb-4.7.so()(64bit) Obsoleted By: libdb4-4.8.30-13.el7.x86_64 (epel) ~libdb-4.8.so()(64bit) Error: Package: rpm-4.11.3-40.78.amzn1.x86_64 (amzn-updates) Requires: libdb-4.7.so()(64bit) Removing: db4-4.7.25-18.11.amzn1.x86_64 (installed) libdb-4.7.so()(64bit) Obsoleted By: libdb4-4.8.30-13.el7.x86_64 (epel) ~libdb-4.8.so()(64bit) Error: Package: rpm-python27-4.11.3-40.78.amzn1.x86_64 (amzn-updates) Requires: libdb-4.7.so()(64bit) Removing: db4-4.7.25-18.11.amzn1.x86_64 (installed) libdb-4.7.so()(64bit) Obsoleted By: libdb4-4.8.30-13.el7.x86_64 (epel) ~libdb-4.8.so()(64bit) Error: Package: ruby20-libs-2.0.0.648-2.40.amzn1.x86_64 (amzn-updates) Requires: libdb-4.7.so()(64bit) Removing: db4-4.7.25-18.11.amzn1.x86_64 (installed) libdb-4.7.so()(64bit) Obsoleted By: libdb4-4.8.30-13.el7.x86_64 (epel) ~libdb-4.8.so()(64bit) Error: Package: cyrus-sasl-lib-2.1.23-13.16.amzn1.x86_64 (installed) Requires: libdb-4.7.so()(64bit) Removing: db4-4.7.25-18.11.amzn1.x86_64 (installed) libdb-4.7.so()(64bit) Obsoleted By: libdb4-4.8.30-13.el7.x86_64 (epel) ~libdb-4.8.so()(64bit) Error: Package: subversion-libs-1.9.7-1.61.amzn1.x86_64 (amzn-updates) Requires: libdb-4.7.so()(64bit) Removing: db4-4.7.25-18.11.amzn1.x86_64 (installed) libdb-4.7.so()(64bit) Obsoleted By: libdb4-4.8.30-13.el7.x86_64 (epel) ~libdb-4.8.so()(64bit) Error: Package: python26-2.6.9-2.92.amzn1.x86_64 (amzn-updates) Requires: libdb-4.7.so()(64bit) Removing: db4-4.7.25-18.11.amzn1.x86_64 (installed) libdb-4.7.so()(64bit) Obsoleted By: libdb4-4.8.30-13.el7.x86_64 (epel) ~libdb-4.8.so()(64bit) Error: Package: sendmail-8.14.4-9.14.amzn1.x86_64 (installed) Requires: libdb-4.7.so()(64bit) Removing: db4-4.7.25-18.11.amzn1.x86_64 (installed) libdb-4.7.so()(64bit) Obsoleted By: libdb4-4.8.30-13.el7.x86_64 (epel) ~libdb-4.8.so()(64bit) Error: Package: subversion-1.9.7-1.61.amzn1.x86_64 (amzn-updates) Requires: 
libdb-4.7.so()(64bit) Removing: db4-4.7.25-18.11.amzn1.x86_64 (installed) libdb-4.7.so()(64bit) Obsoleted By: libdb4-4.8.30-13.el7.x86_64 (epel) ~libdb-4.8.so()(64bit) Error: Package: pam_ccreds-10-4.9.amzn1.x86_64 (installed) Requires: libdb-4.7.so()(64bit) Removing: db4-4.7.25-18.11.amzn1.x86_64 (installed) libdb-4.7.so()(64bit) Obsoleted By: libdb4-4.8.30-13.el7.x86_64 (epel) ~libdb-4.8.so()(64bit) Error: Package: cyrus-sasl-2.1.23-13.16.amzn1.x86_64 (installed) Requires: libdb-4.7.so()(64bit) Removing: db4-4.7.25-18.11.amzn1.x86_64 (installed) libdb-4.7.so()(64bit) Obsoleted By: libdb4-4.8.30-13.el7.x86_64 (epel) ~libdb-4.8.so()(64bit) Error: Package: apr-util-1.5.4-6.18.amzn1.x86_64 (@amzn-updates) Requires: libdb-4.7.so()(64bit) Removing: db4-4.7.25-18.11.amzn1.x86_64 (installed) libdb-4.7.so()(64bit) Obsoleted By: libdb4-4.8.30-13.el7.x86_64 (epel) ~libdb-4.8.so()(64bit) Error: Package: python27-libs-2.7.18-2.141.amzn1.x86_64 (amzn-updates) Requires: libdb-4.7.so()(64bit) Removing: db4-4.7.25-18.11.amzn1.x86_64 (installed) libdb-4.7.so()(64bit) Obsoleted By: libdb4-4.8.30-13.el7.x86_64 (epel) ~libdb-4.8.so()(64bit) Error: Package: rpm-libs-4.11.3-40.78.amzn1.x86_64 (amzn-updates) Requires: libdb-4.7.so()(64bit) Removing: db4-4.7.25-18.11.amzn1.x86_64 (installed) libdb-4.7.so()(64bit) Obsoleted By: libdb4-4.8.30-13.el7.x86_64 (epel) ~libdb-4.8.so()(64bit) You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest Does anyone know how I can repair my yum update command? Any pointers would be very helpful!
|
linux, redhat, yum, amazon-linux
| 2
| 1,389
| 1
|
https://stackoverflow.com/questions/68215364/sudo-yum-update-throws-requires-removing-obsoleted-by
|
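Every obsoleting libdb4 package in that output comes from an el7 EPEL repository, which does not match Amazon Linux 1 packages. Keeping that repo out of the transaction is a reasonable first step; the repo id epel below is an assumption based on the output:

```bash
# One-off update with the mismatched repo excluded:
sudo yum --disablerepo=epel update

# Or disable it permanently (yum-config-manager is in yum-utils):
sudo yum-config-manager --disable epel
```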
67,684,949
|
ANSIBLE “ERROR! the field 'hosts' is required but was not set”
|
I have a CENTOS 7 VM with ansible installed, and I am trying to install the HTTPD service with ansible on a RED HAT 8. File content: "hosts" [ubuntuserver] 192.168.1.51 [redhat] 192.168.56.102 "playbook.yaml" [root @ centos7 ansible] # cat playbook.yaml --- - hosts: redhat - remote_user: root tasks: - name: install apache yum: name = httpd [root @ centos7 ansible] # Error I get: error
|
linux, ansible, centos, redhat
| 2
| 5,232
| 2
|
https://stackoverflow.com/questions/67684949/ansible-error-the-field-hosts-is-required-but-was-not-set
|
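The stray leading dash before remote_user starts a second play that has no hosts key, which is exactly what the error reports; the yum module also takes key: value arguments, not name = httpd. A corrected sketch:

```bash
# Write the fixed playbook: remote_user belongs to the same play as
# hosts, and module arguments use "key: value" form.
cat > playbook.yaml <<'EOF'
---
- hosts: redhat
  remote_user: root
  tasks:
    - name: install apache
      yum:
        name: httpd
        state: present
EOF
ansible-playbook -i hosts playbook.yaml
```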
62,635,963
|
Invalid syntax when importing openpyxl
|
Python 2.7.5 (default, Sep 26 2019, 13:23:47) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import openpyxl Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<user>/.local/lib/python2.7/site-packages/openpyxl/__init__.py", line 6, in <module> from openpyxl.workbook import Workbook File "<user>/.local/lib/python2.7/site-packages/openpyxl/workbook/__init__.py", line 4, in <module> from .workbook import Workbook File "<user>/.local/lib/python2.7/site-packages/openpyxl/workbook/workbook.py", line 7, in <module> from openpyxl.worksheet.worksheet import Worksheet File "<user>/.local/lib/python2.7/site-packages/openpyxl/worksheet/worksheet.py", line 396 return f"{get_column_letter(min_col)}{min_row}:{get_column_letter(max_col)}{max_row}" Are there any additional packages that need to be installed? Can anyone let me know the reason behind this behavior, since it is working fine on Windows and macOS?
|
python, excel, unix, redhat, openpyxl
| 2
| 4,424
| 1
|
https://stackoverflow.com/questions/62635963/invalid-syntax-when-importing-openpyxl
|
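The f-string on worksheet.py line 396 is Python 3 syntax; newer openpyxl releases dropped Python 2 support, which is why the same install works where the default interpreter is Python 3. Two ways out, sketched; the < 2.7 version pin is an assumption worth checking against the openpyxl changelog:

```bash
# Preferred: install and import under Python 3.
python3 -m pip install --user openpyxl

# Fallback for code stuck on Python 2.7: pin the last release
# series believed to support Python 2.
pip install --user 'openpyxl<2.7'
```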
47,987,538
|
Redis "--protected-mode no" not persistent data on disk
|
I have installed Redis on a RedHat server. When I run the Redis server with the command below $ ./redis-server --protected-mode no and then restart my redis-server, all data stored in Redis is deleted. But when I run the normal Redis server start command, it works fine. $ ./redis-server I have checked the Redis config file; it has appendonly yes, but I don't know why it does not persist its data in protected mode. Is there any way to use protected-mode and save data on disk with Redis? I am using Redis version 4.0.1. You can check the screenshots: in the first screenshot I run without protected mode, and when I close the session it says data is saving to disk; but when I run with protected mode, see what happens: it does not save data to disk. I have attached the part of the config file which I have changed. I am already using appendonly yes; I don't know what I am doing wrong. ################################ SNAPSHOTTING ################################ # # Save the DB on disk: # # save <seconds> <changes> # # Will save the DB if both the given number of seconds and the given # number of write operations against the DB occurred. # # In the example below the behaviour will be to save: # after 900 sec (15 min) if at least 1 key changed # after 300 sec (5 min) if at least 10 keys changed # after 60 sec if at least 10000 keys changed # # Note: you can disable saving completely by commenting out all "save" lines. # # It is also possible to remove all the previously configured save # points by adding a save directive with a single empty string argument # like in the following example: # # save "" save 900 1 save 300 10 save 60 10000 save 1 1 # By default Redis will stop accepting writes if RDB snapshots are enabled # (at least one save point) and the latest background save failed. # This will make the user aware (in a hard way) that data is not persisting # on disk properly, otherwise chances are that no one will notice and some # disaster will happen. # # If the background saving process will start working again Redis will # automatically allow writes again. # # However if you have setup your proper monitoring of the Redis server # and persistence, you may want to disable this feature so that Redis will # continue to work as usual even if there are problems with disk, # permissions, and so forth. stop-writes-on-bgsave-error yes # Compress string objects using LZF when dump .rdb databases? # For default that's set to 'yes' as it's almost always a win. # If you want to save some CPU in the saving child set it to 'no' but # the dataset will likely be bigger if you have compressible values or keys. rdbcompression yes # Since version 5 of RDB a CRC64 checksum is placed at the end of the file. # This makes the format more resistant to corruption but there is a performance # hit to pay (around 10%) when saving and loading RDB files, so you can disable it # for maximum performances. # # RDB files created with checksum disabled have a checksum of zero that will # tell the loading code to skip the check. rdbchecksum yes # The filename where to dump the DB dbfilename dump.rdb # The working directory. # # The DB will be written inside this directory, with the filename specified # above using the 'dbfilename' configuration directive. # # The Append Only File will also be created inside this directory. # # Note that you must specify a directory here, not a file name. dir ./ ############################## APPEND ONLY MODE ############################### # By default Redis asynchronously dumps the dataset on disk.
This mode is # good enough in many applications, but an issue with the Redis process or # a power outage may result into a few minutes of writes lost (depending on # the configured save points). # # The Append Only File is an alternative persistence mode that provides # much better durability. For instance using the default data fsync policy # (see later in the config file) Redis can lose just one second of writes in a # dramatic event like a server power outage, or a single write if something # wrong with the Redis process itself happens, but the operating system is # still running correctly. # # AOF and RDB persistence can be enabled at the same time without problems. # If the AOF is enabled on startup Redis will load the AOF, that is the file # with the better durability guarantees. # # Please check [URL] for more information. appendonly yes # The name of the append only file (default: "appendonly.aof") appendfilename "appendonly.aof" # The fsync() call tells the Operating System to actually write data on disk # instead of waiting for more data in the output buffer. Some OS will really flush # data on disk, some other OS will just try to do it ASAP. # # Redis supports three different modes: # # no: don't fsync, just let the OS flush the data when it wants. Faster. # always: fsync after every write to the append only log. Slow, Safest. # everysec: fsync only one time every second. Compromise. # # The default is "everysec", as that's usually the right compromise between # speed and data safety. It's up to you to understand if you can relax this to # "no" that will let the operating system flush the output buffer when # it wants, for better performances (but if you can live with the idea of # some data loss consider the default persistence mode that's snapshotting), # or on the contrary, use "always" that's very slow but a bit safer than # everysec. # # More details please check the following article: # [URL] # # If unsure, use "everysec". # appendfsync always appendfsync everysec # appendfsync no # When the AOF fsync policy is set to always or everysec, and a background # saving process (a background save or AOF log background rewriting) is # performing a lot of I/O against the disk, in some Linux configurations # Redis may block too long on the fsync() call. Note that there is no fix for # this currently, as even performing fsync in a different thread will block # our synchronous write(2) call. # # In order to mitigate this problem it's possible to use the following option # that will prevent fsync() from being called in the main process while a # BGSAVE or BGREWRITEAOF is in progress. # # This means that while another child is saving, the durability of Redis is # the same as "appendfsync none". In practical terms, this means that it is # possible to lose up to 30 seconds of log in the worst scenario (with the # default Linux settings). # # If you have latency problems turn this to "yes". Otherwise leave it as # "no" that is the safest pick from the point of view of durability. no-appendfsync-on-rewrite no # Automatic rewrite of the append only file. # Redis is able to automatically rewrite the log file implicitly calling # BGREWRITEAOF when the AOF log size grows by the specified percentage. # # This is how it works: Redis remembers the size of the AOF file after the # latest rewrite (if no rewrite has happened since the restart, the size of # the AOF at startup is used). # # This base size is compared to the current size. 
If the current size is # bigger than the specified percentage, the rewrite is triggered. Also # you need to specify a minimal size for the AOF file to be rewritten, this # is useful to avoid rewriting the AOF file even if the percentage increase # is reached but it is still pretty small. # # Specify a percentage of zero in order to disable the automatic AOF # rewrite feature. auto-aof-rewrite-percentage 100 auto-aof-rewrite-min-size 64mb # An AOF file may be found to be truncated at the end during the Redis # startup process, when the AOF data gets loaded back into memory. # This may happen when the system where Redis is running # crashes, especially when an ext4 filesystem is mounted without the # data=ordered option (however this can't happen when Redis itself # crashes or aborts but the operating system still works correctly). # # Redis can either exit with an error when this happens, or load as much # data as possible (the default now) and start if the AOF file is found # to be truncated at the end. The following option controls this behavior. # # If aof-load-truncated is set to yes, a truncated AOF file is loaded and # the Redis server starts emitting a log to inform the user of the event. # Otherwise if the option is set to no, the server aborts with an error # and refuses to start. When the option is set to no, the user requires # to fix the AOF file using the "redis-check-aof" utility before to restart # the server. # # Note that if the AOF file will be found to be corrupted in the middle # the server will still exit with an error. This option only applies when # Redis will try to read more data from the AOF file but not enough bytes # will be found. aof-load-truncated yes # When rewriting the AOF file, Redis is able to use an RDB preamble in the # AOF file for faster rewrites and recoveries. When this option is turned # on the rewritten AOF file is composed of two different stanzas: # # [RDB file][AOF tail] # # When loading Redis recognizes that the AOF file starts with the "REDIS" # string and loads the prefixed RDB file, and continues loading the AOF # tail. # # This is currently turned off by default in order to avoid the surprise # of a format change, but will at some point be used as the default. aof-use-rdb-preamble no
|
linux, redis, redhat, redis-server
| 2
| 8,509
| 1
|
https://stackoverflow.com/questions/47987538/redis-protected-mode-no-not-persistent-data-on-disk
|
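One detail consistent with the symptom: redis-server started with only command-line switches does not read redis.conf at all, so the appendonly yes shown above never takes effect in that invocation. Passing the config file first and the override after it keeps persistence enabled; the path below is illustrative:

```bash
# Load the edited config file AND disable protected mode; options
# given after the config path override the file's settings.
./redis-server /path/to/redis.conf --protected-mode no
```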
47,890,444
|
If two Apache HTTP servers are installed in RedHat, how to make them not disturbing each other
|
I have already installed an Apache HTTP server on my RedHat system; now I need to install a Bitnami application package which contains another Apache, so I am wondering how to keep them from disturbing each other. I guess I need to configure different ports for the two HTTP servers. But if one has 8080 and the other has 9090, will we visit [URL] and [URL]? I think this way is quite inconvenient. Am I wrong, or is there a better idea?
|
apache, redhat, bitnami
| 2
| 111
| 3
|
https://stackoverflow.com/questions/47890444/if-two-apache-http-servers-are-installed-in-redhat-how-to-make-them-not-disturb
|
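A common arrangement, sketched with illustrative paths, ports, and URL prefix: keep each Apache on its own Listen port and let the system Apache reverse-proxy a path to the Bitnami one, so visitors only ever see port 80 (mod_proxy and mod_proxy_http must be loaded for this):

```bash
# Put the Bitnami Apache on 8080 in its own httpd.conf, then proxy
# a path on the system Apache (port 80) through to it.
cat > /etc/httpd/conf.d/bitnami-proxy.conf <<'EOF'
ProxyPass        /app http://localhost:8080/
ProxyPassReverse /app http://localhost:8080/
EOF
apachectl configtest && apachectl graceful
```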
45,113,261
|
Why is rpmbuild requiring these C++ libraries, which cause this error?
|
I'm new to RPM packaging, but rpmbuild seems to be requiring the C++ standard libraries, and I don't know why. Here is the RPM spec file: Name: go-github-release-test Version: 0.0.1 Release: 1 License: LICENSE Url: Summary: Test of go-github-release process %description Test of go-github-release process %prep %build %install mkdir -p %{buildroot}/%{_bindir} cp /root/go-github-release-test/build/go-github-release-test %{buildroot}/%{_bindir} %files %{_bindir}/go-github-release-test %clean %changelog * Fri Jun 09 2017 Jerry W - 0.0.1-1 - added text to readme - add CmakeLists.txt - add appveyor.yml and travis.yml - add gitignore - moved main cpp around - added helloworld.cpp - added detectme.txt - removed test dirlist - added readme - init: bump script - initial commit Here is the log showing that it's failing to generate a "noarch" package because it's including arch specific C++ libraries, even though I have not referenced them anywhere: [root@localhost go-github-release-test]# rpmbuild --target noarch -bb pkg-build/SPECS/go-github-release-test.spec --define "_topdir /root/go-github-release-test/pkg-build" Building target platforms: noarch Building for target noarch Executing(%prep): /bin/sh -e /var/tmp/rpm-tmp.aEY2Y9 + umask 022 + cd /root/go-github-release-test/pkg-build/BUILD + exit 0 Executing(%build): /bin/sh -e /var/tmp/rpm-tmp.jOeknE + umask 022 + cd /root/go-github-release-test/pkg-build/BUILD + exit 0 Executing(%install): /bin/sh -e /var/tmp/rpm-tmp.PZA4L8 + umask 022 + cd /root/go-github-release-test/pkg-build/BUILD + '[' /root/go-github-release-test/pkg-build/BUILDROOT/go-github-release-test-0.0.1-1.noarch '!=' / ']' + rm -rf /root/go-github-release-test/pkg-build/BUILDROOT/go-github-release-test-0.0.1-1.noarch ++ dirname /root/go-github-release-test/pkg-build/BUILDROOT/go-github-release-test-0.0.1-1.noarch + mkdir -p /root/go-github-release-test/pkg-build/BUILDROOT + mkdir /root/go-github-release-test/pkg-build/BUILDROOT/go-github-release-test-0.0.1-1.noarch + mkdir -p /root/go-github-release-test/pkg-build/BUILDROOT/go-github-release-test-0.0.1-1.noarch//usr/bin + cp /root/go-github-release-test/build/go-github-release-test /root/go-github-release-test/pkg-build/BUILDROOT/go-github-release-test-0.0.1-1.noarch//usr/bin + /usr/lib/rpm/check-buildroot + /usr/lib/rpm/redhat/brp-compress + /usr/lib/rpm/redhat/brp-strip /usr/bin/strip + /usr/lib/rpm/redhat/brp-strip-comment-note /usr/bin/strip /usr/bin/objdump + /usr/lib/rpm/redhat/brp-strip-static-archive /usr/bin/strip + /usr/lib/rpm/brp-python-bytecompile /usr/bin/python 1 + /usr/lib/rpm/redhat/brp-python-hardlink + /usr/lib/rpm/redhat/brp-java-repack-jars Processing files: go-github-release-test-0.0.1-1.noarch Provides: go-github-release-test = 0.0.1-1 Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 Requires: libc.so.6()(64bit) libc.so.6(GLIBC_2.2.5)(64bit) libgcc_s.so.1()(64bit) libm.so.6()(64bit) libstdc++.so.6()(64bit) libstdc++.so.6(GLIBCXX_3.4)(64bit) rtld(GNU_HASH) error: Arch dependent binaries in noarch package RPM build errors: Arch dependent binaries in noarch package
|
centos, redhat, fedora, rpm, rpmbuild
| 2
| 893
| 1
|
https://stackoverflow.com/questions/45113261/why-is-rpmbuild-requiring-these-c-libraries-which-cause-this-error
|
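The packaged binary is a compiled ELF executable, so rpmbuild's automatic dependency generator correctly records its glibc and libstdc++ needs, and an arch-dependent payload can never ship in a noarch package. Dropping the noarch target is the direct fix:

```bash
# Build for the host architecture instead of noarch; the automatic
# Requires on libc/libstdc++ are then legitimate rather than fatal.
rpmbuild -bb pkg-build/SPECS/go-github-release-test.spec \
    --define "_topdir /root/go-github-release-test/pkg-build"
```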
42,868,275
|
How to remove yum repositories
|
(This seems to be a rather straightforward question but I can't find the answer on SOF. Let me know if I missed a post and I will delete this!) Hello! How do I delete the repositories that are listed in yum repolist? For example, when I ran yum repolist, I got (for example): repo id repo name pgdg93/7Server/x86_64 PostgreSQL 9.3 7Server - x86_64 But man yum does not tell me how to remove the repo if it is no longer in use (e.g. ). I tried sudo yum-config-manager --disable pgdg93/7Server/x86_64 but the result of yum repolist is the same. Btw, this repo was installed through rpm install [url] Thanks!
|
redhat, rpm, yum
| 2
| 36,198
| 2
|
https://stackoverflow.com/questions/42868275/how-to-remove-yum-repositories
|
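Since the repo was installed from a release RPM, erasing that package removes its .repo file and makes it disappear from yum repolist, whereas --disable only sets enabled=0. A sketch; the .repo file glob and package name below are guesses to adapt:

```bash
# Find the release package that owns the repo file, then erase it.
rpm -qf /etc/yum.repos.d/pgdg-93-*.repo   # e.g. prints pgdg-redhat93-...
yum remove 'pgdg-redhat93*'               # hypothetical package name
yum clean all
```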
41,117,011
|
How to optimize pigz?
|
I am using pigz to compress a large directory, which is nearly 50GB. I have an EC2 instance with RedHat; the instance type is m4.xlarge, which has 4 CPUs. I was expecting the compression to eat up all my CPUs and give better performance, but it didn't meet my expectation. The command I am using: tar -cf - lager-dir | pigz > dest.tar.gz But when the compression is running and I use mpstat -P ALL to check my CPU status, the result shows a lot of %idle on the other 3 CPUs; only nearly 2% is used by user-space processes on each CPU. I also tried top, which shows pigz using less than 10% of the CPU. I tried -p 10 to increase the process count; then usage is high for a few minutes, but it drops down when the output file reaches 2.7 GB. I want the compression to fully utilize all of my CPUs for the best performance; how can I get there?
|
linux, compression, redhat, tar, large-files
| 2
| 2,775
| 1
|
https://stackoverflow.com/questions/41117011/how-to-optimize-pigz
|
40,229,096
|
install mod_ssl issue apache 2.4 aws linux
|
Using AWS Linux. When I try to install mod_ssl it gives a conflict error with httpd-tools-2.2.31-1.8.amzn1.x86_64 and httpd-2.2.31-1.8.amzn1.x86_64. I tried yum remove but it's not working. When I do a yum list, the old httpd version is not listed. Not sure why that is. Could anyone help me out with this? [root@ip-61 ec2-user]# yum install mod_ssl Loaded plugins: priorities, update-motd, upgrade-helper Resolving Dependencies --> Running transaction check ---> Package mod_ssl.x86_64 1:2.2.31-1.8.amzn1 will be installed --> Processing Dependency: httpd = 2.2.31-1.8.amzn1 for package: 1:mod_ssl-2.2.31-1.8.amzn1.x86_64 --> Processing Dependency: httpd-mmn = 20051115 for package: 1:mod_ssl-2.2.31-1.8.amzn1.x86_64 --> Running transaction check ---> Package httpd.x86_64 0:2.2.31-1.8.amzn1 will be installed --> Processing Dependency: httpd-tools = 2.2.31-1.8.amzn1 for package: httpd-2.2.31-1.8.amzn1.x86_64 --> Processing Dependency: apr-util-ldap for package: httpd-2.2.31-1.8.amzn1.x86_64 --> Running transaction check ---> Package apr-util-ldap.x86_64 0:1.4.1-4.17.amzn1 will be installed ---> Package httpd-tools.x86_64 0:2.2.31-1.8.amzn1 will be installed --> Processing Conflict: httpd24-2.4.18-1.64.amzn1.x86_64 conflicts httpd < 2.4.18 --> Restarting Dependency Resolution with new changes. --> Running transaction check ---> Package httpd24.x86_64 0:2.4.18-1.64.amzn1 will be updated ---> Package httpd24.x86_64 0:2.4.23-1.66.amzn1 will be an update --> Processing Dependency: httpd24-tools = 2.4.23-1.66.amzn1 for package: httpd24-2.4.23-1.66.amzn1.x86_64 --> Running transaction check ---> Package httpd24-tools.x86_64 0:2.4.18-1.64.amzn1 will be updated ---> Package httpd24-tools.x86_64 0:2.4.23-1.66.amzn1 will be an update --> Processing Conflict: httpd24-2.4.23-1.66.amzn1.x86_64 conflicts httpd < 2.4.23 --> Processing Conflict: httpd24-tools-2.4.23-1.66.amzn1.x86_64 conflicts httpd-tools < 2.4.23 --> Finished Dependency Resolution Error: httpd24-tools conflicts with httpd-tools-2.2.31-1.8.amzn1.x86_64 Error: httpd24 conflicts with httpd-2.2.31-1.8.amzn1.x86_64 You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest [root@ip-61 ec2-user]# yum l [root@ip-61 ec2-user]# yum list installed | grep -in httpd 120:httpd24.x86_64 2.4.18-1.64.amzn1 @amzn-main 121:httpd24-tools.x86_64 2.4.18-1.64.amzn1 @amzn-main [root@ip-61 ec2-user]# yum remove httpd-tools-2.2.31-1.8.amzn1.x86_64 Loaded plugins: priorities, update-motd, upgrade-helper No Match for argument: httpd-tools-2.2.31-1.8.amzn1.x86_64 No Packages marked for removal [root@ip-61 ec2-user]# yum remove httpd-2.2.31-1.8.amzn1.x86_64 Loaded plugins: priorities, update-motd, upgrade-helper No Match for argument: httpd-2.2.31-1.8.amzn1.x86_64 No Packages marked for removal [root@ip-61 ec2-user]# ]# yum list installed | grep -in httpd 120:httpd24.x86_64 2.4.18-1.64.amzn1 @amzn-main 121:httpd24-tools.x86_64 2.4.18-1.64.amzn1 @amzn-main
|
install mod_ssl issue apache 2.4 aws linux Using AWS Linux. When I try to install mod_ssl it gives a conflict error with httpd-tools-2.2.31-1.8.amzn1.x86_64 and httpd-2.2.31-1.8.amzn1.x86_64. I tried yum remove but it's not working. When I do a yum list, the old httpd version is not listed. Not sure why that is. Could anyone help me out with this? [root@ip-61 ec2-user]# yum install mod_ssl Loaded plugins: priorities, update-motd, upgrade-helper Resolving Dependencies --> Running transaction check ---> Package mod_ssl.x86_64 1:2.2.31-1.8.amzn1 will be installed --> Processing Dependency: httpd = 2.2.31-1.8.amzn1 for package: 1:mod_ssl-2.2.31-1.8.amzn1.x86_64 --> Processing Dependency: httpd-mmn = 20051115 for package: 1:mod_ssl-2.2.31-1.8.amzn1.x86_64 --> Running transaction check ---> Package httpd.x86_64 0:2.2.31-1.8.amzn1 will be installed --> Processing Dependency: httpd-tools = 2.2.31-1.8.amzn1 for package: httpd-2.2.31-1.8.amzn1.x86_64 --> Processing Dependency: apr-util-ldap for package: httpd-2.2.31-1.8.amzn1.x86_64 --> Running transaction check ---> Package apr-util-ldap.x86_64 0:1.4.1-4.17.amzn1 will be installed ---> Package httpd-tools.x86_64 0:2.2.31-1.8.amzn1 will be installed --> Processing Conflict: httpd24-2.4.18-1.64.amzn1.x86_64 conflicts httpd < 2.4.18 --> Restarting Dependency Resolution with new changes. --> Running transaction check ---> Package httpd24.x86_64 0:2.4.18-1.64.amzn1 will be updated ---> Package httpd24.x86_64 0:2.4.23-1.66.amzn1 will be an update --> Processing Dependency: httpd24-tools = 2.4.23-1.66.amzn1 for package: httpd24-2.4.23-1.66.amzn1.x86_64 --> Running transaction check ---> Package httpd24-tools.x86_64 0:2.4.18-1.64.amzn1 will be updated ---> Package httpd24-tools.x86_64 0:2.4.23-1.66.amzn1 will be an update --> Processing Conflict: httpd24-2.4.23-1.66.amzn1.x86_64 conflicts httpd < 2.4.23 --> Processing Conflict: httpd24-tools-2.4.23-1.66.amzn1.x86_64 conflicts httpd-tools < 2.4.23 --> Finished Dependency Resolution Error: httpd24-tools conflicts with httpd-tools-2.2.31-1.8.amzn1.x86_64 Error: httpd24 conflicts with httpd-2.2.31-1.8.amzn1.x86_64 You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest [root@ip-61 ec2-user]# yum l [root@ip-61 ec2-user]# yum list installed | grep -in httpd 120:httpd24.x86_64 2.4.18-1.64.amzn1 @amzn-main 121:httpd24-tools.x86_64 2.4.18-1.64.amzn1 @amzn-main [root@ip-61 ec2-user]# yum remove httpd-tools-2.2.31-1.8.amzn1.x86_64 Loaded plugins: priorities, update-motd, upgrade-helper No Match for argument: httpd-tools-2.2.31-1.8.amzn1.x86_64 No Packages marked for removal [root@ip-61 ec2-user]# yum remove httpd-2.2.31-1.8.amzn1.x86_64 Loaded plugins: priorities, update-motd, upgrade-helper No Match for argument: httpd-2.2.31-1.8.amzn1.x86_64 No Packages marked for removal [root@ip-61 ec2-user]# ]# yum list installed | grep -in httpd 120:httpd24.x86_64 2.4.18-1.64.amzn1 @amzn-main 121:httpd24-tools.x86_64 2.4.18-1.64.amzn1 @amzn-main
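The log shows why: plain mod_ssl on Amazon Linux 1 is built against httpd 2.2, so yum tries to pull in the 2.2 stack, which conflicts with the already-installed httpd24. Assuming Apache 2.4 should stay, the fix is usually to install the module package built for httpd24 instead:
    # SSL module matching the httpd24 packages, no httpd 2.2 pulled in:
    sudo yum install mod24_ssl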
|
apache, amazon-web-services, centos, redhat, apache2.4
| 2
| 5,810
| 4
|
https://stackoverflow.com/questions/40229096/install-mod-ssl-issue-apache-2-4-aws-linux
|
40,089,841
|
Error installing pyopenssl using pip
|
I am trying to install different packages using python pip, but I get this error: File "/usr/local/lib/python2.7/site-packages/pip-8.1.2-py2.7.egg/pip/_vendor/ cnx.set_tlsext_host_name(server_hostname) AttributeError: '_socketobject' object has no attribute 'set_tlsext_host_name' In some workarounds I found, I need to install pyopenssl . When I try to manually install pyopenssl I get the error: gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/usr/local/include/python2.7 -c OpenSSL/ssl/connection.c -o build/temp.linux-x86_64-2.7/OpenSSL/ssl/connection.o OpenSSL/ssl/connection.c: In function 'ssl_Connection_set_context': OpenSSL/ssl/connection.c:289: warning: implicit declaration of function 'SSL_set_SSL_CTX' OpenSSL/ssl/connection.c: In function 'ssl_Connection_get_servername': OpenSSL/ssl/connection.c:313: error: 'TLSEXT_NAMETYPE_host_name' undeclared (first use in this function) OpenSSL/ssl/connection.c:313: error: (Each undeclared identifier is reported only once OpenSSL/ssl/connection.c:313: error: for each function it appears in.) OpenSSL/ssl/connection.c:320: warning: implicit declaration of function 'SSL_get_servername' OpenSSL/ssl/connection.c:320: warning: assignment makes pointer from integer without a cast OpenSSL/ssl/connection.c: In function 'ssl_Connection_set_tlsext_host_name': OpenSSL/ssl/connection.c:346: warning: implicit declaration of function 'SSL_set_tlsext_host_name' error: command 'gcc' failed with exit status 1 I have already installed all the libraries: gcc, python-devel, gcc-c++, libffi-devel, openssl-devel. I am using Red Hat 4. I don't know if I am missing some library. Any advice? Thanks in advance
|
Error installing pyopenssl using pip I am trying to install different packages using python pip, but I get this error: File "/usr/local/lib/python2.7/site-packages/pip-8.1.2-py2.7.egg/pip/_vendor/ cnx.set_tlsext_host_name(server_hostname) AttributeError: '_socketobject' object has no attribute 'set_tlsext_host_name' In some workarounds I found, I need to install pyopenssl . When I try to manually install pyopenssl I get the error: gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/usr/local/include/python2.7 -c OpenSSL/ssl/connection.c -o build/temp.linux-x86_64-2.7/OpenSSL/ssl/connection.o OpenSSL/ssl/connection.c: In function 'ssl_Connection_set_context': OpenSSL/ssl/connection.c:289: warning: implicit declaration of function 'SSL_set_SSL_CTX' OpenSSL/ssl/connection.c: In function 'ssl_Connection_get_servername': OpenSSL/ssl/connection.c:313: error: 'TLSEXT_NAMETYPE_host_name' undeclared (first use in this function) OpenSSL/ssl/connection.c:313: error: (Each undeclared identifier is reported only once OpenSSL/ssl/connection.c:313: error: for each function it appears in.) OpenSSL/ssl/connection.c:320: warning: implicit declaration of function 'SSL_get_servername' OpenSSL/ssl/connection.c:320: warning: assignment makes pointer from integer without a cast OpenSSL/ssl/connection.c: In function 'ssl_Connection_set_tlsext_host_name': OpenSSL/ssl/connection.c:346: warning: implicit declaration of function 'SSL_set_tlsext_host_name' error: command 'gcc' failed with exit status 1 I have already installed all the libraries: gcc, python-devel, gcc-c++, libffi-devel, openssl-devel. I am using Red Hat 4. I don't know if I am missing some library. Any advice? Thanks in advance
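The undeclared TLSEXT_NAMETYPE_host_name symbol suggests the system OpenSSL headers predate SNI/TLS-extension support (Red Hat 4 ships a very old OpenSSL), so recent pyOpenSSL cannot compile against them regardless of which -devel packages are installed. A hedged sketch of one workaround, assuming a newer OpenSSL has been built into a local prefix such as /opt/openssl (the paths and the --global-option approach are assumptions, not a verified recipe for this exact box):
    # confirm the system OpenSSL is too old for SNI:
    openssl version
    # point the pyOpenSSL C extension build at the newer headers/libs:
    pip install pyopenssl --global-option=build_ext \
        --global-option="-I/opt/openssl/include" \
        --global-option="-L/opt/openssl/lib"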
|
python, linux, pip, redhat
| 2
| 5,926
| 1
|
https://stackoverflow.com/questions/40089841/error-installing-pyopenssl-using-pip
|
36,870,808
|
Using double quotes within double quotes
|
I have a file1.txt with the below contents: time="2016-04-25T17:43:11Z" level=info msg="SHA1 Fingerprint=9F:AD:D4:FD:22:24:20:A2:1E:0C:7F:D0:19:C5:80:42:66:56:AC:6F" I want the file to look as below: 9F:AD:D4:FD:22:24:20:A2:1E:0C:7F:D0:19:C5:80:42:66:56:AC:6F Actually, I need to pass the command as a string. That is why the bash command needs to be encapsulated in a string with double quotes. However, when I include " grep -Po '(?<=Fingerprint=)[^"]*' " I don't get the desired output. It seems that I need to escape the double quotes correctly.
|
Using double quotes within double quotes I have a file1.txt with the below contents: time="2016-04-25T17:43:11Z" level=info msg="SHA1 Fingerprint=9F:AD:D4:FD:22:24:20:A2:1E:0C:7F:D0:19:C5:80:42:66:56:AC:6F" I want the file to look as below: 9F:AD:D4:FD:22:24:20:A2:1E:0C:7F:D0:19:C5:80:42:66:56:AC:6F Actually, I need to pass the command as a string. That is why the bash command needs to be encapsulated in a string with double quotes. However, when I include " grep -Po '(?<=Fingerprint=)[^"]*' " I don't get the desired output. It seems that I need to escape the double quotes correctly.
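A minimal sketch: keep the grep pattern itself in single quotes and backslash-escape only the inner double quote, since that is the one character the outer double-quoted string would otherwise swallow:
    # store the command as a string, then run it:
    cmd="grep -Po '(?<=Fingerprint=)[^\"]*' file1.txt"
    bash -c "$cmd"
    # prints: 9F:AD:D4:FD:22:24:20:A2:1E:0C:7F:D0:19:C5:80:42:66:56:AC:6F
The lookbehind anchors the match right after Fingerprint= and [^"]* stops at the closing quote, so only the fingerprint is emitted.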
|
linux, bash, shell, sh, redhat
| 2
| 200
| 1
|
https://stackoverflow.com/questions/36870808/using-double-quotes-within-double-quotes
|
27,833,644
|
Pythonic way to check if a package is installed or not
|
Pythonic way to check list of packages installed in Centos/Redhat? In a bash script, I'd do: rpm -qa | grep -w packagename
|
Pythonic way to check if a package is installed or not Pythonic way to check list of packages installed in Centos/Redhat? In a bash script, I'd do: rpm -qa | grep -w packagename
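Whatever the Python wrapper ends up looking like, the more robust primitive is rpm -q with an exact package name rather than grepping rpm -qa (which can false-match on substrings). The underlying check, shown here in shell, is what a subprocess call would run:
    # rpm -q exits 0 iff the package is installed:
    rpm -q packagename >/dev/null 2>&1 && echo installed || echo missing
In Python this maps to testing that subprocess.call(["rpm", "-q", pkg]) returns 0, which keeps the logic exit-code based instead of parsing text.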
|
python, centos, redhat
| 2
| 5,347
| 5
|
https://stackoverflow.com/questions/27833644/pythonic-way-to-check-if-a-package-is-installed-or-not
|
18,653,987
|
Python + Django run under a different user on apache2 (httpd), Redhat
|
I've a Django application bound to httpd (apache2) on Red Hat, and it works well; however, I'd like to run it under a different username than apache, so that if it writes to the filesystem the file's owner is newuser. I'm looking for a solution to achieve this. I tried to use httpd-itk (after this: [URL] ), but it complains about: permission denied: mod_wsgi (pid=31322): Unable to connect to WSGI daemon process 'myapp.djangoserver' on '/var/run/wsgi.31085.0.1.sock' after multiple attempts. After resolving this (by giving the file 777 permissions for testing) I still have apache as the file's owner. My conf file looks like this: <VirtualHost *:80> ServerName myapp ServerAlias myapp DocumentRoot /usr/share/myapp <Directory /usr/share/myapp> Order allow,deny Allow from all </Directory> WSGIDaemonProcess syntyma.djangoserver processes=10 threads=20 display-name=%{GROUP} WSGIProcessGroup myapp.djangoserver WSGIScriptAlias / /usr/share/myapp/apache/django.wsgi CustomLog logs/myapp-access.log combined ErrorLog logs/myapp-error.log LogLevel debug AssignUserId newuser newuser </VirtualHost> WSGISocketPrefix /var/run/wsgi , and the created test file: ls -l /tmp/ggg -rw-r--r-- 1 apache apache 3 Sep 6 09:46 /tmp/ggg . How could I reach my goal with httpd-itk or any other solution, like suEXEC or similar? Thanks.
|
Python + Django run under a different user on apache2 (httpd), Redhat I've a Django application bound to httpd (apache2) on Red Hat, and it works well; however, I'd like to run it under a different username than apache, so that if it writes to the filesystem the file's owner is newuser. I'm looking for a solution to achieve this. I tried to use httpd-itk (after this: [URL] ), but it complains about: permission denied: mod_wsgi (pid=31322): Unable to connect to WSGI daemon process 'myapp.djangoserver' on '/var/run/wsgi.31085.0.1.sock' after multiple attempts. After resolving this (by giving the file 777 permissions for testing) I still have apache as the file's owner. My conf file looks like this: <VirtualHost *:80> ServerName myapp ServerAlias myapp DocumentRoot /usr/share/myapp <Directory /usr/share/myapp> Order allow,deny Allow from all </Directory> WSGIDaemonProcess syntyma.djangoserver processes=10 threads=20 display-name=%{GROUP} WSGIProcessGroup myapp.djangoserver WSGIScriptAlias / /usr/share/myapp/apache/django.wsgi CustomLog logs/myapp-access.log combined ErrorLog logs/myapp-error.log LogLevel debug AssignUserId newuser newuser </VirtualHost> WSGISocketPrefix /var/run/wsgi , and the created test file: ls -l /tmp/ggg -rw-r--r-- 1 apache apache 3 Sep 6 09:46 /tmp/ggg . How could I reach my goal with httpd-itk or any other solution, like suEXEC or similar? Thanks.
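mod_wsgi daemon mode can drop privileges itself, which sidesteps httpd-itk entirely: WSGIDaemonProcess accepts user= and group= options. A sketch against the question's own directives (only the two options are added):
    WSGIDaemonProcess myapp.djangoserver user=newuser group=newuser \
        processes=10 threads=20 display-name=%{GROUP}
    WSGIProcessGroup myapp.djangoserver
Note also that the posted config declares WSGIDaemonProcess syntyma.djangoserver but references WSGIProcessGroup myapp.djangoserver; the two names must match for the application to run in the intended daemon group.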
|
python, django, apache, apache2, redhat
| 2
| 2,246
| 1
|
https://stackoverflow.com/questions/18653987/python-django-run-under-a-different-user-on-apache2-httpd-redhat
|
13,449,535
|
How to disable modules in EAP 6?
|
I want to use RedHat EAP 6 for commercial support on HornetQ. I can't find anywhere how to disable every non-relevant module (e.g. EJBs, etc.). Any ideas?
|
How to disable modules in EAP 6? I want to use RedHat EAP 6 for commercial support on HornetQ. I can't find anywhere how to disable every non-relevant module (e.g. EJBs, etc.). Any ideas?
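A hedged sketch using the management CLI, with the ejb3 subsystem as an assumed example; each unwanted subsystem can be removed from the running configuration the same way (the matching <extension> entries can then be deleted from the XML by hand):
    # connect to a running server, drop a subsystem, then reload:
    $JBOSS_HOME/bin/jboss-cli.sh --connect --command="/subsystem=ejb3:remove"
    $JBOSS_HOME/bin/jboss-cli.sh --connect --command=":reload"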
|
jboss, redhat, hornetq
| 2
| 3,337
| 1
|
https://stackoverflow.com/questions/13449535/how-to-disable-modules-in-eap-6
|
77,835,879
|
Installing Python 3.12 with custom TCL/TK on RHEL 7
|
I am trying to build Python 3.12 with TCL/TK on RHEL 7. I have 0 root privileges. My system build is HORRIBLY outdated. I have no control over this and cannot update/use package installers, which is why I've gone through this 'compiling from source' endeavor. After digging around I found that there WAS a configure option in the past for --with-tcltk... but those options were removed. After looking in the whatsnew, I found that you can specify the custom locations using the envvars TCLTK_CFLAGS and TCLTK_LIBS. My tcl/tk installations are not in the same directory, they are under $HOME/.../tcl8.6 and $HOME/.../tk8.6 respectively. (I'm using abs path on the envvars) I know I'm close, but after building using these variables I still can't import tkinter after compiling python3.12.1 on RHEL 7. I have not found any information as to how to use those variables, which paths I need to specify/what is necessary or any examples of how to do this.
|
Installing Python 3.12 with custom TCL/TK on RHEL 7 I am trying to build Python 3.12 with TCL/TK on RHEL 7. I have 0 root privileges. My system build is HORRIBLY outdated. I have no control over this and cannot update/use package installers, which is why I've gone through this 'compiling from source' endeavor. After digging around I found that there WAS a configure option in the past for --with-tcltk... but those options were removed. After looking in the whatsnew, I found that you can specify the custom locations using the envvars TCLTK_CFLAGS and TCLTK_LIBS. My tcl/tk installations are not in the same directory, they are under $HOME/.../tcl8.6 and $HOME/.../tk8.6 respectively. (I'm using abs path on the envvars) I know I'm close, but after building using these variables I still can't import tkinter after compiling python3.12.1 on RHEL 7. I have not found any information as to how to use those variables, which paths I need to specify/what is necessary or any examples of how to do this.
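A minimal configure sketch, assuming each tree has its own include/ and lib/ subdirectories and the libraries were built shared (adjust the paths and the --prefix to the real layout):
    export TCLTK_CFLAGS="-I$HOME/tcl8.6/include -I$HOME/tk8.6/include"
    export TCLTK_LIBS="-L$HOME/tcl8.6/lib -L$HOME/tk8.6/lib -ltcl8.6 -ltk8.6"
    ./configure --prefix="$HOME/python3.12"
    make -j4 && make install
    # the loader must also find the shared libs at run time, or "import tkinter"
    # will still fail after a clean build:
    export LD_LIBRARY_PATH="$HOME/tcl8.6/lib:$HOME/tk8.6/lib:$LD_LIBRARY_PATH"
A quick way to verify is python3.12 -m tkinter, which opens a small test window when the module loads correctly.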
|
python, redhat, rhel7, python-3.12
| 2
| 1,311
| 1
|
https://stackoverflow.com/questions/77835879/installing-python-3-12-with-custom-tcl-tk-on-rhel-7
|
76,998,416
|
Sending logs of (rootless) podman containers to ELK?
|
I have been struggling with the $question for a while and it is time to ask for some guidance. We have 10+ containers that run on different RHEL VMs, deployed via ansible as systemd services (in other words, there is no Kubernetes or other container orchestration service installed on top of / next to podman). These containers are running as rootless containers. Unfortunately the docker (podman) socket is not active on the VMs (albeit we can turn it on). All the VMs have a filebeat installation. We have an ELK stack deployed separately. What I have found while working on this: podman (the installation here) supports only k8s-file and journald logging (problem #1: the easiest way would be to use the json-file logging driver combined with the docker/podman socket, which could not be achieved here); the podman socket belongs to the podman user who runs those rootless containers, so it is not really visible to the others; even root could not see the running containers without the podman socket. I was able to work around this (so root was able to list the running containers), but I don't think / could not believe that this should be the industry-standard solution; switching to journald and using journalbeat may work, but I don't know whether it can be combined with collecting the container metadata required to distinguish the VM logs from the container logs. I am looking for a solution for this setup that would make centralized logging work in the easiest way. For now I would be grateful for some hints about a working concept. Thank you in advance.
|
Sending logs of (rootless) podman containers to ELK? I have been struggling with the $question for a while and it is time to ask for some guidance. We have 10+ containers that run on different RHEL VMs, deployed via ansible as systemd services (in other words, there is no Kubernetes or other container orchestration service installed on top of / next to podman). These containers are running as rootless containers. Unfortunately the docker (podman) socket is not active on the VMs (albeit we can turn it on). All the VMs have a filebeat installation. We have an ELK stack deployed separately. What I have found while working on this: podman (the installation here) supports only k8s-file and journald logging (problem #1: the easiest way would be to use the json-file logging driver combined with the docker/podman socket, which could not be achieved here); the podman socket belongs to the podman user who runs those rootless containers, so it is not really visible to the others; even root could not see the running containers without the podman socket. I was able to work around this (so root was able to list the running containers), but I don't think / could not believe that this should be the industry-standard solution; switching to journald and using journalbeat may work, but I don't know whether it can be combined with collecting the container metadata required to distinguish the VM logs from the container logs. I am looking for a solution for this setup that would make centralized logging work in the easiest way. For now I would be grateful for some hints about a working concept. Thank you in advance.
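One assumed approach that fits the stated constraints (no socket, rootless, journald available): run the containers with the journald log driver and ship the user journal; podman's journald driver attaches fields such as CONTAINER_NAME, which keeps container entries distinguishable from plain VM logs. A sketch (image name is illustrative):
    # in the systemd unit / ansible template that starts the container:
    podman run -d --name myapp --log-driver=journald registry.example.com/myapp
    # verify the entries and their container metadata fields as the podman user:
    journalctl --user CONTAINER_NAME=myapp -o verbose
The beat side would then read the journal (journalbeat, or filebeat's journald input where available) and the CONTAINER_NAME/CONTAINER_ID fields travel with each event.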
|
elasticsearch, logstash, redhat, filebeat, podman
| 2
| 4,303
| 2
|
https://stackoverflow.com/questions/76998416/sending-logs-of-rootless-podman-containers-to-elk
|
73,300,929
|
on linux, how do I list only mounted removable media / device?
|
I know we can list all mounted devices using the mount command, or even the df command. But how can we know if a listed device is removable or not, such as a USB stick, CD-ROM, external hard disk, etc.? For this question, we can start with how to do it on SUSE or RedHat. Thanks!
|
on linux, how do I list only mounted removable media / device? I know we can list all mounted devices using the mount command, or even the df command. But how can we know if a listed device is removable or not, such as a USB stick, CD-ROM, external hard disk, etc.? For this question, we can start with how to do it on SUSE or RedHat. Thanks!
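A sketch using lsblk, whose RM column is 1 for devices the kernel flags as removable (USB sticks and CD-ROMs usually qualify; some external USB disks report 0, so treat the flag as a hint rather than a guarantee):
    # name, removable flag, type and mountpoint; keep only mounted removables:
    lsblk -o NAME,RM,TYPE,MOUNTPOINT | awk '$2 == 1 && $4 != ""'
    # the same flag is exposed per device in sysfs (device name is an example):
    cat /sys/block/sdb/removable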
|
linux, bash, redhat, mount, sles
| 2
| 3,932
| 2
|
https://stackoverflow.com/questions/73300929/on-linux-how-do-i-list-only-mounted-removable-media-device
|
56,880,786
|
icudt error while installing stringi package from r in linux offline
|
I have downloaded the stringi_1.4.3.tar.gz package on my system (RedHat Linux 7), but when I try to install it offline I get the error below: Execution halted *** icudt download failed. stopping. ERROR: configuration failed for package ‘stringi’ This is a new environment, RedHat Linux 7.x, with R version 3.6; here I am testing an offline installation of the R setup and the R packages, wherein I encountered this error. I have already tried downloading an older version of stringi , but it didn't work. checking with pkg-config for the system ICU4C... 50.1.2 checking for ICU4C >= 52... no * ICU4C 50.1.2 has been detected Minimal requirements, i.e., ICU4C >= 52, are not met Trying with "standard" fallback flags checking whether we may build an ICU4C-based project... yes checking programmatically for sufficient U_ICU_VERSION_MAJOR_NUM... no * The available ICU4C cannot be used checking whether we may compile src/icu61/common/putil.cpp... yes checking whether we may compile src/icu61/i18n/number_affixutils.cpp... yes checking whether we can fetch icudt... downloading the ICU data library (icudt) output path: icu61/data/icudt61l.zip trying URL ' [URL] ' Error in download.file(paste(href, fname, sep = ""), icudtzipfname, mode = "wb"): cannot open URL XXX trying URL ' [URL] ' Error in download.file(paste(href, fname, sep = ""), icudtzipfname, mode = "wb"): cannot open URL XXX trying URL ' [URL] ' Error in download.file(paste(href, fname, sep = ""), icudtzipfname, mode = "wb"): cannot open URL XXX trying URL ' [URL] ' Error in download.file(paste(href, fname, sep = ""), icudtzipfname, mode = "wb"): cannot open URL XXX trying URL ' [URL] ' Error in download.file(paste(href, fname, sep = ""), icudtzipfname, mode = "wb"): cannot open URL XXX trying URL ' [URL] ' Error in download.file(paste(href, fname, sep = ""), icudtzipfname, mode = "wb"): cannot open URL XXX icudt download failed Error: Stopping on error In addition: Warning messages: 1: In download.file(paste(href, fname, sep = ""), icudtzipfname, mode = "wb") : XXX status was 'Couldn't connect to server' 2: In download.file(paste(href, fname, sep = ""), icudtzipfname, mode = "wb") : URL XXX status was 'Couldn't connect to server' 3: In download.file(paste(href, fname, sep = ""), icudtzipfname, mode = "wb") : URL XXX status was 'Couldn't connect to server' 4: In download.file(paste(href, fname, sep = ""), icudtzipfname, mode = "wb") : URL XXX status was 'Couldn't connect to server' 5: In download.file(paste(href, fname, sep = ""), icudtzipfname, mode = "wb") : URL XXX status was 'Couldn't connect to server' 6: In download.file(paste(href, fname, sep = ""), icudtzipfname, mode = "wb") : URL XXX status was 'Couldn't connect to server' Execution halted *** icudt download failed. stopping. ERROR: configuration failed for package ‘stringi’ * removing ‘/usr/lib64/R/library/stringi’ I downloaded and installed it on Windows 10, where it worked as expected. I want the stringi package because other packages depend on it. Please help
|
icudt error while installing stringi package from r in linux offline I have downloaded the stringi_1.4.3.tar.gz package on my system (RedHat Linux 7), but when I try to install it offline I get the error below: Execution halted *** icudt download failed. stopping. ERROR: configuration failed for package ‘stringi’ This is a new environment, RedHat Linux 7.x, with R version 3.6; here I am testing an offline installation of the R setup and the R packages, wherein I encountered this error. I have already tried downloading an older version of stringi , but it didn't work. checking with pkg-config for the system ICU4C... 50.1.2 checking for ICU4C >= 52... no * ICU4C 50.1.2 has been detected Minimal requirements, i.e., ICU4C >= 52, are not met Trying with "standard" fallback flags checking whether we may build an ICU4C-based project... yes checking programmatically for sufficient U_ICU_VERSION_MAJOR_NUM... no * The available ICU4C cannot be used checking whether we may compile src/icu61/common/putil.cpp... yes checking whether we may compile src/icu61/i18n/number_affixutils.cpp... yes checking whether we can fetch icudt... downloading the ICU data library (icudt) output path: icu61/data/icudt61l.zip trying URL ' [URL] ' Error in download.file(paste(href, fname, sep = ""), icudtzipfname, mode = "wb"): cannot open URL XXX trying URL ' [URL] ' Error in download.file(paste(href, fname, sep = ""), icudtzipfname, mode = "wb"): cannot open URL XXX trying URL ' [URL] ' Error in download.file(paste(href, fname, sep = ""), icudtzipfname, mode = "wb"): cannot open URL XXX trying URL ' [URL] ' Error in download.file(paste(href, fname, sep = ""), icudtzipfname, mode = "wb"): cannot open URL XXX trying URL ' [URL] ' Error in download.file(paste(href, fname, sep = ""), icudtzipfname, mode = "wb"): cannot open URL XXX trying URL ' [URL] ' Error in download.file(paste(href, fname, sep = ""), icudtzipfname, mode = "wb"): cannot open URL XXX icudt download failed Error: Stopping on error In addition: Warning messages: 1: In download.file(paste(href, fname, sep = ""), icudtzipfname, mode = "wb") : XXX status was 'Couldn't connect to server' 2: In download.file(paste(href, fname, sep = ""), icudtzipfname, mode = "wb") : URL XXX status was 'Couldn't connect to server' 3: In download.file(paste(href, fname, sep = ""), icudtzipfname, mode = "wb") : URL XXX status was 'Couldn't connect to server' 4: In download.file(paste(href, fname, sep = ""), icudtzipfname, mode = "wb") : URL XXX status was 'Couldn't connect to server' 5: In download.file(paste(href, fname, sep = ""), icudtzipfname, mode = "wb") : URL XXX status was 'Couldn't connect to server' 6: In download.file(paste(href, fname, sep = ""), icudtzipfname, mode = "wb") : URL XXX status was 'Couldn't connect to server' Execution halted *** icudt download failed. stopping. ERROR: configuration failed for package ‘stringi’ * removing ‘/usr/lib64/R/library/stringi’ I downloaded and installed it on Windows 10, where it worked as expected. I want the stringi package because other packages depend on it. Please help
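stringi's configure script honours an ICUDT_DIR variable pointing at a directory that already contains the icudt zip, which avoids the download step entirely. A sketch; the zip filename matches the "output path" in the log above, and the idea is to fetch it on any connected machine first (the mirror URL itself is an assumption, use one of the URLs the configure log lists):
    # on a machine with internet access, fetch the ICU data archive:
    wget <mirror-url>/icudt61l.zip -P /tmp/icudt
    # copy /tmp/icudt to the offline host, then install pointing at it:
    R CMD INSTALL stringi_1.4.3.tar.gz --configure-vars='ICUDT_DIR=/tmp/icudt'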
|
linux, redhat, offline, stringi, installation-package
| 2
| 3,548
| 1
|
https://stackoverflow.com/questions/56880786/icudt-error-while-installing-stringi-package-from-r-in-linux-offline
|
54,398,574
|
How can I install Redhat OpenJDK 11 on CentOS 7+?
|
As per the Oracle OpenJDK policy, there will not be any LTS support anymore, but Red Hat OpenJDK will continue to have LTS support, as far as we have seen. Our current application is based on CentOS 7 and Oracle JDK-8, and we want to migrate to some sort of free JDK-11 distribution with LTS support for our production server. If you guys have any other alternative solution, please let me know.
|
How can I install Redhat OpenJDK 11 on CentOS 7+? As per the Oracle OpenJDK policy, there will not be any LTS support anymore, but Red Hat OpenJDK will continue to have LTS support, as far as we have seen. Our current application is based on CentOS 7 and Oracle JDK-8, and we want to migrate to some sort of free JDK-11 distribution with LTS support for our production server. If you guys have any other alternative solution, please let me know.
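Red Hat's OpenJDK 11 builds landed in the stock CentOS 7 repositories (from 7.6 onward), so no extra repo is needed; a sketch:
    sudo yum install java-11-openjdk java-11-openjdk-devel
    # switch the default away from JDK 8 if both remain installed:
    sudo alternatives --config java
    java -version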
|
java, centos, redhat, redhat-openjdk, java-11
| 2
| 2,451
| 1
|
https://stackoverflow.com/questions/54398574/how-can-i-install-redhat-openjdk-11-on-centos-7
|
51,930,825
|
Is there any dependence on Red Hat when using OpenShift
|
In my company, we are evaluating OpenShift as a PaaS platform. Besides the fact that Red Hat is a requirement to install OpenShift, is there any other dependence on Red Hat when deploying docker containers?
|
Is there any dependence on Red Hat when using OpenShift In my company, we are evaluating OpenShift as a PaaS platform. Besides the fact that Red Hat is a requirement to install OpenShift, is there any other dependence on Red Hat when deploying docker containers?
|
docker, openshift, redhat, devops
| 2
| 225
| 1
|
https://stackoverflow.com/questions/51930825/is-there-any-dependance-with-red-hat-using-openshift
|
42,542,577
|
sed: test for $pattern in $line before adding it
|
Several examples exist of how to use sed to add text to the end of a line based on matching a general pattern. Here's one example . In that example, the poster starts with somestuff... all: thing otherthing some other stuff and wants to add to the end of all: , like this: somestuff... all: thing otherthing anotherthing some other stuff All well and good. But, what happens if anotherthing is already there?! I'd like to find the line starting with all: , test for the existence of anotherthing , and only add it if it is missing from the line. How might I do that? My specific case is testing kernel lines in grub.conf for the existence of boot= and fips=1 , and adding either or both of those arguments only if they're not already in the line. (I want the search/add to be idempotent .)
|
sed: test for $pattern in $line before adding it Several examples exist of how to use sed to add text to the end of a line based on matching a general pattern. Here's one example . In that example, the poster starts with somestuff... all: thing otherthing some other stuff and wants to add to the end of all: , like this: somestuff... all: thing otherthing anotherthing some other stuff All well and good. But, what happens if anotherthing is already there?! I'd like to find the line starting with all: , test for the existence of anotherthing , and only add it if it is missing from the line. How might I do that? My specific case is testing kernel lines in grub.conf for the existence of boot= and fips=1 , and adding either or both of those arguments only if they're not already in the line. (I want the search/add to be idempotent .)
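GNU sed allows a nested address inside a block, so the append can be guarded by a negated match, which makes the edit idempotent. A sketch for both the generic case and the grub use case (the file paths, the kernel-line pattern, and the boot= value are assumptions):
    # generic: on lines starting with "all:", append only if anotherthing is absent
    sed -i '/^all:/{/anotherthing/! s/$/ anotherthing/}' Makefile
    # grub.conf: add fips=1 and boot=... to kernel lines only when missing
    sed -i '/^[[:space:]]*kernel/{/fips=1/! s/$/ fips=1/}' /boot/grub/grub.conf
    sed -i '/^[[:space:]]*kernel/{/boot=/! s|$| boot=/dev/sda1|}' /boot/grub/grub.conf
Running either command twice leaves the file unchanged the second time, since the inner /pattern/! guard skips lines that already carry the argument.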
|
bash, sed, redhat
| 2
| 795
| 3
|
https://stackoverflow.com/questions/42542577/sed-test-for-pattern-in-line-before-adding-it
|
41,940,438
|
Session Replication in Wildfly 10.1
|
I am trying to enable session replication in my Wildfly 10.1 application with distributable WARs. I am running on 2 instances of RedHat 7.2 on a managed host provider with full access to the OS and firewall. I don't have access to the router which our traffic is served, however the host has confirmed that multicast UDP is enabled. I have SeLinux set to minimum, the ports are open in iptables, the multicast IPs have been subscribed, and my wildfly domain mode configuration is using a cloned full-ha profile with full-ha-sockets: Here is the domain profile, which is vanilla with the exception of datasources: <profile name="ha-dev2"> <subsystem xmlns="urn:jboss:domain:logging:3.0"> <add-logging-api-dependencies value="false"/> <console-handler name="CONSOLE"> <level name="INFO"/> <formatter> <named-formatter name="COLOR-PATTERN"/> </formatter> </console-handler> <periodic-rotating-file-handler name="FILE" autoflush="true"> <formatter> <named-formatter name="PATTERN"/> </formatter> <file relative-to="jboss.server.log.dir" path="server.log"/> <suffix value=".yyyy-MM-dd"/> <append value="true"/> </periodic-rotating-file-handler> <logger category="com.arjuna"> <level name="WARN"/> </logger> <logger category="org.jboss.as.config"> <level name="DEBUG"/> </logger> <logger category="sun.rmi"> <level name="WARN"/> </logger> <root-logger> <level name="INFO"/> <handlers> <handler name="CONSOLE"/> <handler name="FILE"/> </handlers> </root-logger> <formatter name="PATTERN"> <pattern-formatter pattern="%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c] (%t) %s%e%n"/> </formatter> <formatter name="COLOR-PATTERN"> <pattern-formatter pattern="%K{level}%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%e%n"/> </formatter> </subsystem> <subsystem xmlns="urn:jboss:domain:batch-jberet:1.0"> <default-job-repository name="in-memory"/> <default-thread-pool name="batch"/> <job-repository name="in-memory"> <in-memory/> </job-repository> <thread-pool name="batch"> <max-threads count="10"/> <keepalive-time time="30" unit="seconds"/> </thread-pool> </subsystem> <subsystem xmlns="urn:jboss:domain:bean-validation:1.0"/> <subsystem xmlns="urn:jboss:domain:datasources:4.0"> <datasources> <datasource jndi-name="java:jboss/datasources/ExampleDS" pool-name="ExampleDS" enabled="true" use-java-context="true"> <connection-url>jdbc:h2:mem:test;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE</connection-url> <driver>h2</driver> <security> <user-name>sa</user-name> <password>sa</password> </security> </datasource> <!-- DATASOURCES REDACTED --> <drivers> <driver name="h2" module="com.h2database.h2"> <xa-datasource-class>org.h2.jdbcx.JdbcDataSource</xa-datasource-class> </driver> </drivers> </datasources> </subsystem> <subsystem xmlns="urn:jboss:domain:ee:4.0"> <spec-descriptor-property-replacement>false</spec-descriptor-property-replacement> <concurrent> <context-services> <context-service name="default" jndi-name="java:jboss/ee/concurrency/context/default" use-transaction-setup-provider="true"/> </context-services> <managed-thread-factories> <managed-thread-factory name="default" jndi-name="java:jboss/ee/concurrency/factory/default" context-service="default"/> </managed-thread-factories> <managed-executor-services> <managed-executor-service name="default" jndi-name="java:jboss/ee/concurrency/executor/default" context-service="default" hung-task-threshold="60000" keepalive-time="5000"/> </managed-executor-services> <managed-scheduled-executor-services> <managed-scheduled-executor-service name="default" jndi-name="java:jboss/ee/concurrency/scheduler/default" 
context-service="default" hung-task-threshold="60000" keepalive-time="3000"/> </managed-scheduled-executor-services> </concurrent> <default-bindings context-service="java:jboss/ee/concurrency/context/default" datasource="java:jboss/datasources/ExampleDS" jms-connection-factory="java:jboss/DefaultJMSConnectionFactory" managed-executor-service="java:jboss/ee/concurrency/executor/default" managed-scheduled-executor-service="java:jboss/ee/concurrency/scheduler/default" managed-thread-factory="java:jboss/ee/concurrency/factory/default"/> </subsystem> <subsystem xmlns="urn:jboss:domain:ejb3:4.0"> <session-bean> <stateless> <bean-instance-pool-ref pool-name="slsb-strict-max-pool"/> </stateless> <stateful default-access-timeout="5000" cache-ref="distributable" passivation-disabled-cache-ref="simple"/> <singleton default-access-timeout="5000"/> </session-bean> <mdb> <resource-adapter-ref resource-adapter-name="${ejb.resource-adapter-name:activemq-ra.rar}"/> <bean-instance-pool-ref pool-name="mdb-strict-max-pool"/> </mdb> <pools> <bean-instance-pools> <strict-max-pool name="slsb-strict-max-pool" derive-size="from-worker-pools" instance-acquisition-timeout="5" instance-acquisition-timeout-unit="MINUTES"/> <strict-max-pool name="mdb-strict-max-pool" derive-size="from-cpu-count" instance-acquisition-timeout="5" instance-acquisition-timeout-unit="MINUTES"/> </bean-instance-pools> </pools> <caches> <cache name="simple"/> <cache name="distributable" passivation-store-ref="infinispan" aliases="passivating clustered"/> </caches> <passivation-stores> <passivation-store name="infinispan" cache-container="ejb" max-size="10000"/> </passivation-stores> <async thread-pool-name="default"/> <timer-service thread-pool-name="default" default-data-store="default-file-store"> <data-stores> <file-data-store name="default-file-store" path="timer-service-data" relative-to="jboss.server.data.dir"/> </data-stores> </timer-service> <remote connector-ref="http-remoting-connector" thread-pool-name="default"/> <thread-pools> <thread-pool name="default"> <max-threads count="10"/> <keepalive-time time="100" unit="milliseconds"/> </thread-pool> </thread-pools> <iiop enable-by-default="false" use-qualified-name="false"/> <default-security-domain value="other"/> <default-missing-method-permissions-deny-access value="true"/> <log-system-exceptions value="true"/> </subsystem> <subsystem xmlns="urn:jboss:domain:io:1.1"> <worker name="default"/> <buffer-pool name="default"/> </subsystem> <subsystem xmlns="urn:jboss:domain:infinispan:4.0"> <cache-container name="server" aliases="singleton cluster" default-cache="default" module="org.wildfly.clustering.server"> <transport lock-timeout="60000"/> <replicated-cache name="default" mode="SYNC"> <transaction mode="BATCH"/> </replicated-cache> </cache-container> <cache-container name="web" default-cache="dist" module="org.wildfly.clustering.web.infinispan"> <transport lock-timeout="60000"/> <distributed-cache name="dist" mode="ASYNC" l1-lifespan="0" owners="2"> <locking isolation="REPEATABLE_READ"/> <transaction mode="BATCH"/> <file-store/> </distributed-cache> <distributed-cache name="concurrent" mode="SYNC" l1-lifespan="0" owners="2"> <file-store/> </distributed-cache> </cache-container> <cache-container name="ejb" aliases="sfsb" default-cache="dist" module="org.wildfly.clustering.ejb.infinispan"> <transport lock-timeout="60000"/> <distributed-cache name="dist" mode="ASYNC" l1-lifespan="0" owners="2"> <locking isolation="REPEATABLE_READ"/> <transaction mode="BATCH"/> <file-store/> 
</distributed-cache> </cache-container> <cache-container name="hibernate" default-cache="local-query" module="org.hibernate.infinispan"> <transport lock-timeout="60000"/> <local-cache name="local-query"> <eviction strategy="LRU" max-entries="10000"/> <expiration max-idle="100000"/> </local-cache> <invalidation-cache name="entity" mode="SYNC"> <transaction mode="NON_XA"/> <eviction strategy="LRU" max-entries="10000"/> <expiration max-idle="100000"/> </invalidation-cache> <replicated-cache name="timestamps" mode="ASYNC"/> </cache-container> </subsystem> <subsystem xmlns="urn:jboss:domain:iiop-openjdk:1.0"> <orb socket-binding="iiop" ssl-socket-binding="iiop-ssl"/> <initializers security="identity" transactions="spec"/> </subsystem> <subsystem xmlns="urn:jboss:domain:jaxrs:1.0"/> <subsystem xmlns="urn:jboss:domain:jca:4.0"> <archive-validation enabled="true" fail-on-error="true" fail-on-warn="false"/> <bean-validation enabled="true"/> <default-workmanager> <short-running-threads> <core-threads count="50"/> <queue-length count="50"/> <max-threads count="50"/> <keepalive-time time="10" unit="seconds"/> </short-running-threads> <long-running-threads> <core-threads count="50"/> <queue-length count="50"/> <max-threads count="50"/> <keepalive-time time="10" unit="seconds"/> </long-running-threads> </default-workmanager> <cached-connection-manager/> </subsystem> <subsystem xmlns="urn:jboss:domain:jdr:1.0"/> <subsystem xmlns="urn:jboss:domain:jgroups:4.0"> <channels default="ee"> <channel name="ee" stack="udp"/> </channels> <stacks> <stack name="udp"> <transport type="UDP" socket-binding="jgroups-udp"/> <protocol type="PING"/> <protocol type="MERGE3"/> <protocol type="FD_SOCK" socket-binding="jgroups-udp-fd"/> <protocol type="FD_ALL"/> <protocol type="VERIFY_SUSPECT"/> <protocol type="pbcast.NAKACK2"/> <protocol type="UNICAST3"/> <protocol type="pbcast.STABLE"/> <protocol type="pbcast.GMS"/> <protocol type="UFC"/> <protocol type="MFC"/> <protocol type="FRAG2"/> </stack> <stack name="tcp"> <transport type="TCP" socket-binding="jgroups-tcp"/> <protocol type="MPING" socket-binding="jgroups-mping"/> <protocol type="MERGE3"/> <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/> <protocol type="FD"/> <protocol type="VERIFY_SUSPECT"/> <protocol type="pbcast.NAKACK2"/> <protocol type="UNICAST3"/> <protocol type="pbcast.STABLE"/> <protocol type="pbcast.GMS"/> <protocol type="MFC"/> <protocol type="FRAG2"/> </stack> </stacks> </subsystem> <subsystem xmlns="urn:jboss:domain:jmx:1.3"> <expose-resolved-model/> <expose-expression-model/> </subsystem> <subsystem xmlns="urn:jboss:domain:jpa:1.1"> <jpa default-datasource="" default-extended-persistence-inheritance="DEEP"/> </subsystem> <subsystem xmlns="urn:jboss:domain:jsf:1.0"/> <subsystem xmlns="urn:jboss:domain:jsr77:1.0"/> <subsystem xmlns="urn:jboss:domain:mail:2.0"> <mail-session name="default" jndi-name="java:jboss/mail/Default"> <smtp-server outbound-socket-binding-ref="mail-smtp"/> </mail-session> </subsystem> <subsystem xmlns="urn:jboss:domain:messaging-activemq:1.0"> <server name="default"> <cluster password="${jboss.messaging.cluster.password:@password@}"/> <bindings-directory/> <journal-directory/> <large-messages-directory/> <paging-directory/> <security-setting name="#"> <role name="guest" send="true" consume="true" create-non-durable-queue="true" delete-non-durable-queue="true"/> </security-setting> <address-setting name="#" dead-letter-address="jms.queue.DLQ" expiry-address="jms.queue.ExpiryQueue" max-size-bytes="10485760" 
page-size-bytes="2097152" message-counter-history-day-limit="10" redistribution-delay="1000"/> <http-connector name="http-connector" socket-binding="http" endpoint="http-acceptor"/> <http-connector name="http-connector-throughput" socket-binding="http" endpoint="http-acceptor-throughput"> <param name="batch-delay" value="50"/> </http-connector> <in-vm-connector name="in-vm" server-id="0"/> <http-acceptor name="http-acceptor" http-listener="default"/> <http-acceptor name="http-acceptor-throughput" http-listener="default"> <param name="batch-delay" value="50"/> <param name="direct-deliver" value="false"/> </http-acceptor> <in-vm-acceptor name="in-vm" server-id="0"/> <broadcast-group name="bg-group1" jgroups-channel="activemq-cluster" connectors="http-connector"/> <discovery-group name="dg-group1" jgroups-channel="activemq-cluster"/> <cluster-connection name="my-cluster" address="jms" connector-name="http-connector" discovery-group="dg-group1"/> <jms-queue name="ExpiryQueue" entries="java:/jms/queue/ExpiryQueue"/> <jms-queue name="DLQ" entries="java:/jms/queue/DLQ"/> <connection-factory name="InVmConnectionFactory" entries="java:/ConnectionFactory" connectors="in-vm"/> <connection-factory name="RemoteConnectionFactory" entries="java:jboss/exported/jms/RemoteConnectionFactory" connectors="http-connector" ha="true" block-on-acknowledge="true" reconnect-attempts="-1"/> <pooled-connection-factory name="activemq-ra" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory" connectors="in-vm" transaction="xa"/> </server> </subsystem> <subsystem xmlns="urn:jboss:domain:modcluster:2.0"> <mod-cluster-config advertise-socket="modcluster" balancer="dev-ha-server-group" connector="ajp"> <dynamic-load-provider> <load-metric type="busyness"/> </dynamic-load-provider> </mod-cluster-config> </subsystem> <subsystem xmlns="urn:jboss:domain:naming:2.0"> <remote-naming/> </subsystem> <subsystem xmlns="urn:jboss:domain:pojo:1.0"/> <subsystem xmlns="urn:jboss:domain:remoting:3.0"> <http-connector name="http-remoting-connector" connector-ref="default" security-realm="ApplicationRealm"/> </subsystem> <subsystem xmlns="urn:jboss:domain:resource-adapters:4.0"/> <subsystem xmlns="urn:jboss:domain:request-controller:1.0"/> <subsystem xmlns="urn:jboss:domain:sar:1.0"/> <subsystem xmlns="urn:jboss:domain:security:1.2"> <security-domains> <security-domain name="other" cache-type="default"> <authentication> <login-module code="Remoting" flag="optional"> <module-option name="password-stacking" value="useFirstPass"/> </login-module> <login-module code="RealmDirect" flag="required"> <module-option name="password-stacking" value="useFirstPass"/> </login-module> </authentication> </security-domain> <security-domain name="jboss-web-policy" cache-type="default"> <authorization> <policy-module code="Delegating" flag="required"/> </authorization> </security-domain> <security-domain name="jboss-ejb-policy" cache-type="default"> <authorization> <policy-module code="Delegating" flag="required"/> </authorization> </security-domain> <security-domain name="jaspitest" cache-type="default"> <authentication-jaspi> <login-module-stack name="dummy"> <login-module code="Dummy" flag="optional"/> </login-module-stack> <auth-module code="Dummy"/> </authentication-jaspi> </security-domain> </security-domains> </subsystem> <subsystem xmlns="urn:jboss:domain:security-manager:1.0"> <deployment-permissions> <maximum-set> <permission class="java.security.AllPermission"/> </maximum-set> </deployment-permissions> </subsystem> <subsystem 
xmlns="urn:jboss:domain:singleton:1.0"> <singleton-policies default="default"> <singleton-policy name="default" cache-container="server"> <simple-election-policy/> </singleton-policy> </singleton-policies> </subsystem> <subsystem xmlns="urn:jboss:domain:transactions:3.0"> <core-environment> <process-id> <uuid/> </process-id> </core-environment> <recovery-environment socket-binding="txn-recovery-environment" status-socket-binding="txn-status-manager"/> </subsystem> <subsystem xmlns="urn:jboss:domain:undertow:3.1"> <buffer-cache name="default"/> <server name="default-server"> <ajp-listener name="ajp" socket-binding="ajp" max-post-size="26214400"/> <http-listener name="default" socket-binding="http" max-post-size="26214400" redirect-socket="https" enable-http2="true"/> <https-listener name="https" socket-binding="https" max-post-size="26214400" security-realm="ApplicationRealm" enable-http2="true"/> <host name="default-host" alias="localhost"> <location name="/" handler="welcome-content"/> <filter-ref name="server-header"/> <filter-ref name="x-powered-by-header"/> <filter-ref name="gzipFilter" predicate="exists['%{o,Content-Type}'] and regex[pattern='(?:application/javascript|text/css|text/html|text/xml|application/json)(;.*)?', value=%{o,Content-Type}, full-match=true]"/> </host> </server> <servlet-container name="default"> <jsp-config trim-spaces="true"/> <websockets/> </servlet-container> <handlers> <file name="welcome-content" path="${jboss.home.dir}/welcome-content"/> </handlers> <filters> <response-header name="server-header" header-name="Server" header-value="WildFly/10"/> <response-header name="x-powered-by-header" header-name="X-Powered-By" header-value="Undertow/1"/> <gzip name="gzipFilter"/> </filters> </subsystem> <subsystem xmlns="urn:jboss:domain:webservices:2.0"> <wsdl-host>${jboss.bind.address:127.0.0.1}</wsdl-host> <endpoint-config name="Standard-Endpoint-Config"/> <endpoint-config name="Recording-Endpoint-Config"> <pre-handler-chain name="recording-handlers" protocol-bindings="##SOAP11_HTTP ##SOAP11_HTTP_MTOM ##SOAP12_HTTP ##SOAP12_HTTP_MTOM"> <handler name="RecordingHandler" class="org.jboss.ws.common.invocation.RecordingServerHandler"/> </pre-handler-chain> </endpoint-config> <client-config name="Standard-Client-Config"/> </subsystem> <subsystem xmlns="urn:jboss:domain:weld:3.0"/> </profile> This the socket binding group: <socket-binding-group name="dev-full-ha-sockets" default-interface="public"> <socket-binding name="ajp" port="${jboss.ajp.port:8009}"/> <socket-binding name="http" port="${jboss.http.port:8080}"/> <socket-binding name="https" port="${jboss.https.port:18443}"/> <socket-binding name="iiop" interface="unsecure" port="3528"/> <socket-binding name="iiop-ssl" interface="unsecure" port="3529"/> <socket-binding name="jgroups-mping" port="0" multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45700"/> <socket-binding name="jgroups-tcp" port="7600"/> <socket-binding name="jgroups-tcp-fd" port="57600"/> <socket-binding name="jgroups-udp" port="55200" multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45688"/> <socket-binding name="jgroups-udp-fd" port="54200"/> <socket-binding name="modcluster" port="0" multicast-address="224.0.1.105" multicast-port="23364"/> <socket-binding name="txn-recovery-environment" port="4712"/> <socket-binding name="txn-status-manager" port="4713"/> <outbound-socket-binding name="mail-smtp"> <remote-destination host="localhost" port="25"/> </outbound-socket-binding> 
</socket-binding-group> And finally, the server-group: <server-group name="dev-ha-server-group" profile="ha-dev2"> <jvm name="default"> <heap size="256m" max-size="1024m"/> </jvm> <socket-binding-group ref="dev-full-ha-sockets"/> <deployments> <!-- Deployments redacted --> </deployments> </server-group> Using TCPDump, I can see traffic for modcluster from both of my servers. I have also run the McastReceiverTest (src: [URL] ) via: java org.jgroups.tests.McastReceiverTest -mcast_addr 230.0.0.4 -port 45700 and have used the following command: printf "GET / HTTP/1.0\r\n\r\n" | nc -vu 230.0.0.4 45700 and see the UDP traffic coming through. I am not seeing any JGroups traffic from WildFly, though. In my logs, when I look for the subscribed channels, I see "Received new cluster view for channel server: [app-one:server-one|0] (1) [app-one:server-one]" But I never see my second server on either instance. When I test, I have an F5 load balancer in front of both RedHat instances. I have a few assumptions that I am hoping to clear up: Do I need to have Apache enabled for Modcluster? I'd assume not, as it appears to be used for load balancing if you don't have a separate balancing mechanism. Is Session Replication the same as running a "hot-hot" scenario so that 2 applications can run and keep a session alive? My understanding is that I should be able to log into the application on server A and server B would be able to keep that session alive as well. We are trying to prevent the need for session stickiness. Can this run in a multi-server environment or is it only for 2 instances on the same local machine? I would assume that it should not matter whether this is across a local machine or on 2 separate physical machines/VMs behind the same firewall. Thank you. I found a solution which had been eluding me. The problem with my configuration was in my bind IP. I was using 127.0.0.1 for my jboss.bind.address as I had Apache in front of my Wildfly configuration. I changed the bind IP to the IP of the NIC (in my case it was a 10.* address).
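For reference, the fix described above amounts to binding WildFly to the NIC address instead of loopback, so the JGroups multicast traffic actually leaves the host. A sketch (the address is illustrative):
    # either via the bind switches or the equivalent system property:
    ./bin/domain.sh -b 10.0.0.12 -bmanagement 10.0.0.12
    ./bin/domain.sh -Djboss.bind.address=10.0.0.12
With jboss.bind.address on 127.0.0.1, the jgroups-udp sockets (bound to the public interface in the socket-binding group above) never reach the wire, which matches the symptom of each node seeing only its own cluster view.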
|
Session Replication in Wildfly 10.1 I am trying to enable session replication in my Wildfly 10.1 application with distributable WARs. I am running on 2 instances of RedHat 7.2 on a managed host provider with full access to the OS and firewall. I don't have access to the router which our traffic is served, however the host has confirmed that multicast UDP is enabled. I have SeLinux set to minimum, the ports are open in iptables, the multicast IPs have been subscribed, and my wildfly domain mode configuration is using a cloned full-ha profile with full-ha-sockets: Here is the domain profile, which is vanilla with the exception of datasources: <profile name="ha-dev2"> <subsystem xmlns="urn:jboss:domain:logging:3.0"> <add-logging-api-dependencies value="false"/> <console-handler name="CONSOLE"> <level name="INFO"/> <formatter> <named-formatter name="COLOR-PATTERN"/> </formatter> </console-handler> <periodic-rotating-file-handler name="FILE" autoflush="true"> <formatter> <named-formatter name="PATTERN"/> </formatter> <file relative-to="jboss.server.log.dir" path="server.log"/> <suffix value=".yyyy-MM-dd"/> <append value="true"/> </periodic-rotating-file-handler> <logger category="com.arjuna"> <level name="WARN"/> </logger> <logger category="org.jboss.as.config"> <level name="DEBUG"/> </logger> <logger category="sun.rmi"> <level name="WARN"/> </logger> <root-logger> <level name="INFO"/> <handlers> <handler name="CONSOLE"/> <handler name="FILE"/> </handlers> </root-logger> <formatter name="PATTERN"> <pattern-formatter pattern="%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c] (%t) %s%e%n"/> </formatter> <formatter name="COLOR-PATTERN"> <pattern-formatter pattern="%K{level}%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%e%n"/> </formatter> </subsystem> <subsystem xmlns="urn:jboss:domain:batch-jberet:1.0"> <default-job-repository name="in-memory"/> <default-thread-pool name="batch"/> <job-repository name="in-memory"> <in-memory/> </job-repository> <thread-pool name="batch"> <max-threads count="10"/> <keepalive-time time="30" unit="seconds"/> </thread-pool> </subsystem> <subsystem xmlns="urn:jboss:domain:bean-validation:1.0"/> <subsystem xmlns="urn:jboss:domain:datasources:4.0"> <datasources> <datasource jndi-name="java:jboss/datasources/ExampleDS" pool-name="ExampleDS" enabled="true" use-java-context="true"> <connection-url>jdbc:h2:mem:test;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE</connection-url> <driver>h2</driver> <security> <user-name>sa</user-name> <password>sa</password> </security> </datasource> <!-- DATASOURCES REDACTED --> <drivers> <driver name="h2" module="com.h2database.h2"> <xa-datasource-class>org.h2.jdbcx.JdbcDataSource</xa-datasource-class> </driver> </drivers> </datasources> </subsystem> <subsystem xmlns="urn:jboss:domain:ee:4.0"> <spec-descriptor-property-replacement>false</spec-descriptor-property-replacement> <concurrent> <context-services> <context-service name="default" jndi-name="java:jboss/ee/concurrency/context/default" use-transaction-setup-provider="true"/> </context-services> <managed-thread-factories> <managed-thread-factory name="default" jndi-name="java:jboss/ee/concurrency/factory/default" context-service="default"/> </managed-thread-factories> <managed-executor-services> <managed-executor-service name="default" jndi-name="java:jboss/ee/concurrency/executor/default" context-service="default" hung-task-threshold="60000" keepalive-time="5000"/> </managed-executor-services> <managed-scheduled-executor-services> <managed-scheduled-executor-service name="default" 
jndi-name="java:jboss/ee/concurrency/scheduler/default" context-service="default" hung-task-threshold="60000" keepalive-time="3000"/> </managed-scheduled-executor-services> </concurrent> <default-bindings context-service="java:jboss/ee/concurrency/context/default" datasource="java:jboss/datasources/ExampleDS" jms-connection-factory="java:jboss/DefaultJMSConnectionFactory" managed-executor-service="java:jboss/ee/concurrency/executor/default" managed-scheduled-executor-service="java:jboss/ee/concurrency/scheduler/default" managed-thread-factory="java:jboss/ee/concurrency/factory/default"/> </subsystem> <subsystem xmlns="urn:jboss:domain:ejb3:4.0"> <session-bean> <stateless> <bean-instance-pool-ref pool-name="slsb-strict-max-pool"/> </stateless> <stateful default-access-timeout="5000" cache-ref="distributable" passivation-disabled-cache-ref="simple"/> <singleton default-access-timeout="5000"/> </session-bean> <mdb> <resource-adapter-ref resource-adapter-name="${ejb.resource-adapter-name:activemq-ra.rar}"/> <bean-instance-pool-ref pool-name="mdb-strict-max-pool"/> </mdb> <pools> <bean-instance-pools> <strict-max-pool name="slsb-strict-max-pool" derive-size="from-worker-pools" instance-acquisition-timeout="5" instance-acquisition-timeout-unit="MINUTES"/> <strict-max-pool name="mdb-strict-max-pool" derive-size="from-cpu-count" instance-acquisition-timeout="5" instance-acquisition-timeout-unit="MINUTES"/> </bean-instance-pools> </pools> <caches> <cache name="simple"/> <cache name="distributable" passivation-store-ref="infinispan" aliases="passivating clustered"/> </caches> <passivation-stores> <passivation-store name="infinispan" cache-container="ejb" max-size="10000"/> </passivation-stores> <async thread-pool-name="default"/> <timer-service thread-pool-name="default" default-data-store="default-file-store"> <data-stores> <file-data-store name="default-file-store" path="timer-service-data" relative-to="jboss.server.data.dir"/> </data-stores> </timer-service> <remote connector-ref="http-remoting-connector" thread-pool-name="default"/> <thread-pools> <thread-pool name="default"> <max-threads count="10"/> <keepalive-time time="100" unit="milliseconds"/> </thread-pool> </thread-pools> <iiop enable-by-default="false" use-qualified-name="false"/> <default-security-domain value="other"/> <default-missing-method-permissions-deny-access value="true"/> <log-system-exceptions value="true"/> </subsystem> <subsystem xmlns="urn:jboss:domain:io:1.1"> <worker name="default"/> <buffer-pool name="default"/> </subsystem> <subsystem xmlns="urn:jboss:domain:infinispan:4.0"> <cache-container name="server" aliases="singleton cluster" default-cache="default" module="org.wildfly.clustering.server"> <transport lock-timeout="60000"/> <replicated-cache name="default" mode="SYNC"> <transaction mode="BATCH"/> </replicated-cache> </cache-container> <cache-container name="web" default-cache="dist" module="org.wildfly.clustering.web.infinispan"> <transport lock-timeout="60000"/> <distributed-cache name="dist" mode="ASYNC" l1-lifespan="0" owners="2"> <locking isolation="REPEATABLE_READ"/> <transaction mode="BATCH"/> <file-store/> </distributed-cache> <distributed-cache name="concurrent" mode="SYNC" l1-lifespan="0" owners="2"> <file-store/> </distributed-cache> </cache-container> <cache-container name="ejb" aliases="sfsb" default-cache="dist" module="org.wildfly.clustering.ejb.infinispan"> <transport lock-timeout="60000"/> <distributed-cache name="dist" mode="ASYNC" l1-lifespan="0" owners="2"> <locking 
isolation="REPEATABLE_READ"/> <transaction mode="BATCH"/> <file-store/> </distributed-cache> </cache-container> <cache-container name="hibernate" default-cache="local-query" module="org.hibernate.infinispan"> <transport lock-timeout="60000"/> <local-cache name="local-query"> <eviction strategy="LRU" max-entries="10000"/> <expiration max-idle="100000"/> </local-cache> <invalidation-cache name="entity" mode="SYNC"> <transaction mode="NON_XA"/> <eviction strategy="LRU" max-entries="10000"/> <expiration max-idle="100000"/> </invalidation-cache> <replicated-cache name="timestamps" mode="ASYNC"/> </cache-container> </subsystem> <subsystem xmlns="urn:jboss:domain:iiop-openjdk:1.0"> <orb socket-binding="iiop" ssl-socket-binding="iiop-ssl"/> <initializers security="identity" transactions="spec"/> </subsystem> <subsystem xmlns="urn:jboss:domain:jaxrs:1.0"/> <subsystem xmlns="urn:jboss:domain:jca:4.0"> <archive-validation enabled="true" fail-on-error="true" fail-on-warn="false"/> <bean-validation enabled="true"/> <default-workmanager> <short-running-threads> <core-threads count="50"/> <queue-length count="50"/> <max-threads count="50"/> <keepalive-time time="10" unit="seconds"/> </short-running-threads> <long-running-threads> <core-threads count="50"/> <queue-length count="50"/> <max-threads count="50"/> <keepalive-time time="10" unit="seconds"/> </long-running-threads> </default-workmanager> <cached-connection-manager/> </subsystem> <subsystem xmlns="urn:jboss:domain:jdr:1.0"/> <subsystem xmlns="urn:jboss:domain:jgroups:4.0"> <channels default="ee"> <channel name="ee" stack="udp"/> </channels> <stacks> <stack name="udp"> <transport type="UDP" socket-binding="jgroups-udp"/> <protocol type="PING"/> <protocol type="MERGE3"/> <protocol type="FD_SOCK" socket-binding="jgroups-udp-fd"/> <protocol type="FD_ALL"/> <protocol type="VERIFY_SUSPECT"/> <protocol type="pbcast.NAKACK2"/> <protocol type="UNICAST3"/> <protocol type="pbcast.STABLE"/> <protocol type="pbcast.GMS"/> <protocol type="UFC"/> <protocol type="MFC"/> <protocol type="FRAG2"/> </stack> <stack name="tcp"> <transport type="TCP" socket-binding="jgroups-tcp"/> <protocol type="MPING" socket-binding="jgroups-mping"/> <protocol type="MERGE3"/> <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/> <protocol type="FD"/> <protocol type="VERIFY_SUSPECT"/> <protocol type="pbcast.NAKACK2"/> <protocol type="UNICAST3"/> <protocol type="pbcast.STABLE"/> <protocol type="pbcast.GMS"/> <protocol type="MFC"/> <protocol type="FRAG2"/> </stack> </stacks> </subsystem> <subsystem xmlns="urn:jboss:domain:jmx:1.3"> <expose-resolved-model/> <expose-expression-model/> </subsystem> <subsystem xmlns="urn:jboss:domain:jpa:1.1"> <jpa default-datasource="" default-extended-persistence-inheritance="DEEP"/> </subsystem> <subsystem xmlns="urn:jboss:domain:jsf:1.0"/> <subsystem xmlns="urn:jboss:domain:jsr77:1.0"/> <subsystem xmlns="urn:jboss:domain:mail:2.0"> <mail-session name="default" jndi-name="java:jboss/mail/Default"> <smtp-server outbound-socket-binding-ref="mail-smtp"/> </mail-session> </subsystem> <subsystem xmlns="urn:jboss:domain:messaging-activemq:1.0"> <server name="default"> <cluster password="${jboss.messaging.cluster.password:@password@}"/> <bindings-directory/> <journal-directory/> <large-messages-directory/> <paging-directory/> <security-setting name="#"> <role name="guest" send="true" consume="true" create-non-durable-queue="true" delete-non-durable-queue="true"/> </security-setting> <address-setting name="#" dead-letter-address="jms.queue.DLQ" 
expiry-address="jms.queue.ExpiryQueue" max-size-bytes="10485760" page-size-bytes="2097152" message-counter-history-day-limit="10" redistribution-delay="1000"/> <http-connector name="http-connector" socket-binding="http" endpoint="http-acceptor"/> <http-connector name="http-connector-throughput" socket-binding="http" endpoint="http-acceptor-throughput"> <param name="batch-delay" value="50"/> </http-connector> <in-vm-connector name="in-vm" server-id="0"/> <http-acceptor name="http-acceptor" http-listener="default"/> <http-acceptor name="http-acceptor-throughput" http-listener="default"> <param name="batch-delay" value="50"/> <param name="direct-deliver" value="false"/> </http-acceptor> <in-vm-acceptor name="in-vm" server-id="0"/> <broadcast-group name="bg-group1" jgroups-channel="activemq-cluster" connectors="http-connector"/> <discovery-group name="dg-group1" jgroups-channel="activemq-cluster"/> <cluster-connection name="my-cluster" address="jms" connector-name="http-connector" discovery-group="dg-group1"/> <jms-queue name="ExpiryQueue" entries="java:/jms/queue/ExpiryQueue"/> <jms-queue name="DLQ" entries="java:/jms/queue/DLQ"/> <connection-factory name="InVmConnectionFactory" entries="java:/ConnectionFactory" connectors="in-vm"/> <connection-factory name="RemoteConnectionFactory" entries="java:jboss/exported/jms/RemoteConnectionFactory" connectors="http-connector" ha="true" block-on-acknowledge="true" reconnect-attempts="-1"/> <pooled-connection-factory name="activemq-ra" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory" connectors="in-vm" transaction="xa"/> </server> </subsystem> <subsystem xmlns="urn:jboss:domain:modcluster:2.0"> <mod-cluster-config advertise-socket="modcluster" balancer="dev-ha-server-group" connector="ajp"> <dynamic-load-provider> <load-metric type="busyness"/> </dynamic-load-provider> </mod-cluster-config> </subsystem> <subsystem xmlns="urn:jboss:domain:naming:2.0"> <remote-naming/> </subsystem> <subsystem xmlns="urn:jboss:domain:pojo:1.0"/> <subsystem xmlns="urn:jboss:domain:remoting:3.0"> <http-connector name="http-remoting-connector" connector-ref="default" security-realm="ApplicationRealm"/> </subsystem> <subsystem xmlns="urn:jboss:domain:resource-adapters:4.0"/> <subsystem xmlns="urn:jboss:domain:request-controller:1.0"/> <subsystem xmlns="urn:jboss:domain:sar:1.0"/> <subsystem xmlns="urn:jboss:domain:security:1.2"> <security-domains> <security-domain name="other" cache-type="default"> <authentication> <login-module code="Remoting" flag="optional"> <module-option name="password-stacking" value="useFirstPass"/> </login-module> <login-module code="RealmDirect" flag="required"> <module-option name="password-stacking" value="useFirstPass"/> </login-module> </authentication> </security-domain> <security-domain name="jboss-web-policy" cache-type="default"> <authorization> <policy-module code="Delegating" flag="required"/> </authorization> </security-domain> <security-domain name="jboss-ejb-policy" cache-type="default"> <authorization> <policy-module code="Delegating" flag="required"/> </authorization> </security-domain> <security-domain name="jaspitest" cache-type="default"> <authentication-jaspi> <login-module-stack name="dummy"> <login-module code="Dummy" flag="optional"/> </login-module-stack> <auth-module code="Dummy"/> </authentication-jaspi> </security-domain> </security-domains> </subsystem> <subsystem xmlns="urn:jboss:domain:security-manager:1.0"> <deployment-permissions> <maximum-set> <permission class="java.security.AllPermission"/> </maximum-set> 
</deployment-permissions> </subsystem> <subsystem xmlns="urn:jboss:domain:singleton:1.0"> <singleton-policies default="default"> <singleton-policy name="default" cache-container="server"> <simple-election-policy/> </singleton-policy> </singleton-policies> </subsystem> <subsystem xmlns="urn:jboss:domain:transactions:3.0"> <core-environment> <process-id> <uuid/> </process-id> </core-environment> <recovery-environment socket-binding="txn-recovery-environment" status-socket-binding="txn-status-manager"/> </subsystem> <subsystem xmlns="urn:jboss:domain:undertow:3.1"> <buffer-cache name="default"/> <server name="default-server"> <ajp-listener name="ajp" socket-binding="ajp" max-post-size="26214400"/> <http-listener name="default" socket-binding="http" max-post-size="26214400" redirect-socket="https" enable-http2="true"/> <https-listener name="https" socket-binding="https" max-post-size="26214400" security-realm="ApplicationRealm" enable-http2="true"/> <host name="default-host" alias="localhost"> <location name="/" handler="welcome-content"/> <filter-ref name="server-header"/> <filter-ref name="x-powered-by-header"/> <filter-ref name="gzipFilter" predicate="exists['%{o,Content-Type}'] and regex[pattern='(?:application/javascript|text/css|text/html|text/xml|application/json)(;.*)?', value=%{o,Content-Type}, full-match=true]"/> </host> </server> <servlet-container name="default"> <jsp-config trim-spaces="true"/> <websockets/> </servlet-container> <handlers> <file name="welcome-content" path="${jboss.home.dir}/welcome-content"/> </handlers> <filters> <response-header name="server-header" header-name="Server" header-value="WildFly/10"/> <response-header name="x-powered-by-header" header-name="X-Powered-By" header-value="Undertow/1"/> <gzip name="gzipFilter"/> </filters> </subsystem> <subsystem xmlns="urn:jboss:domain:webservices:2.0"> <wsdl-host>${jboss.bind.address:127.0.0.1}</wsdl-host> <endpoint-config name="Standard-Endpoint-Config"/> <endpoint-config name="Recording-Endpoint-Config"> <pre-handler-chain name="recording-handlers" protocol-bindings="##SOAP11_HTTP ##SOAP11_HTTP_MTOM ##SOAP12_HTTP ##SOAP12_HTTP_MTOM"> <handler name="RecordingHandler" class="org.jboss.ws.common.invocation.RecordingServerHandler"/> </pre-handler-chain> </endpoint-config> <client-config name="Standard-Client-Config"/> </subsystem> <subsystem xmlns="urn:jboss:domain:weld:3.0"/> </profile> This the socket binding group: <socket-binding-group name="dev-full-ha-sockets" default-interface="public"> <socket-binding name="ajp" port="${jboss.ajp.port:8009}"/> <socket-binding name="http" port="${jboss.http.port:8080}"/> <socket-binding name="https" port="${jboss.https.port:18443}"/> <socket-binding name="iiop" interface="unsecure" port="3528"/> <socket-binding name="iiop-ssl" interface="unsecure" port="3529"/> <socket-binding name="jgroups-mping" port="0" multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45700"/> <socket-binding name="jgroups-tcp" port="7600"/> <socket-binding name="jgroups-tcp-fd" port="57600"/> <socket-binding name="jgroups-udp" port="55200" multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45688"/> <socket-binding name="jgroups-udp-fd" port="54200"/> <socket-binding name="modcluster" port="0" multicast-address="224.0.1.105" multicast-port="23364"/> <socket-binding name="txn-recovery-environment" port="4712"/> <socket-binding name="txn-status-manager" port="4713"/> <outbound-socket-binding name="mail-smtp"> <remote-destination host="localhost" 
port="25"/> </outbound-socket-binding> </socket-binding-group> And finally, the server-group: <server-group name="dev-ha-server-group" profile="ha-dev2"> <jvm name="default"> <heap size="256m" max-size="1024m"/> </jvm> <socket-binding-group ref="dev-full-ha-sockets"/> <deployments> <!-- Deployments redacted --> </deployments> </server-group> Using TCPDump, I can see traffic for modcluster from both of my servers. I have also run the McastReceiverTest (src: [URL] ) via: java org.jgroups.tests.McastReceiverTest -mcast_addr 230.0.0.4 -port 45700 and have used the following command: printf "GET / HTTP/1.0\r\n\r\n" | nc -vu 230.0.0.4 45700 and see the UDP traffic coming through. I am not seeing anything for the jgroups from wildfly though. In my logs, when I look for the subscribed channels, I see "Received new cluster view for channel server: [app-one:server-one|0] (1) [app-one:server-one]" But I never see my second server on either instance. When I test, I have an F5 load balancer in front of both RedHat instances. I have a few assumptions that I am hoping to clear up: Do I need to have Apache enabled for Modcluster? I'd assume not as it appears to be used for load balancing if you don't have a separate balancing mechanism. Is Session Replication the same as running a "hot-hot" scenario so that 2 applications can run and keep a session alive? My understanding is that I should be able to log into the application on server A and server B would be able to keep that session alive as well. We are trying to prevent the need for session stickyness. Can this run in a multi-server environment or is it only for 2 instances on the same local machine? I would assume that it should not matter if this is across a local machine or on 2 separate physicals/vms behind the same firewall. Thank you. I found a solution which has been eluding me. The problem with my configuration was in my bind IP. I was using 127.0.0.1 for my jboss.bind.address as I had Apache in front of my Wildfly configuration. I changed the bind IP to the IP of the NIC (in my case it was a 10.* address).
|
java, wildfly, redhat, wildfly-10, session-replication
| 2
| 4,321
| 2
|
https://stackoverflow.com/questions/41940438/session-replication-in-wildfly-10-1
|
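Editor's note on the bind-address fix the asker describes: a minimal sketch of starting WildFly with an explicit NIC address instead of 127.0.0.1. The 10.0.0.5 address is a placeholder, and jboss.bind.address, jboss.bind.address.private and jboss.bind.address.management are standard WildFly properties, but in domain mode the interface addresses usually live in host.xml, so verify against your interface definitions there. JGroups binds to the "private" interface in the ha profiles, so leaving that on 127.0.0.1 keeps multicast traffic from ever leaving the machine.

    # hypothetical NIC address - replace with the machine's real 10.* address
    JBOSS_IP=10.0.0.5
    ./bin/domain.sh \
        -Djboss.bind.address=$JBOSS_IP \
        -Djboss.bind.address.private=$JBOSS_IP \
        -Djboss.bind.address.management=$JBOSS_IP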
33,652,294
|
how install rvm on RHL7 using centos repo
|
How to install rvm(ruby) on RHL7 using a CentOS repo? I know that if we are using a CentOS repository we should be using the CentOS OS and not RedHat, but we have proprietary software that requires RedHat. When I try to install ruby 1.9.3 using rvm I get this: rvm install 1.9.3 Searching for binary rubies, this might take some time. No binary rubies available for: redhat/6/x86_64/ruby-1.9.3-p551. Continuing with compilation. Please read 'rvm help mount' to get more information on binary rubies. Checking requirements for redhat. Unable to locate SystemId file. Is this system registered? Our client does not have a registered system with RedHat, so I configured a CentOS repository. But how can I tell RVM to use this CentOS repository?
|
how install rvm on RHL7 using centos repo How to install rvm(ruby) on RHL7 using a CentOS repo? I know that if we are using a CentOS repository we should be using the CentOS OS and not RedHat, but we have proprietary software that requires RedHat. When I try to install ruby 1.9.3 using rvm I get this: rvm install 1.9.3 Searching for binary rubies, this might take some time. No binary rubies available for: redhat/6/x86_64/ruby-1.9.3-p551. Continuing with compilation. Please read 'rvm help mount' to get more information on binary rubies. Checking requirements for redhat. Unable to locate SystemId file. Is this system registered? Our client does not have a registered system with RedHat, so I configured a CentOS repository. But how can I tell RVM to use this CentOS repository?
|
centos, rvm, redhat
| 2
| 1,129
| 2
|
https://stackoverflow.com/questions/33652294/how-install-rvm-on-rhl7-using-centos-repo
|
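Editor's note: one hedged workaround, assuming the CentOS repository is already configured in yum, is to install the build dependencies yourself and then tell RVM to stop probing the unregistered RedHat package manager. The package list below is approximate; rvm autolibs disable is a real RVM command.

    # install typical ruby build dependencies from the CentOS repo
    yum install -y gcc make patch autoconf automake bison libtool \
        libffi-devel readline-devel sqlite-devel zlib-devel openssl-devel
    # stop RVM from trying to manage packages through RedHat's tooling
    rvm autolibs disable
    rvm install 1.9.3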
32,352,540
|
Hostname resolution fails when running docker build from a docker container
|
We are running a Jenkins CI server from a docker container, started with docker-compose. The Jenkins server is running some jobs which are pulling projects from git and building docker containers the standard way, executing docker build . on them. To be able to use docker inside the docker container we are mounting over /var/run/docker.sock with docker-compose to the Jenkins container. Some of the Dockerfiles we are trying to build there are downloading files from our fileserver (3rd party installation images for example). Such a Dockerfile command looks like RUN curl -o xx.zip [URL] . The fileserver hostname gets resolved through the /etc/hosts file and it resolves to the host's public IP which runs the Jenkins CI server. The docker-compose config for the Jenkins container also includes the extra_hosts parameter pointing the fileserver to the host's public IP. The problem is that building the docker container with Jenkins running in its own container fails with a plain Unknown host: fileserver message. If I enter the Jenkins container via docker exec -it <id> , I can execute the same curl command and it resolves the host, but if I try to run docker build . there, which tries to run the same curl command, it fails to resolve the host. Our host runs RHEL and I failed to reproduce the problem on my desktop Arch Linux, so I suspect it's some RedHat-specific issue (again).
|
Hostname resolution fails when running docker build from a docker container We are running a Jenkins CI server from a docker container, started with docker-compose. The Jenkins server is running some jobs which are pulling projects from git and building docker containers the standard way, executing docker build . on them. To be able to use docker inside the docker container we are mounting over /var/run/docker.sock with docker-compose to the Jenkins container. Some of the Dockerfiles we are trying to build there are downloading files from our fileserver (3rd party installation images for example). Such a Dockerfile command looks like RUN curl -o xx.zip [URL] . The fileserver hostname gets resolved through the /etc/hosts file and it resolves to the host's public IP which runs the Jenkins CI server. The docker-compose config for the Jenkins container also includes the extra_hosts parameter pointing the fileserver to the host's public IP. The problem is that building the docker container with Jenkins running in its own container fails with a plain Unknown host: fileserver message. If I enter the Jenkins container via docker exec -it <id> , I can execute the same curl command and it resolves the host, but if I try to run docker build . there, which tries to run the same curl command, it fails to resolve the host. Our host runs RHEL and I failed to reproduce the problem on my desktop Arch Linux, so I suspect it's some RedHat-specific issue (again).
|
network-programming, docker, redhat, rhel, docker-compose
| 2
| 2,743
| 2
|
https://stackoverflow.com/questions/32352540/hostname-resolution-fails-when-running-docker-build-from-a-docker-container
|
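Editor's note: build containers do not inherit the daemon host's /etc/hosts, which explains the symptom. A sketch of two possible fixes, with hedges: docker build gained an --add-host flag only in later releases (roughly 17.04+), so it may not exist on the RHEL docker version in question; the IPs below are placeholders. The daemon-level --dns option has been available much longer.

    # newer Docker: inject the hosts entry into build containers directly
    docker build --add-host fileserver:203.0.113.10 -t myimage .
    # older Docker on RHEL: give the daemon a DNS server that knows the name,
    # e.g. in /etc/sysconfig/docker
    OPTIONS='--dns 203.0.113.1'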
30,550,550
|
Where does a C program write output after I did "rm" on the output file?
|
I ran a rather nasty C program on a computing cluster running Redhat that got into an infinite loop, each iteration of which printed a line of output. When I realized it was quickly creating a file that would eventually use up all the disk space, I ran "rm" on that output file before I killed the program. Unfortunately, per "df -h", space continued to get used up on the drive before I finally killed the program. I now can't find the file that was written, so I'm unable to delete it. Where would such a file be written to?
|
Where does a C program write output after I did "rm" on the output file? I ran a rather nasty C program on a computing cluster running Redhat that got into an infinite loop, each iteration of which printed a line of output. When I realized it was quickly creating a file that would eventually use up all the disk space, I ran "rm" on that output file before I killed the program. Unfortunately, per "df -h", space continued to get used up on the drive before I finally killed the program. I now can't find the file that was written, so I'm unable to delete it. Where would such a file be written to?
|
c, linux, redhat
| 2
| 90
| 3
|
https://stackoverflow.com/questions/30550550/where-does-a-c-program-write-output-after-i-did-rm-on-the-output-file
|
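Editor's note: the space is held by the still-open (now unlinked) file and is only released when the last descriptor closes. A sketch of locating and truncating it without killing the process, using standard tools; <pid> and <fd> are placeholders.

    # list processes that hold open-but-deleted files
    lsof +L1
    # inspect the offender's descriptors (deleted entries are marked)
    ls -l /proc/<pid>/fd | grep deleted
    # reclaim the space by truncating through the descriptor
    : > /proc/<pid>/fd/<fd>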
28,021,063
|
Copy Folders and its files with specific date in Linux
|
Is there a way to copy folders and their contents with a specific modification date in Linux? For example, I have folders A and B containing files with modified dates from 2015-01-01 to 2015-01-18; is it possible to copy folders A and B containing only the files modified from 2015-01-01 to 2015-01-08? I did some research and came up with this: find ./ -type d -exec mkdir -p {} $target{} \; find $source -mtime +30 -exec cp -p "{}" $target \; but after executing the 2nd line, the files are copied to the root directory of the target location, not in the same structure as the source. For example, I have this source directory to be copied to the target: /storage/subdir1/* (modified date range - 2015-01-01 to 2015-01-18) /storage/subdir2/* (modified date range - 2015-01-01 to 2015-01-18) /storage/subdir3/* (modified date range - 2015-01-01 to 2015-01-18) /storage/subdir4/* (modified date range - 2015-01-01 to 2015-01-18) Would it be possible that in the target directory (/targetdir/) all subdirectories are created automatically, containing only the files with modified dates 2015-01-01 to 2015-01-08? John
|
Copy Folders and its files with specific date in Linux Is there a way to copy folders and their contents with a specific modification date in Linux? For example, I have folders A and B containing files with modified dates from 2015-01-01 to 2015-01-18; is it possible to copy folders A and B containing only the files modified from 2015-01-01 to 2015-01-08? I did some research and came up with this: find ./ -type d -exec mkdir -p {} $target{} \; find $source -mtime +30 -exec cp -p "{}" $target \; but after executing the 2nd line, the files are copied to the root directory of the target location, not in the same structure as the source. For example, I have this source directory to be copied to the target: /storage/subdir1/* (modified date range - 2015-01-01 to 2015-01-18) /storage/subdir2/* (modified date range - 2015-01-01 to 2015-01-18) /storage/subdir3/* (modified date range - 2015-01-01 to 2015-01-18) /storage/subdir4/* (modified date range - 2015-01-01 to 2015-01-18) Would it be possible that in the target directory (/targetdir/) all subdirectories are created automatically, containing only the files with modified dates 2015-01-01 to 2015-01-08? John
|
linux, bash, redhat
| 2
| 13,865
| 2
|
https://stackoverflow.com/questions/28021063/copy-folders-and-its-files-with-specific-date-in-linux
|
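Editor's note: a minimal sketch using GNU find's -newermt test and cp --parents to preserve the directory structure (both available on RHEL). The upper bound is exclusive, so '! -newermt 2015-01-09' keeps files modified through 2015-01-08.

    src=/storage
    dst=/targetdir
    cd "$src"
    find . -type f -newermt '2015-01-01' ! -newermt '2015-01-09' \
        -exec cp --parents -p {} "$dst" \;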
27,935,594
|
update Anaconda packages in air-gapped environment
|
For regulatory reasons my company has deployed an air-gapped Red Hat environment with, among others, Python Anaconda and R installed. How do I go about updating Anaconda packages in such an environment? I can move files from my own machine to the environment via FTP but cannot access the internet directly from the air-gapped environment. I usually update my anaconda packages with something like this: conda update scipy
|
update Anaconda packages in air-gapped environment For regulatory reasons my company has deployed an air-gapped Red Hat environment with, among others, Python Anaconda and R installed. How do I go about updating Anaconda packages in such an environment? I can move files from my own machine to the environment via FTP but cannot access the internet directly from the air-gapped environment. I usually update my anaconda packages with something like this: conda update scipy
|
python-2.7, updates, redhat, anaconda
| 2
| 1,582
| 1
|
https://stackoverflow.com/questions/27935594/update-anaconda-packages-in-air-gapped-environment
|
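Editor's note: a hedged sketch of the usual offline workflow for that era of conda: fetch the package tarball on an online machine, move it across by FTP, and feed the file to conda directly. The filename and repo URL below are illustrative; conda install --offline is a real flag. Dependencies must be transferred the same way if conda asks for them.

    # on the online machine (filename/URL are examples)
    curl -O https://repo.continuum.io/pkgs/free/linux-64/scipy-0.15.1-np19py27_0.tar.bz2
    # transfer via FTP, then on the air-gapped machine:
    conda install --offline scipy-0.15.1-np19py27_0.tar.bz2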
26,288,057
|
389-ds ldap - remove user from group
|
We have a 389-ds directory with many users in a particular group. Does anyone know how I can delete a user from a group called ' clients ' using the ldapmodify or ldapdelete command line tools? Thank You
|
389-ds ldap - remove user from group We have a 389-ds directory with many users in a particular group. Does anyone know how I can delete a user from a group called ' clients ' using the ldapmodify or ldapdelete command line tools? Thank You
|
ldap, redhat, ldap-query
| 2
| 7,015
| 1
|
https://stackoverflow.com/questions/26288057/389-ds-ldap-remove-user-from-group
|
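Editor's note: a sketch of the ldapmodify LDIF for removing one member. The DNs are placeholders, and the attribute to delete depends on the group's objectClass (uniqueMember for groupOfUniqueNames, member for groupOfNames, memberUid for posixGroup).

    ldapmodify -x -D "cn=Directory Manager" -W <<'EOF'
    dn: cn=clients,ou=Groups,dc=example,dc=com
    changetype: modify
    delete: uniqueMember
    uniqueMember: uid=jdoe,ou=People,dc=example,dc=com
    EOF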
25,324,308
|
java program reports wrong timezone on linux(shows IST for Europe/Dublin)
|
I get a wrong time zone value (IST) from the following code. It's from a bug report. import java.util.*; import java.text.*; class simpleTest { public static void main(String args[]) { System.out.println("Simple test Josh "); Date now = new Date(); DateFormat df = DateFormat.getDateInstance(); Calendar cal = Calendar.getInstance(); System.out.println("\n TIME ZONE :"+ cal.getTimeZone().getDisplayName()); long nowLong = now.getTime(); String s = now.toString(); System.out.println("Value of milliseconds since Epoch is " + nowLong); System.out.println("Value of s in readable format is " + s); } } With Dublin, the zone is wrong. It shows IST: $ java -Duser.timezone=Europe/Dublin simpleTest Simple test Josh TIME ZONE :Greenwich Mean Time Value of milliseconds since Epoch is 1408095007238 Value of s in readable format is Fri Aug 15 10:30:07 IST 2014 This one is okay: $ java -Duser.timezone=Europe/Helsinki simpleTest Simple test Josh TIME ZONE :Eastern European Time Value of milliseconds since Epoch is 1408095025866 Value of s in readable format is Fri Aug 15 12:30:25 EEST 2014 Where does the value IST come from? I have checked OS files like /etc/localtime bash-3.2# cd /etc bash-3.2# ls -lrt localtime lrwxrwxrwx 1 root root 33 Nov 16 2010 localtime -> /usr/share/zoneinfo/Europe/Dublin /etc/sysconfig/clock bash-3.2# cd /etc/sysconfig/ bash-3.2# cat clock # The ZONE parameter is only evaluated by system-config-date. # The timezone of the system is defined by the contents of /etc/localtime. ZONE="Europe/Dublin" UTC=true ARC=false bash-3.2# pwd
|
java program reports wrong timezone on linux(shows IST for Europe/Dublin) I get a wrong time zone value (IST) from the following code. It's from a bug report. import java.util.*; import java.text.*; class simpleTest { public static void main(String args[]) { System.out.println("Simple test Josh "); Date now = new Date(); DateFormat df = DateFormat.getDateInstance(); Calendar cal = Calendar.getInstance(); System.out.println("\n TIME ZONE :"+ cal.getTimeZone().getDisplayName()); long nowLong = now.getTime(); String s = now.toString(); System.out.println("Value of milliseconds since Epoch is " + nowLong); System.out.println("Value of s in readable format is " + s); } } With Dublin, the zone is wrong. It shows IST: $ java -Duser.timezone=Europe/Dublin simpleTest Simple test Josh TIME ZONE :Greenwich Mean Time Value of milliseconds since Epoch is 1408095007238 Value of s in readable format is Fri Aug 15 10:30:07 IST 2014 This one is okay: $ java -Duser.timezone=Europe/Helsinki simpleTest Simple test Josh TIME ZONE :Eastern European Time Value of milliseconds since Epoch is 1408095025866 Value of s in readable format is Fri Aug 15 12:30:25 EEST 2014 Where does the value IST come from? I have checked OS files like /etc/localtime bash-3.2# cd /etc bash-3.2# ls -lrt localtime lrwxrwxrwx 1 root root 33 Nov 16 2010 localtime -> /usr/share/zoneinfo/Europe/Dublin /etc/sysconfig/clock bash-3.2# cd /etc/sysconfig/ bash-3.2# cat clock # The ZONE parameter is only evaluated by system-config-date. # The timezone of the system is defined by the contents of /etc/localtime. ZONE="Europe/Dublin" UTC=true ARC=false bash-3.2# pwd
|
java, linux, redhat
| 2
| 1,060
| 1
|
https://stackoverflow.com/questions/25324308/java-program-reports-wrong-timezone-on-linuxshows-ist-for-europe-dublin
|
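Editor's note: for Europe/Dublin the abbreviation IST (Irish Standard Time) is what the tz database uses during the summer months, so an August timestamp printing IST is correct behavior rather than a bug. A quick check with GNU date:

    TZ=Europe/Dublin date -d '2014-08-15 10:30' +'%Z %z'   # IST +0100 (summer)
    TZ=Europe/Dublin date -d '2014-01-15 10:30' +'%Z %z'   # GMT +0000 (winter)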
9,446,275
|
Best approach to integrate netty with openshift
|
In fact, I'm trying to see which would be the best approach to achieve play framework native support on openshift. Play has its own HTTP server developed with netty. Right now you can deploy a play application to openshift, but you have to deploy it as a war, in which case play uses a Servlet Container wrapper. Being able to deploy it as a netty application would allow us to use some advanced features, like asynchronous requests. Openshift uses jboss, so this question would also involve the recommended approach to deploy a netty application on a jboss server, using netty instead of the servlet container provided by jboss. Here is the request for providing play framework native support on openshift There's more info there, and if you like it you can also add your vote ;-)
|
Best approach to integrate netty with openshift In fact, I'm trying to see which would be the best approach to achieve play framework native support on openshift. Play has its own HTTP server developed with netty. Right now you can deploy a play application to openshift, but you have to deploy it as a war, in which case play uses a Servlet Container wrapper. Being able to deploy it as a netty application would allow us to use some advanced features, like asynchronous requests. Openshift uses jboss, so this question would also involve the recommended approach to deploy a netty application on a jboss server, using netty instead of the servlet container provided by jboss. Here is the request for providing play framework native support on openshift There's more info there, and if you like it you can also add your vote ;-)
|
jboss, playframework, redhat, netty, openshift
| 2
| 1,244
| 1
|
https://stackoverflow.com/questions/9446275/best-approach-to-integrate-netty-with-openshift
|
8,539,510
|
Advance Programming in the Unix Environment: Get Configuration Limits Figure 2.12 "Build C Program to print all supported configuration limits"
|
The author claims that his awk script will print out all the limits for a POSIX.1 and XSI compliant system. I am using Red Hat Enterprise Linux Server release 6.0 (Santiago). When I run his awk script it does not seem to be printing out the #ifdef portion of the C program. My thoughts are that sysconf.sym do not exist on this distribution, therefore the while loops never run. Could someone please confirm that? If this is the case what changes would I need to make to the awk script to get it to print out the #ifdef portion of the code? The awk script is: # Run with awk -f <awk_script> BEGIN { printf("#include \"apue.h\"\n") printf("#include <errno.h>\n") printf("#include <limits.h>\n") printf("#include <stdio.h>\n") printf("\n") printf("int log_to_stderr = 0;\n") printf("static void pr_sysconf(char *, int);\n") printf("static void pr_pathconf(char *, char *, int);\n") printf("\n") printf("int\n") printf("main(int argc, char *argv[])\n") printf("{\n") printf(" if (argc != 2)\n") printf(" err_quit(\"usage: a.out <dirname>\");\n\n") FS="\t+" while (getline <"sysconf.sym" > 0) { printf("#ifdef %s\n", $1) printf(" printf(\"%s defined to be %%d\\n\", %s+0);\n", $1, $1) printf("#else\n") printf(" printf(\"no symbol for %s\\n\");\n", $1) printf("#endif\n") printf("#ifdef %s\n", $2) printf(" pr_sysconf(\"%s =\", %s);\n", $1, $2) printf("#else\n") printf(" printf(\"no symbol for %s\\n\");\n", $2) printf("#endif\n") } close("sysconf.sym") while (getline <"pathconf.sym" > 0) { printf("#ifdef %s\n", $1) printf(" printf(\"%s defined to be %%d\\n\", %s+0);\n", $1, $1) printf("#else\n") printf(" printf(\"no symbol for %s\\n\");\n", $1) printf("#endif\n") printf("#ifdef %s\n", $2) printf(" pr_pathconf(\"%s =\", argv[1], %s);\n", $1, $2) printf("#else\n") printf(" printf(\"no symbol for %s\\n\");\n", $2) printf("#endif\n") } close("pathconf.sym") exit } END { printf(" exit(0);\n") printf("}\n\n") printf("static void\n") printf("pr_sysconf(char *mesg, int name)\n") printf("{\n") printf(" long val;\n\n") printf(" fputs(mesg, stdout);\n") printf(" errno = 0;\n") printf(" if ((val = sysconf(name)) < 0) {\n") printf(" if (errno != 0) {\n") printf(" if (errno == EINVAL)\n") printf(" fputs(\" (not supported)\\n\", stdout);\n") printf(" else\n") printf(" err_sys(\"sysconf error\");\n") printf(" } else {\n") printf(" fputs(\" (no limit)\\n\", stdout);\n") printf(" }\n") printf(" } else {\n") printf(" printf(\" %%ld\\n\", val);\n") printf(" }\n") printf("}\n\n") printf("static void\n") printf("pr_pathconf(char *mesg, char *path, int name)\n") printf("{\n") printf(" long val;\n") printf("\n") printf(" fputs(mesg, stdout);\n") printf(" errno = 0;\n") printf(" if ((val = pathconf(path, name)) < 0) {\n") printf(" if (errno != 0) {\n") printf(" if (errno == EINVAL)\n") printf(" fputs(\" (not supported)\\n\", stdout);\n") printf(" else\n") printf(" err_sys(\"pathconf error, path = %%s\", path);\n") printf(" } else {\n") printf(" fputs(\" (no limit)\\n\", stdout);\n") printf(" }\n") printf(" } else {\n") printf(" printf(\" %%ld\\n\", val);\n") printf(" }\n") printf("}\n") } Update If you would like the apue.h header so you can compile the C program that can be found at. apue.h
|
Advance Programming in the Unix Environment: Get Configuration Limits Figure 2.12 "Build C Program to print all supported configuration limits" The author claims that his awk script will print out all the limits for a POSIX.1 and XSI compliant system. I am using Red Hat Enterprise Linux Server release 6.0 (Santiago). When I run his awk script it does not seem to be printing out the #ifdef portion of the C program. My thoughts are that sysconf.sym do not exist on this distribution, therefore the while loops never run. Could someone please confirm that? If this is the case what changes would I need to make to the awk script to get it to print out the #ifdef portion of the code? The awk script is: # Run with awk -f <awk_script> BEGIN { printf("#include \"apue.h\"\n") printf("#include <errno.h>\n") printf("#include <limits.h>\n") printf("#include <stdio.h>\n") printf("\n") printf("int log_to_stderr = 0;\n") printf("static void pr_sysconf(char *, int);\n") printf("static void pr_pathconf(char *, char *, int);\n") printf("\n") printf("int\n") printf("main(int argc, char *argv[])\n") printf("{\n") printf(" if (argc != 2)\n") printf(" err_quit(\"usage: a.out <dirname>\");\n\n") FS="\t+" while (getline <"sysconf.sym" > 0) { printf("#ifdef %s\n", $1) printf(" printf(\"%s defined to be %%d\\n\", %s+0);\n", $1, $1) printf("#else\n") printf(" printf(\"no symbol for %s\\n\");\n", $1) printf("#endif\n") printf("#ifdef %s\n", $2) printf(" pr_sysconf(\"%s =\", %s);\n", $1, $2) printf("#else\n") printf(" printf(\"no symbol for %s\\n\");\n", $2) printf("#endif\n") } close("sysconf.sym") while (getline <"pathconf.sym" > 0) { printf("#ifdef %s\n", $1) printf(" printf(\"%s defined to be %%d\\n\", %s+0);\n", $1, $1) printf("#else\n") printf(" printf(\"no symbol for %s\\n\");\n", $1) printf("#endif\n") printf("#ifdef %s\n", $2) printf(" pr_pathconf(\"%s =\", argv[1], %s);\n", $1, $2) printf("#else\n") printf(" printf(\"no symbol for %s\\n\");\n", $2) printf("#endif\n") } close("pathconf.sym") exit } END { printf(" exit(0);\n") printf("}\n\n") printf("static void\n") printf("pr_sysconf(char *mesg, int name)\n") printf("{\n") printf(" long val;\n\n") printf(" fputs(mesg, stdout);\n") printf(" errno = 0;\n") printf(" if ((val = sysconf(name)) < 0) {\n") printf(" if (errno != 0) {\n") printf(" if (errno == EINVAL)\n") printf(" fputs(\" (not supported)\\n\", stdout);\n") printf(" else\n") printf(" err_sys(\"sysconf error\");\n") printf(" } else {\n") printf(" fputs(\" (no limit)\\n\", stdout);\n") printf(" }\n") printf(" } else {\n") printf(" printf(\" %%ld\\n\", val);\n") printf(" }\n") printf("}\n\n") printf("static void\n") printf("pr_pathconf(char *mesg, char *path, int name)\n") printf("{\n") printf(" long val;\n") printf("\n") printf(" fputs(mesg, stdout);\n") printf(" errno = 0;\n") printf(" if ((val = pathconf(path, name)) < 0) {\n") printf(" if (errno != 0) {\n") printf(" if (errno == EINVAL)\n") printf(" fputs(\" (not supported)\\n\", stdout);\n") printf(" else\n") printf(" err_sys(\"pathconf error, path = %%s\", path);\n") printf(" } else {\n") printf(" fputs(\" (no limit)\\n\", stdout);\n") printf(" }\n") printf(" } else {\n") printf(" printf(\" %%ld\\n\", val);\n") printf(" }\n") printf("}\n") } Update If you would like the apue.h header so you can compile the C program that can be found at. apue.h
|
c, linux, unix, awk, redhat
| 2
| 600
| 3
|
https://stackoverflow.com/questions/8539510/advance-programming-in-the-unix-environment-get-configuration-limits-figure-2-1
|
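Editor's note: the script reads its symbol lists from sysconf.sym and pathconf.sym in the current directory, and those files ship with the book's source code rather than with the OS; if they are absent the getline loops read nothing and the #ifdef section is never emitted. A minimal sketch of creating tab-separated stand-ins (the two columns match the script's FS="\t+") and running the generator; conf.awk is a hypothetical name for the script quoted above.

    printf 'ARG_MAX\t_SC_ARG_MAX\nOPEN_MAX\t_SC_OPEN_MAX\n' > sysconf.sym
    printf 'NAME_MAX\t_PC_NAME_MAX\nPATH_MAX\t_PC_PATH_MAX\n' > pathconf.sym
    awk -f conf.awk > conf.c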
4,959,872
|
Py_InitModule4 with Djapian/Xapian
|
I am trying to install Djapian on RedHat5 / Python2.6. I have already installed it successfully on my OSX 10.6 machine. I have built and compiled Xapian and Djapian without issue for Py2.6. I then install the Python Bindings for Xapian and it works fine; however, if I open the Python interpreter and type 'import xapian', or try including djapian in my Django app, I get the following error: /usr/lib64/python2.6/site-packages/_xapian.so: undefined symbol: Py_InitModule4 In searching, I have seen this issue for several modules, not just Xapian, but I can't seem to find a good solution. I do have python-devel installed. I am guessing the issue is on the Python side and not Xapian.
|
Py_InitModule4 with Djapian/Xapian I am trying to install Djapian on RedHat5 / Python2.6. I have already installed it successfully on my OSX 10.6 machine. I have built and compiled Xapian and Djapian without issue for Py2.6. I then install the Python Bindings for Xapian and it works fine; however, if I open the Python interpreter and type 'import xapian', or try including djapian in my Django app, I get the following error: /usr/lib64/python2.6/site-packages/_xapian.so: undefined symbol: Py_InitModule4 In searching, I have seen this issue for several modules, not just Xapian, but I can't seem to find a good solution. I do have python-devel installed. I am guessing the issue is on the Python side and not Xapian.
|
python, redhat, xapian
| 2
| 3,177
| 1
|
https://stackoverflow.com/questions/4959872/py-initmodule4-with-djapian-xapian
|
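Editor's note: on 64-bit builds of Python 2 the interpreter exports Py_InitModule4_64 instead of Py_InitModule4, so this error typically means the extension was compiled against a Python of a different word size (or a different UCS setting). A sketch of checking both sides with nm; the library paths are examples for a typical RHEL layout.

    # is the interpreter a 64-bit build?
    python -c 'import sys; print(sys.maxsize > 2**32)'
    # which variant does libpython export?
    nm -D /usr/lib64/libpython2.6.so.1.0 | grep Py_InitModule4
    # which variant does the extension expect?
    nm -D /usr/lib64/python2.6/site-packages/_xapian.so | grep Py_InitModule4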
2,379,634
|
binary read/write runtime failure
|
I've looked at binary reading and writing objects in C++ but am having some problems. It "works" but in addition I get a huge output of errors/"info". What I've done is Person p2; std::fstream file; file.open( filename.c_str(), std::ios::in | std::ios::out | std::ios::binary ); file.seekg(0, std::ios::beg ); file.read ( (char*)&p2, sizeof(p2)); file.close(); std::cout << "Name: " << p2.name; Person is a simple struct containing string name and int age . When I run the program it outputs "Name: Bob" since I have already made a program to write to a file (so the object is already in filename). IN ADDITION to outputting the name it also outputs: * glibc detected * program: double free or corruption (fasttop): *** Backtrace: ... Memory map: ... Abort
|
binary read/write runtime failure I've looked at binary reading and writing objects in C++ but am having some problems. It "works" but in addition I get a huge output of errors/"info". What I've done is Person p2; std::fstream file; file.open( filename.c_str(), std::ios::in | std::ios::out | std::ios::binary ); file.seekg(0, std::ios::beg ); file.read ( (char*)&p2, sizeof(p2)); file.close(); std::cout << "Name: " << p2.name; Person is a simple struct containing string name and int age . When I run the program it outputs "Name: Bob" since I have already made a program to write to a file (so the object is already in filename). IN ADDITION to outputting the name it also outputs: * glibc detected * program: double free or corruption (fasttop): *** Backtrace: ... Memory map: ... Abort
|
c++, gcc, file-io, binary, redhat
| 2
| 1,008
| 5
|
https://stackoverflow.com/questions/2379634/binary-read-write-runtime-failure
|
2,176,227
|
JBoss training and certification from Red Hat?
|
Has anyone taken the JBoss Application Administration course or any other courses from Red Hat? How much emphasis is there on the Enterprise version over the Community edition? How about the JBoss Certified Applications Administrator (JBCAA) test? Is it easy to pass without the course?
|
JBoss training and certification from Red Hat? Has anyone taken the JBoss Application Administration course or any other courses from Red Hat? How much emphasis is there on the Enterprise version over the Community edition? How about the JBoss Certified Applications Administrator (JBCAA) test? Is it easy to pass without the course?
|
jboss, certificate, redhat
| 2
| 3,511
| 2
|
https://stackoverflow.com/questions/2176227/jboss-training-and-certification-from-red-hat
|
77,000,590
|
Referencing std::pow() requires GLIBC2.29
|
I'm building on Ubuntu 20.04 and my program executed on RedHat 8 just fine until I included <cmath> and used std::pow(double, double) . Now I get the following error on RedHat 8: /lib64/libm.so.6: version `GLIBC_2.29' not found (required by /MyOwnLib.so) What is so special about std::pow that it requires GLIBC 2.29? This function is very old. Can I somehow force the compiler on Ubuntu to "link" an older version?
|
Referencing std::pow() requires GLIBC2.29 I'm building on Ubuntu 20.04 and my program executed on RedHat 8 just fine until I included <cmath> and used std::pow(double, double) . Now I get the following error on RedHat 8: /lib64/libm.so.6: version `GLIBC_2.29' not found (required by /MyOwnLib.so) What is so special about std::pow that it requires GLIBC 2.29? This function is very old. Can I somehow force the compiler on Ubuntu to "link" an older version?
|
c++, redhat, cmath
| 2
| 2,200
| 3
|
https://stackoverflow.com/questions/77000590/referencing-stdpow-requires-glibc2-29
|
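Editor's note: glibc 2.28/2.29 introduced new correctly-rounded math implementations, and pow/exp/log gained fresh versioned symbols, so anything linked against Ubuntu 20.04's libm records pow@GLIBC_2.29. A sketch of confirming which symbols carry the requirement; the usual cure is to build on (or in a container of) the oldest glibc you must support.

    # list the versioned symbol references your library makes
    objdump -T MyOwnLib.so | grep GLIBC_2.29
    # expected output, roughly: pow, exp, log tagged with GLIBC_2.29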
74,290,238
|
VScode golang debug error __debug_bin: Permission Denied
|
I am using a container [URL] with SSH as a Golang development environment. When I have to debug a unit test I am receiving a permission error: I've done some searches and found tips to add *__debug_bin to .gitignore and check the SELinux permissions. SELinux is not present in this image. The /home directory permissions in the container are 776 and I am logged in via SSH as the root user. What else can I try?
|
VScode golang debug error __debug_bin: Permission Denied I am using a container [URL] with SSH as a Golang development environment. When I have to debug a unit test I am receiving a permission error: I've done some searches and found tips to add *__debug_bin to .gitignore and check the SELinux permissions. SELinux is not present in this image. The /home directory permissions in the container are 776 and I am logged in via SSH as the root user. What else can I try?
|
go, visual-studio-code, redhat, vscode-debugger
| 2
| 835
| 1
|
https://stackoverflow.com/questions/74290238/vscode-golang-debug-error-debug-bin-permission-denied
|
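Editor's note: Delve writes the __debug_bin binary into the package directory and then executes it, so a noexec mount on that filesystem produces exactly this error. A sketch of checking for that, plus a hedged workaround: newer versions of the VS Code Go debug configuration accept an "output" attribute to relocate the binary (verify against your extension version).

    # is the workspace on a noexec filesystem?
    findmnt -T /root -o TARGET,OPTIONS
    # workaround sketch for launch.json (attribute per vscode-go docs):
    #   "output": "/tmp/__debug_bin"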
72,168,227
|
VSCode: Custom YAML formatting rules
|
Is there a way to configure the formatting rules of the Red Hat YAML formatter in VSCode? More specifically, I would like to configure the formatting not to indent the lists like this:
list:
  - 1
  - 2
  - 3
Instead, I would like this:
list:
- 1
- 2
- 3
Alternatively, are there any other worthwhile extensions which support this? Thanks.
|
VSCode: Custom YAML formatting rules Is there a way to configure the formatting rules of the Red Hat YAML formatter in VSCode? More specifically, I would like to configure the formatting not to indent the lists like this:
list:
  - 1
  - 2
  - 3
Instead, I would like this:
list:
- 1
- 2
- 3
Alternatively, are there any other worthwhile extensions which support this? Thanks.
|
visual-studio-code, yaml, redhat
| 2
| 3,645
| 1
|
https://stackoverflow.com/questions/72168227/vscode-custom-yaml-formatting-rules
|
63,469,206
|
RedHat - how RPM may automatically select correct JVM version?
|
I am distributing my own RPM package, which consists of a jar file. I am targeting RHEL 8. By default, Java 8 is installed on RHEL 8. My jar requires Java 11. In order to bring it in and install it "automatically" in case it is missing, inside my RPM "spec" I added a dependency on Java 11 like this: Requires: java-11-openjdk-headless ...and indeed the Java 11 package is downloaded and installed together with mine. In order to execute my jar I run the following command: java -jar <my.jar> However, it seems like Java 8 is the one being selected and my application fails to run properly. If I use "alternatives" and select Java 11, everything works fine. But I want to provide my customers a "self-contained" RPM package, without the need to perform additional manual steps. I don't want them to select the correct Java version; I want this to happen somehow by itself. Is it possible to automatically select the correct Java version when my jar is executed?
|
RedHat - how RPM may automatically select correct JVM version? I am distributing my own RPM package, which consists of a jar file. I am targeting RHEL 8. By default, Java 8 is installed on RHEL 8. My jar requires Java 11. In order to bring it in and install it "automatically" in case it is missing, inside my RPM "spec" I added a dependency on Java 11 like this: Requires: java-11-openjdk-headless ...and indeed the Java 11 package is downloaded and installed together with mine. In order to execute my jar I run the following command: java -jar <my.jar> However, it seems like Java 8 is the one being selected and my application fails to run properly. If I use "alternatives" and select Java 11, everything works fine. But I want to provide my customers a "self-contained" RPM package, without the need to perform additional manual steps. I don't want them to select the correct Java version; I want this to happen somehow by itself. Is it possible to automatically select the correct Java version when my jar is executed?
|
java, redhat, rpm
| 2
| 407
| 1
|
https://stackoverflow.com/questions/63469206/redhat-how-rpm-may-automatically-select-correct-jvm-version
|
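Editor's note: /usr/bin/java follows the system-wide alternatives link, which Requires: cannot change. A common pattern is to ship a launcher that calls the Java 11 JVM by its own path; the sketch below assumes the family symlink /usr/lib/jvm/jre-11 that the RHEL 8 OpenJDK packages maintain (verify the exact link on your target), and the /opt/myapp path is a placeholder.

    #!/bin/sh
    # launcher installed by the RPM, e.g. as /usr/bin/myapp
    JAVA=/usr/lib/jvm/jre-11/bin/java
    exec "$JAVA" -jar /opt/myapp/my.jar "$@"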
63,397,665
|
How to install nodejs on redhat without internet and without root permission?
|
What is the best way to install Node.js on Linux without internet access or root permissions? So far I have just downloaded the source tar.gz and tar.xz files.
|
How to install nodejs on redhat without internet and without root permission? What is the best way to install Node.js on Linux without internet access or root permissions? So far I have just downloaded the source tar.gz and tar.xz files.
|
node.js, linux, redhat
| 2
| 9,021
| 2
|
https://stackoverflow.com/questions/63397665/how-to-install-nodejs-on-redhat-without-internet-and-without-root-permission
|
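Editor's note: a sketch of the usual no-root, no-network route: take the prebuilt Linux binary tarball (not the source), carry it over, and unpack it under $HOME. The version number is an example; any node-vX.Y.Z-linux-x64 tarball from nodejs.org/dist follows the same layout.

    # on a connected machine
    curl -O https://nodejs.org/dist/v12.18.3/node-v12.18.3-linux-x64.tar.xz
    # on the offline machine, as an ordinary user
    mkdir -p ~/opt
    tar -xJf node-v12.18.3-linux-x64.tar.xz -C ~/opt
    echo 'export PATH=$HOME/opt/node-v12.18.3-linux-x64/bin:$PATH' >> ~/.bashrc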
62,379,413
|
chmod recursive, but exclude starting directory
|
From what I can tell, I've run into a limitation of chmod - hoping to pick the more experienced brains here before resorting to writing some find scripts. I would like to chmod -R all files & directories within a folder, but leave the folder itself alone. I need to avoid the permissions of the starting directory changing at all during this process, so a simple chmod -R followed by a non-recursive chmod to reset the permissions on the starting directory isn't an option. Any ideas?
|
chmod recursive, but exclude starting directory From what I can tell, I've run into a limitation of chmod - hoping to pick the more experienced brains here before resorting to writing some find scripts. I would like to chmod -R all files & directories within a folder, but leave the folder itself alone. I need to avoid the permissions of the starting directory changing at all during this process, so a simple chmod -R followed by a non-recursive chmod to reset the permissions on the starting directory isn't an option. Any ideas?
|
linux, debian, redhat, file-permissions, chmod
| 2
| 1,264
| 1
|
https://stackoverflow.com/questions/62379413/chmod-recursive-but-exclude-starting-directory
|
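Editor's note: a minimal sketch using find's -mindepth so the starting directory itself is never touched; the paths and modes shown are placeholders.

    # everything below the folder, but not the folder itself
    find /path/to/dir -mindepth 1 -exec chmod g+w {} +
    # or with different modes for directories and files
    find /path/to/dir -mindepth 1 -type d -exec chmod 775 {} +
    find /path/to/dir -mindepth 1 -type f -exec chmod 664 {} +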
61,520,591
|
In Linux(RedHat) , C function malloc_stats() shows different values compared to /proc/<pid>/stat resident memory size
|
e.g. for a process running on Redhat Linux: as per /proc/{pid}/stat's resident pages * page size => 30 GB; as per malloc_stats() => 2.5 GB. Any idea why this happens? Arena 0: system bytes = 465162240 in use bytes = 465037200 Arena 1: system bytes = 1003520 in use bytes = 980656 Arena 2: system bytes = 8065024 in use bytes = 7771728 Arena 3: system bytes = 2278395904 in use bytes = 2276584320 Arena 4: system bytes = 1482752 in use bytes = 1236112 Arena 5: system bytes = 1482752 in use bytes = 1235440 Arena 6: system bytes = 1482752 in use bytes = 1240512 Total (incl. mmap): system bytes = 2782,699,520 in use bytes = 2779710544 max mmap regions = 9 max mmap bytes = 25624576
|
In Linux(RedHat) , C function malloc_stats() shows different values compared to /proc/<pid>/stat resident memory size e.g. for a process running on Redhat Linux: as per /proc/{pid}/stat's resident pages * page size => 30 GB; as per malloc_stats() => 2.5 GB. Any idea why this happens? Arena 0: system bytes = 465162240 in use bytes = 465037200 Arena 1: system bytes = 1003520 in use bytes = 980656 Arena 2: system bytes = 8065024 in use bytes = 7771728 Arena 3: system bytes = 2278395904 in use bytes = 2276584320 Arena 4: system bytes = 1482752 in use bytes = 1236112 Arena 5: system bytes = 1482752 in use bytes = 1235440 Arena 6: system bytes = 1482752 in use bytes = 1240512 Total (incl. mmap): system bytes = 2782,699,520 in use bytes = 2779710544 max mmap regions = 9 max mmap bytes = 25624576
|
c, linux, malloc, redhat, glibc
| 2
| 225
| 1
|
https://stackoverflow.com/questions/61520591/in-linuxredhat-c-function-malloc-stats-shows-different-values-compared-to
|
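Editor's note: malloc_stats() only reports memory managed by glibc's malloc; RSS also counts mmap'd files, shared libraries, thread stacks, and anything obtained through other allocators or raw mmap, which is the usual source of such a gap. A sketch of totaling and attributing resident memory from the kernel side; <pid> is a placeholder.

    # kernel's resident total, in kB (should approximate the 30 GB figure)
    awk '/^Rss:/ {sum += $2} END {print sum " kB"}' /proc/<pid>/smaps
    # scan the per-mapping entries to find the non-heap consumers
    less /proc/<pid>/smaps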
50,936,193
|
How to change RPM's destination when using rpmbuild
|
I am building an RPM with rpmbuild from a spec file in a directory foo but whenever I run rpmbuild it creates the rpm in my home directory at ~/rpmbuild/RPMS . How can I change the destination of the RPM and get the whole rpmbuild directory to be in foo ? I tried setting Buildroot to foo but that just changed where the build built temporary files, not the final rpm. Any help would be appreciated. Thanks!
|
How to change RPM's destination when using rpmbuild I am building an RPM with rpmbuild from a spec file in a directory foo but whenever I run rpmbuild it creates the rpm in my home directory at ~/rpmbuild/RPMS . How can I change the destination of the RPM and get the whole rpmbuild directory to be in foo ? I tried setting Buildroot to foo but that just changed where the build built temporary files, not the final rpm. Any help would be appreciated. Thanks!
|
centos, redhat, rpm, rpmbuild, rpm-spec
| 2
| 1,739
| 2
|
https://stackoverflow.com/questions/50936193/how-to-change-rpms-destination-when-using-rpmbuild
|
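Editor's note: _topdir is the macro that controls where rpmbuild keeps its whole tree (RPMS, SRPMS, BUILD, SOURCES, SPECS), and it can be overridden per invocation. A sketch, assuming foo holds the spec; rpmbuild will expect the subdirectories to exist under the new topdir.

    mkdir -p foo/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
    # one-off override
    rpmbuild --define "_topdir $(pwd)/foo" -ba foo/SPECS/foo.spec
    # or persistently, via ~/.rpmmacros
    echo '%_topdir /path/to/foo' >> ~/.rpmmacros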
48,823,067
|
What are the RedHat Minishift hardware requirements?
|
As much as I've looked, I can't find the hardware requirements for running minishift. Nothing is mentioned in the Container Development Kit documentation, and the OpenShift documentation only mentions hardware requirements for production deployments. I've been following RedHat's advice on running their Container Development Kit with nested KVM. [URL] I may be pushing the limits. On a MacBook Air with 4x1.7GHz & 8GB RAM I’m running Fedora 27. I gave 6GB RAM & 2 cores to the RHEL server, and when starting Minishift I saw that it was giving 2 cores and 4GB RAM to the VM. It took about 30 minutes to download and extract the 4 docker images. Things got progressively worse from there. I’m trialing OpenShift Online. Would I run into a world of pain using Minishift directly on Fedora?
|
What are the RedHat Minishift hardware requirements? As much as I've looked, I can't find the hardware requirements for running minishift. Nothing is mentioned in the Container Development Kit documentation, and the OpenShift documentation only mentions hardware requirements for production deployments. I've been following RedHat's advice on running their Container Development Kit with nested KVM. [URL] I may be pushing the limits. On a MacBook Air with 4x1.7GHz & 8GB RAM I’m running Fedora 27. I gave 6GB RAM & 2 cores to the RHEL server, and when starting Minishift I saw that it was giving 2 cores and 4GB RAM to the VM. It took about 30 minutes to download and extract the 4 docker images. Things got progressively worse from there. I’m trialing OpenShift Online. Would I run into a world of pain using Minishift directly on Fedora?
|
redhat, minishift, redhat-containers
| 2
| 4,695
| 1
|
https://stackoverflow.com/questions/48823067/what-are-the-redhat-minishift-hardware-requirements
|
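Editor's note: Minishift's own VM sizing is tunable at start time, which is the closest thing to a hardware-requirement knob; the defaults were around 2 vCPUs / 4GB / 20GB disk. A sketch (flag names per the minishift CLI; the unit syntax accepted may vary by version).

    minishift start --cpus 2 --memory 4GB --disk-size 20GB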
47,579,041
|
Installing docker on redhat linux - issue with 'container-selinux' and 'selinux-policy'
|
I have Linux on EC2 and trying to install Docker. How to resolve issue with 'container-selinux' and 'selinux-policy'? lsb_release -d Description: Red Hat Enterprise Linux Server release 6.9 (Santiago) sudo rpm -i container-selinux-2.9-4.el7.noarch.rpm warning: container-selinux-2.9-4.el7.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY error: Failed dependencies: selinux-policy >= 3.13.1-39 is needed by container-selinux-2:2.9-4.el7.noarch selinux-policy-base >= 3.13.1-39 is needed by container-selinux-2:2.9-4.el7.noarch selinux-policy-targeted >= 3.13.1-39 is needed by container-selinux-2:2.9-4.el7.noarch
|
Installing docker on redhat linux - issue with 'container-selinux' and 'selinux-policy' I have Linux on EC2 and trying to install Docker. How to resolve issue with 'container-selinux' and 'selinux-policy'? lsb_release -d Description: Red Hat Enterprise Linux Server release 6.9 (Santiago) sudo rpm -i container-selinux-2.9-4.el7.noarch.rpm warning: container-selinux-2.9-4.el7.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY error: Failed dependencies: selinux-policy >= 3.13.1-39 is needed by container-selinux-2:2.9-4.el7.noarch selinux-policy-base >= 3.13.1-39 is needed by container-selinux-2:2.9-4.el7.noarch selinux-policy-targeted >= 3.13.1-39 is needed by container-selinux-2:2.9-4.el7.noarch
|
linux, docker, containers, redhat
| 2
| 6,900
| 2
|
https://stackoverflow.com/questions/47579041/installing-docker-on-redhat-linux-issue-with-container-selinux-and-selinux
|
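Editor's note: the failing package is an el7 build, so its dependency chain (selinux-policy >= 3.13.1-39) can only be satisfied on RHEL/CentOS 7; the machine above is RHEL 6.9, where docker-ce is not supported. A sketch of the RHEL/CentOS 7 path, for contrast, using the standard docker-ce repo setup.

    cat /etc/redhat-release            # must be a 7.x release for these RPMs
    yum install -y yum-utils
    yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
    yum install -y docker-ce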
47,081,126
|
Deploy Python virtualenv on no internet machine with different system
|
I need to deploy a Python application to a server with no internet access. I have created a virtual environment on my host machine, which runs Ubuntu. It contains a Python script with a variety of non-standard libraries. I have used the --relocatable option to make the links relative. I have copied the environment over to my client machine, which runs RedHat and has no access to the internet. After activating it using source my_project/bin/activate, the environment does not seem to be working - the python used is the standard system one and the libraries don't work. How can the virtual environment be deployed on a different server? Edit: this is normally done through the creation of a requirements.txt file and then using pip to install the libraries on the target machine; however, in this case it's not possible as the machine is offline.
|
Deploy Python virtualenv on no internet machine with different system I need to deploy a Python application to a server with no internet access. I have created a virtual environment on my host machine, which runs Ubuntu. It contains a Python script with a variety of non-standard libraries. I have used the --relocatable option to make the links relative. I have copied the environment over to my client machine, which runs RedHat and has no access to the internet. After activating it using source my_project/bin/activate, the environment does not seem to be working - the python used is the standard system one and the libraries don't work. How can the virtual environment be deployed on a different server? Edit: this is normally done through the creation of a requirements.txt file and then using pip to install the libraries on the target machine; however, in this case it's not possible as the machine is offline.
|
python, virtualenv, redhat
| 2
| 2,690
| 2
|
https://stackoverflow.com/questions/47081126/deploy-python-virtualenv-on-no-internet-machine-with-different-system
|
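Editor's note: virtualenvs are generally not portable across distros (and --relocatable was never reliable); the workable offline route is to carry the packages, not the environment. A sketch using pip's offline flags, assuming virtualenv is already present on the target; note compiled wheels are platform-specific, so ideally run the download step on a matching RedHat machine or rely on manylinux wheels.

    # on the connected machine
    pip download -r requirements.txt -d wheels/
    # transfer wheels/ and requirements.txt, then on the offline RedHat box
    virtualenv venv && source venv/bin/activate
    pip install --no-index --find-links wheels/ -r requirements.txt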
44,831,679
|
Configuration error LoadError: cannot load such file -- chef_handler_foreman (require statement in /etc/chef/client.rb)
|
I am trying to register existing chef-nodes to Foreman. I followed: [URL] This tells me to install chef_handler_foreman gem and put the following in /etc/chef/client.rb: require 'chef_handler_foreman' foreman_server_options ' [URL] ' foreman_facts_upload true foreman_reports_upload true foreman_enc true I did both. When I run chef-client , I get: [root@ip-10-139-67-124 chef]# chef-client [2017-06-29T13:25:09-04:00] FATAL: Configuration error LoadError: cannot load such file -- chef_handler_foreman [2017-06-29T13:25:09-04:00] FATAL: /etc/chef/client.rb:4:in `from_string' [2017-06-29T13:25:09-04:00] FATAL: Aborting due to error in '/etc/chef/client.rb' [root@ip-10-139-67-124 chef]# Here is the evidence that I have the gem installed: [root@ip-10-139-67-124 chef]# gem list | grep chef chef_handler_foreman (0.2.0) I am running Redhat 7.3. I have looked into the following question and few others. Answers to those suggest a case-sensitivity problem, which is not the case here: LoadError: cannot load such file -- english What am I doing wrong? Any help is appreciated.
|
Configuration error LoadError: cannot load such file -- chef_handler_foreman (require statement in /etc/chef/client.rb) I am trying to register existing chef-nodes to Foreman. I followed: [URL] This tells me to install chef_handler_foreman gem and put the following in /etc/chef/client.rb: require 'chef_handler_foreman' foreman_server_options ' [URL] ' foreman_facts_upload true foreman_reports_upload true foreman_enc true I did both. When I run chef-client , I get: [root@ip-10-139-67-124 chef]# chef-client [2017-06-29T13:25:09-04:00] FATAL: Configuration error LoadError: cannot load such file -- chef_handler_foreman [2017-06-29T13:25:09-04:00] FATAL: /etc/chef/client.rb:4:in `from_string' [2017-06-29T13:25:09-04:00] FATAL: Aborting due to error in '/etc/chef/client.rb' [root@ip-10-139-67-124 chef]# Here is the evidence that I have the gem installed: [root@ip-10-139-67-124 chef]# gem list | grep chef chef_handler_foreman (0.2.0) I am running Redhat 7.3. I have looked into the following question and few others. Answers to those suggest a case-sensitivity problem, which is not the case here: LoadError: cannot load such file -- english What am I doing wrong? Any help is appreciated.
|
ruby, chef-infra, redhat, chef-recipe, theforeman
| 2
| 1,978
| 1
|
https://stackoverflow.com/questions/44831679/configuration-error-loaderror-cannot-load-such-file-chef-handler-foreman-re
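A frequent cause of this exact LoadError is that the gem went into the system Ruby (the gem on $PATH) while chef-client runs under Chef's embedded Ruby, which keeps its own gem path. A hedged sketch of the check and fix, assuming an omnibus Chef install under /opt/chef:

    # The gem list that matters is the embedded Ruby's, not the system one:
    /opt/chef/embedded/bin/gem list | grep chef_handler_foreman   # likely empty

    # Install the handler where chef-client can actually require it:
    sudo /opt/chef/embedded/bin/gem install chef_handler_foreman
    sudo chef-client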
|
41,550,516
|
How to install mariaDB on oracle linux 7
|
I'm trying to install mariaDB on oracle linux 7 but I have this error: I ran this command yum install mariadb mariadb-server mysql to install mariadb and this was the output: --> Finished Dependency Resolution Error: Package: 1:mariadb-5.5.52-1.el7.x86_64 (ol7_latest) Requires: mariadb-libs(x86-64) = 1:5.5.52-1.el7 Available: 1:mariadb-libs-5.5.35-3.el7.x86_64 (ol7_latest) mariadb-libs(x86-64) = 1:5.5.35-3.el7 Available: 1:mariadb-libs-5.5.37-1.el7_0.x86_64 (ol7_latest) mariadb-libs(x86-64) = 1:5.5.37-1.el7_0 Available: 1:mariadb-libs-5.5.40-1.el7_0.x86_64 (ol7_latest) mariadb-libs(x86-64) = 1:5.5.40-1.el7_0 Available: 1:mariadb-libs-5.5.40-2.el7_0.x86_64 (ol7_latest) mariadb-libs(x86-64) = 1:5.5.40-2.el7_0 Available: 1:mariadb-libs-5.5.41-2.el7_0.x86_64 (ol7_latest) mariadb-libs(x86-64) = 1:5.5.41-2.el7_0 Available: 1:mariadb-libs-5.5.44-1.el7_1.x86_64 (ol7_latest) mariadb-libs(x86-64) = 1:5.5.44-1.el7_1 Available: 1:mariadb-libs-5.5.44-2.0.1.el7.x86_64 (ol7_latest) mariadb-libs(x86-64) = 1:5.5.44-2.0.1.el7 Available: 1:mariadb-libs-5.5.47-1.el7_2.x86_64 (ol7_latest) mariadb-libs(x86-64) = 1:5.5.47-1.el7_2 Available: 1:mariadb-libs-5.5.50-1.el7_2.x86_64 (ol7_latest) mariadb-libs(x86-64) = 1:5.5.50-1.el7_2 Available: 1:mariadb-libs-5.5.52-1.el7.i686 (ol7_latest) ~mariadb-libs(x86-32) = 1:5.5.52-1.el7 Error: Package: 1:mariadb-server-5.5.52-1.el7.x86_64 (ol7_latest) Requires: mariadb-libs(x86-64) = 1:5.5.52-1.el7 Available: 1:mariadb-libs-5.5.35-3.el7.x86_64 (ol7_latest) mariadb-libs(x86-64) = 1:5.5.35-3.el7 Available: 1:mariadb-libs-5.5.37-1.el7_0.x86_64 (ol7_latest) mariadb-libs(x86-64) = 1:5.5.37-1.el7_0 Available: 1:mariadb-libs-5.5.40-1.el7_0.x86_64 (ol7_latest) mariadb-libs(x86-64) = 1:5.5.40-1.el7_0 Available: 1:mariadb-libs-5.5.40-2.el7_0.x86_64 (ol7_latest) mariadb-libs(x86-64) = 1:5.5.40-2.el7_0 Available: 1:mariadb-libs-5.5.41-2.el7_0.x86_64 (ol7_latest) mariadb-libs(x86-64) = 1:5.5.41-2.el7_0 Available: 1:mariadb-libs-5.5.44-1.el7_1.x86_64 (ol7_latest) mariadb-libs(x86-64) = 1:5.5.44-1.el7_1 Available: 1:mariadb-libs-5.5.44-2.0.1.el7.x86_64 (ol7_latest) mariadb-libs(x86-64) = 1:5.5.44-2.0.1.el7 Available: 1:mariadb-libs-5.5.47-1.el7_2.x86_64 (ol7_latest) mariadb-libs(x86-64) = 1:5.5.47-1.el7_2 Available: 1:mariadb-libs-5.5.50-1.el7_2.x86_64 (ol7_latest) mariadb-libs(x86-64) = 1:5.5.50-1.el7_2 Available: 1:mariadb-libs-5.5.52-1.el7.i686 (ol7_latest) ~mariadb-libs(x86-32) = 1:5.5.52-1.el7 You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest Do I need to run a different command? or is not possible to install mariadb on oracle linux 7. Thanks in advance
|
How to install mariaDB on oracle linux 7 I'm trying to install mariaDB on oracle linux 7 but I have this error: I ran this command yum install mariadb mariadb-server mysql to install mariadb and this was the output: --> Finished Dependency Resolution Error: Package: 1:mariadb-5.5.52-1.el7.x86_64 (ol7_latest) Requires: mariadb-libs(x86-64) = 1:5.5.52-1.el7 Available: 1:mariadb-libs-5.5.35-3.el7.x86_64 (ol7_latest) mariadb-libs(x86-64) = 1:5.5.35-3.el7 Available: 1:mariadb-libs-5.5.37-1.el7_0.x86_64 (ol7_latest) mariadb-libs(x86-64) = 1:5.5.37-1.el7_0 Available: 1:mariadb-libs-5.5.40-1.el7_0.x86_64 (ol7_latest) mariadb-libs(x86-64) = 1:5.5.40-1.el7_0 Available: 1:mariadb-libs-5.5.40-2.el7_0.x86_64 (ol7_latest) mariadb-libs(x86-64) = 1:5.5.40-2.el7_0 Available: 1:mariadb-libs-5.5.41-2.el7_0.x86_64 (ol7_latest) mariadb-libs(x86-64) = 1:5.5.41-2.el7_0 Available: 1:mariadb-libs-5.5.44-1.el7_1.x86_64 (ol7_latest) mariadb-libs(x86-64) = 1:5.5.44-1.el7_1 Available: 1:mariadb-libs-5.5.44-2.0.1.el7.x86_64 (ol7_latest) mariadb-libs(x86-64) = 1:5.5.44-2.0.1.el7 Available: 1:mariadb-libs-5.5.47-1.el7_2.x86_64 (ol7_latest) mariadb-libs(x86-64) = 1:5.5.47-1.el7_2 Available: 1:mariadb-libs-5.5.50-1.el7_2.x86_64 (ol7_latest) mariadb-libs(x86-64) = 1:5.5.50-1.el7_2 Available: 1:mariadb-libs-5.5.52-1.el7.i686 (ol7_latest) ~mariadb-libs(x86-32) = 1:5.5.52-1.el7 Error: Package: 1:mariadb-server-5.5.52-1.el7.x86_64 (ol7_latest) Requires: mariadb-libs(x86-64) = 1:5.5.52-1.el7 Available: 1:mariadb-libs-5.5.35-3.el7.x86_64 (ol7_latest) mariadb-libs(x86-64) = 1:5.5.35-3.el7 Available: 1:mariadb-libs-5.5.37-1.el7_0.x86_64 (ol7_latest) mariadb-libs(x86-64) = 1:5.5.37-1.el7_0 Available: 1:mariadb-libs-5.5.40-1.el7_0.x86_64 (ol7_latest) mariadb-libs(x86-64) = 1:5.5.40-1.el7_0 Available: 1:mariadb-libs-5.5.40-2.el7_0.x86_64 (ol7_latest) mariadb-libs(x86-64) = 1:5.5.40-2.el7_0 Available: 1:mariadb-libs-5.5.41-2.el7_0.x86_64 (ol7_latest) mariadb-libs(x86-64) = 1:5.5.41-2.el7_0 Available: 1:mariadb-libs-5.5.44-1.el7_1.x86_64 (ol7_latest) mariadb-libs(x86-64) = 1:5.5.44-1.el7_1 Available: 1:mariadb-libs-5.5.44-2.0.1.el7.x86_64 (ol7_latest) mariadb-libs(x86-64) = 1:5.5.44-2.0.1.el7 Available: 1:mariadb-libs-5.5.47-1.el7_2.x86_64 (ol7_latest) mariadb-libs(x86-64) = 1:5.5.47-1.el7_2 Available: 1:mariadb-libs-5.5.50-1.el7_2.x86_64 (ol7_latest) mariadb-libs(x86-64) = 1:5.5.50-1.el7_2 Available: 1:mariadb-libs-5.5.52-1.el7.i686 (ol7_latest) ~mariadb-libs(x86-32) = 1:5.5.52-1.el7 You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest Do I need to run a different command? or is not possible to install mariadb on oracle linux 7. Thanks in advance
|
centos, mariadb, redhat, rhel
| 2
| 5,584
| 2
|
https://stackoverflow.com/questions/41550516/how-to-install-mariadb-on-oracle-linux-7
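The transaction fails because yum cannot find an x86_64 mariadb-libs 5.5.52 to pair with the 5.5.52 server/client packages - the listing only offers older x86_64 builds plus a 5.5.52 i686. That usually means either an older mariadb-libs is already installed and pinned, or the repository metadata is stale. A hedged sketch of the checks:

    # Is an older mariadb-libs already present (Oracle Linux installs it early on)?
    yum list installed mariadb-libs

    # Refresh metadata, bring the libs up to the candidate version, then install:
    sudo yum clean all && sudo yum makecache
    sudo yum update mariadb-libs
    sudo yum install mariadb mariadb-server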
|
40,605,538
|
RedHat 7.2 how to get stub-32.h
|
I have a 64bit system running RedHat 7.2, I am trying to build a project that requires stub-32.h This would be located in /usr/include/gnu However my installation has only these files in the above folder: -rw-r--r--. 1 root root 1270 Aug 11 06:56 libc-version.h -rw-r--r--. 1 root root 4844 Aug 11 06:56 lib-names.h -rw-r--r--. 1 root root 604 Aug 11 06:57 stubs-64.h -rw-r--r--. 1 root root 384 Aug 11 06:56 stubs.h I've tried various methods to get stubs-32.h installed but keep coming up against the same problems, if I try: sudo yum install glibc-devel.i686 The result is: Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager Resolving Dependencies --> Running transaction check ---> Package glibc-devel.i686 0:2.17-106.el7_2.8 will be installed --> Processing Dependency: glibc = 2.17-106.el7_2.8 for package: glibc-devel-2.17-106.el7_2.8.i686 --> Processing Dependency: glibc-headers = 2.17-106.el7_2.8 for package: glibc-devel-2.17-106.el7_2.8.i686 --> Finished Dependency Resolution Error: Package: glibc-devel-2.17-106.el7_2.8.i686 (rhel-7-workstation-rpms) Requires: glibc = 2.17-106.el7_2.8 Installed: glibc-2.17-157.el7.i686 (@rhel-7-workstation-rpms) glibc = 2.17-157.el7 Available: glibc-2.17-55.el7.i686 (rhel-7-workstation-rpms) glibc = 2.17-55.el7 Available: glibc-2.17-55.el7_0.1.i686 (rhel-7-workstation-rpms) glibc = 2.17-55.el7_0.1 Available: glibc-2.17-55.el7_0.3.i686 (rhel-7-workstation-rpms) glibc = 2.17-55.el7_0.3 Available: glibc-2.17-55.el7_0.5.i686 (rhel-7-workstation-rpms) glibc = 2.17-55.el7_0.5 Available: glibc-2.17-78.el7.i686 (rhel-7-workstation-rpms) glibc = 2.17-78.el7 Available: glibc-2.17-105.el7.i686 (rhel-7-workstation-rpms) glibc = 2.17-105.el7 Available: glibc-2.17-106.el7_2.1.i686 (rhel-7-workstation-rpms) glibc = 2.17-106.el7_2.1 Available: glibc-2.17-106.el7_2.4.i686 (rhel-7-workstation-rpms) glibc = 2.17-106.el7_2.4 Available: glibc-2.17-106.el7_2.6.i686 (rhel-7-workstation-rpms) glibc = 2.17-106.el7_2.6 Available: glibc-2.17-106.el7_2.8.i686 (rhel-7-workstation-rpms) glibc = 2.17-106.el7_2.8 Error: Package: glibc-devel-2.17-106.el7_2.8.i686 (rhel-7-workstation-rpms) Requires: glibc-headers = 2.17-106.el7_2.8 Installed: glibc-headers-2.17-157.el7.x86_64 (@rhel-7-workstation-rpms) glibc-headers = 2.17-157.el7 Available: glibc-headers-2.17-55.el7.x86_64 (rhel-7-workstation-rpms) glibc-headers = 2.17-55.el7 Available: glibc-headers-2.17-55.el7_0.1.x86_64 (rhel-7-workstation-rpms) glibc-headers = 2.17-55.el7_0.1 Available: glibc-headers-2.17-55.el7_0.3.x86_64 (rhel-7-workstation-rpms) glibc-headers = 2.17-55.el7_0.3 Available: glibc-headers-2.17-55.el7_0.5.x86_64 (rhel-7-workstation-rpms) glibc-headers = 2.17-55.el7_0.5 Available: glibc-headers-2.17-78.el7.x86_64 (rhel-7-workstation-rpms) glibc-headers = 2.17-78.el7 Available: glibc-headers-2.17-105.el7.x86_64 (rhel-7-workstation-rpms) glibc-headers = 2.17-105.el7 Available: glibc-headers-2.17-106.el7_2.1.x86_64 (rhel-7-workstation-rpms) glibc-headers = 2.17-106.el7_2.1 Available: glibc-headers-2.17-106.el7_2.4.x86_64 (rhel-7-workstation-rpms) glibc-headers = 2.17-106.el7_2.4 Available: glibc-headers-2.17-106.el7_2.6.x86_64 (rhel-7-workstation-rpms) glibc-headers = 2.17-106.el7_2.6 Available: glibc-headers-2.17-106.el7_2.8.x86_64 (rhel-7-workstation-rpms) glibc-headers = 2.17-106.el7_2.8 ********************************************************************** yum can be configured to try to resolve such errors by temporarily enabling disabled repos and searching for missing dependencies. 
To enable this functionality please set 'notify_only=0' in /etc/yum/pluginconf.d/search-disabled-repos.conf ********************************************************************** Error: Package: glibc-devel-2.17-106.el7_2.8.i686 (rhel-7-workstation-rpms) Requires: glibc = 2.17-106.el7_2.8 Installed: glibc-2.17-157.el7.i686 (@rhel-7-workstation-rpms) glibc = 2.17-157.el7 Available: glibc-2.17-55.el7.i686 (rhel-7-workstation-rpms) glibc = 2.17-55.el7 Available: glibc-2.17-55.el7_0.1.i686 (rhel-7-workstation-rpms) glibc = 2.17-55.el7_0.1 Available: glibc-2.17-55.el7_0.3.i686 (rhel-7-workstation-rpms) glibc = 2.17-55.el7_0.3 Available: glibc-2.17-55.el7_0.5.i686 (rhel-7-workstation-rpms) glibc = 2.17-55.el7_0.5 Available: glibc-2.17-78.el7.i686 (rhel-7-workstation-rpms) glibc = 2.17-78.el7 Available: glibc-2.17-105.el7.i686 (rhel-7-workstation-rpms) glibc = 2.17-105.el7 Available: glibc-2.17-106.el7_2.1.i686 (rhel-7-workstation-rpms) glibc = 2.17-106.el7_2.1 Available: glibc-2.17-106.el7_2.4.i686 (rhel-7-workstation-rpms) glibc = 2.17-106.el7_2.4 Available: glibc-2.17-106.el7_2.6.i686 (rhel-7-workstation-rpms) glibc = 2.17-106.el7_2.6 Available: glibc-2.17-106.el7_2.8.i686 (rhel-7-workstation-rpms) glibc = 2.17-106.el7_2.8 Error: Package: glibc-devel-2.17-106.el7_2.8.i686 (rhel-7-workstation-rpms) Requires: glibc-headers = 2.17-106.el7_2.8 Installed: glibc-headers-2.17-157.el7.x86_64 (@rhel-7-workstation-rpms) glibc-headers = 2.17-157.el7 Available: glibc-headers-2.17-55.el7.x86_64 (rhel-7-workstation-rpms) glibc-headers = 2.17-55.el7 Available: glibc-headers-2.17-55.el7_0.1.x86_64 (rhel-7-workstation-rpms) glibc-headers = 2.17-55.el7_0.1 Available: glibc-headers-2.17-55.el7_0.3.x86_64 (rhel-7-workstation-rpms) glibc-headers = 2.17-55.el7_0.3 Available: glibc-headers-2.17-55.el7_0.5.x86_64 (rhel-7-workstation-rpms) glibc-headers = 2.17-55.el7_0.5 Available: glibc-headers-2.17-78.el7.x86_64 (rhel-7-workstation-rpms) glibc-headers = 2.17-78.el7 Available: glibc-headers-2.17-105.el7.x86_64 (rhel-7-workstation-rpms) glibc-headers = 2.17-105.el7 Available: glibc-headers-2.17-106.el7_2.1.x86_64 (rhel-7-workstation-rpms) glibc-headers = 2.17-106.el7_2.1 Available: glibc-headers-2.17-106.el7_2.4.x86_64 (rhel-7-workstation-rpms) glibc-headers = 2.17-106.el7_2.4 Available: glibc-headers-2.17-106.el7_2.6.x86_64 (rhel-7-workstation-rpms) glibc-headers = 2.17-106.el7_2.6 Available: glibc-headers-2.17-106.el7_2.8.x86_64 (rhel-7-workstation-rpms) glibc-headers = 2.17-106.el7_2.8 You could try using --skip-broken to work around the problem ** Found 4 pre-existing rpmdb problem(s), 'yum check' output follows: ipa-client-4.4.0-12.el7.x86_64 has installed conflicts freeipa-client: ipa-client-4.4.0-12.el7.x86_64 ipa-client-common-4.4.0-12.el7.noarch has installed conflicts freeipa-client-common: ipa-client-common-4.4.0-12.el7.noarch ipa-common-4.4.0-12.el7.noarch has installed conflicts freeipa-common: ipa-common-4.4.0-12.el7.noarch ipa-python-compat-4.4.0-12.el7.noarch has installed conflicts freeipa- python-compat: ipa-python-compat-4.4.0-12.el7.noarch How can I resolve this issue?
|
RedHat 7.2 how to get stub-32.h I have a 64bit system running RedHat 7.2, I am trying to build a project that requires stub-32.h This would be located in /usr/include/gnu However my installation has only these files in the above folder: -rw-r--r--. 1 root root 1270 Aug 11 06:56 libc-version.h -rw-r--r--. 1 root root 4844 Aug 11 06:56 lib-names.h -rw-r--r--. 1 root root 604 Aug 11 06:57 stubs-64.h -rw-r--r--. 1 root root 384 Aug 11 06:56 stubs.h I've tried various methods to get stubs-32.h installed but keep coming up against the same problems, if I try: sudo yum install glibc-devel.i686 The result is: Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager Resolving Dependencies --> Running transaction check ---> Package glibc-devel.i686 0:2.17-106.el7_2.8 will be installed --> Processing Dependency: glibc = 2.17-106.el7_2.8 for package: glibc-devel-2.17-106.el7_2.8.i686 --> Processing Dependency: glibc-headers = 2.17-106.el7_2.8 for package: glibc-devel-2.17-106.el7_2.8.i686 --> Finished Dependency Resolution Error: Package: glibc-devel-2.17-106.el7_2.8.i686 (rhel-7-workstation-rpms) Requires: glibc = 2.17-106.el7_2.8 Installed: glibc-2.17-157.el7.i686 (@rhel-7-workstation-rpms) glibc = 2.17-157.el7 Available: glibc-2.17-55.el7.i686 (rhel-7-workstation-rpms) glibc = 2.17-55.el7 Available: glibc-2.17-55.el7_0.1.i686 (rhel-7-workstation-rpms) glibc = 2.17-55.el7_0.1 Available: glibc-2.17-55.el7_0.3.i686 (rhel-7-workstation-rpms) glibc = 2.17-55.el7_0.3 Available: glibc-2.17-55.el7_0.5.i686 (rhel-7-workstation-rpms) glibc = 2.17-55.el7_0.5 Available: glibc-2.17-78.el7.i686 (rhel-7-workstation-rpms) glibc = 2.17-78.el7 Available: glibc-2.17-105.el7.i686 (rhel-7-workstation-rpms) glibc = 2.17-105.el7 Available: glibc-2.17-106.el7_2.1.i686 (rhel-7-workstation-rpms) glibc = 2.17-106.el7_2.1 Available: glibc-2.17-106.el7_2.4.i686 (rhel-7-workstation-rpms) glibc = 2.17-106.el7_2.4 Available: glibc-2.17-106.el7_2.6.i686 (rhel-7-workstation-rpms) glibc = 2.17-106.el7_2.6 Available: glibc-2.17-106.el7_2.8.i686 (rhel-7-workstation-rpms) glibc = 2.17-106.el7_2.8 Error: Package: glibc-devel-2.17-106.el7_2.8.i686 (rhel-7-workstation-rpms) Requires: glibc-headers = 2.17-106.el7_2.8 Installed: glibc-headers-2.17-157.el7.x86_64 (@rhel-7-workstation-rpms) glibc-headers = 2.17-157.el7 Available: glibc-headers-2.17-55.el7.x86_64 (rhel-7-workstation-rpms) glibc-headers = 2.17-55.el7 Available: glibc-headers-2.17-55.el7_0.1.x86_64 (rhel-7-workstation-rpms) glibc-headers = 2.17-55.el7_0.1 Available: glibc-headers-2.17-55.el7_0.3.x86_64 (rhel-7-workstation-rpms) glibc-headers = 2.17-55.el7_0.3 Available: glibc-headers-2.17-55.el7_0.5.x86_64 (rhel-7-workstation-rpms) glibc-headers = 2.17-55.el7_0.5 Available: glibc-headers-2.17-78.el7.x86_64 (rhel-7-workstation-rpms) glibc-headers = 2.17-78.el7 Available: glibc-headers-2.17-105.el7.x86_64 (rhel-7-workstation-rpms) glibc-headers = 2.17-105.el7 Available: glibc-headers-2.17-106.el7_2.1.x86_64 (rhel-7-workstation-rpms) glibc-headers = 2.17-106.el7_2.1 Available: glibc-headers-2.17-106.el7_2.4.x86_64 (rhel-7-workstation-rpms) glibc-headers = 2.17-106.el7_2.4 Available: glibc-headers-2.17-106.el7_2.6.x86_64 (rhel-7-workstation-rpms) glibc-headers = 2.17-106.el7_2.6 Available: glibc-headers-2.17-106.el7_2.8.x86_64 (rhel-7-workstation-rpms) glibc-headers = 2.17-106.el7_2.8 ********************************************************************** yum can be configured to try to resolve such errors by temporarily enabling disabled repos and searching 
for missing dependencies. To enable this functionality please set 'notify_only=0' in /etc/yum/pluginconf.d/search-disabled-repos.conf ********************************************************************** Error: Package: glibc-devel-2.17-106.el7_2.8.i686 (rhel-7-workstation-rpms) Requires: glibc = 2.17-106.el7_2.8 Installed: glibc-2.17-157.el7.i686 (@rhel-7-workstation-rpms) glibc = 2.17-157.el7 Available: glibc-2.17-55.el7.i686 (rhel-7-workstation-rpms) glibc = 2.17-55.el7 Available: glibc-2.17-55.el7_0.1.i686 (rhel-7-workstation-rpms) glibc = 2.17-55.el7_0.1 Available: glibc-2.17-55.el7_0.3.i686 (rhel-7-workstation-rpms) glibc = 2.17-55.el7_0.3 Available: glibc-2.17-55.el7_0.5.i686 (rhel-7-workstation-rpms) glibc = 2.17-55.el7_0.5 Available: glibc-2.17-78.el7.i686 (rhel-7-workstation-rpms) glibc = 2.17-78.el7 Available: glibc-2.17-105.el7.i686 (rhel-7-workstation-rpms) glibc = 2.17-105.el7 Available: glibc-2.17-106.el7_2.1.i686 (rhel-7-workstation-rpms) glibc = 2.17-106.el7_2.1 Available: glibc-2.17-106.el7_2.4.i686 (rhel-7-workstation-rpms) glibc = 2.17-106.el7_2.4 Available: glibc-2.17-106.el7_2.6.i686 (rhel-7-workstation-rpms) glibc = 2.17-106.el7_2.6 Available: glibc-2.17-106.el7_2.8.i686 (rhel-7-workstation-rpms) glibc = 2.17-106.el7_2.8 Error: Package: glibc-devel-2.17-106.el7_2.8.i686 (rhel-7-workstation-rpms) Requires: glibc-headers = 2.17-106.el7_2.8 Installed: glibc-headers-2.17-157.el7.x86_64 (@rhel-7-workstation-rpms) glibc-headers = 2.17-157.el7 Available: glibc-headers-2.17-55.el7.x86_64 (rhel-7-workstation-rpms) glibc-headers = 2.17-55.el7 Available: glibc-headers-2.17-55.el7_0.1.x86_64 (rhel-7-workstation-rpms) glibc-headers = 2.17-55.el7_0.1 Available: glibc-headers-2.17-55.el7_0.3.x86_64 (rhel-7-workstation-rpms) glibc-headers = 2.17-55.el7_0.3 Available: glibc-headers-2.17-55.el7_0.5.x86_64 (rhel-7-workstation-rpms) glibc-headers = 2.17-55.el7_0.5 Available: glibc-headers-2.17-78.el7.x86_64 (rhel-7-workstation-rpms) glibc-headers = 2.17-78.el7 Available: glibc-headers-2.17-105.el7.x86_64 (rhel-7-workstation-rpms) glibc-headers = 2.17-105.el7 Available: glibc-headers-2.17-106.el7_2.1.x86_64 (rhel-7-workstation-rpms) glibc-headers = 2.17-106.el7_2.1 Available: glibc-headers-2.17-106.el7_2.4.x86_64 (rhel-7-workstation-rpms) glibc-headers = 2.17-106.el7_2.4 Available: glibc-headers-2.17-106.el7_2.6.x86_64 (rhel-7-workstation-rpms) glibc-headers = 2.17-106.el7_2.6 Available: glibc-headers-2.17-106.el7_2.8.x86_64 (rhel-7-workstation-rpms) glibc-headers = 2.17-106.el7_2.8 You could try using --skip-broken to work around the problem ** Found 4 pre-existing rpmdb problem(s), 'yum check' output follows: ipa-client-4.4.0-12.el7.x86_64 has installed conflicts freeipa-client: ipa-client-4.4.0-12.el7.x86_64 ipa-client-common-4.4.0-12.el7.noarch has installed conflicts freeipa-client-common: ipa-client-common-4.4.0-12.el7.noarch ipa-common-4.4.0-12.el7.noarch has installed conflicts freeipa-common: ipa-common-4.4.0-12.el7.noarch ipa-python-compat-4.4.0-12.el7.noarch has installed conflicts freeipa- python-compat: ipa-python-compat-4.4.0-12.el7.noarch How can I resolve this issue?
|
c++, redhat
| 2
| 2,543
| 1
|
https://stackoverflow.com/questions/40605538/redhat-7-2-how-to-get-stub-32-h
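For the record, the header is named stubs-32.h and comes from glibc-devel.i686; the transcript shows the actual blocker: the installed base glibc is 2.17-157 (a RHEL 7.3 build) while the enabled channel only offers i686 devel packages up to 2.17-106 (RHEL 7.2), and the 32-bit devel package must match the installed glibc exactly. A hedged sketch of the usual ways out (version strings copied from the pasted transcript):

    # If a matching build exists in any attached repo, this is all it takes:
    sudo yum install glibc-devel-2.17-157.el7.i686

    # Otherwise either attach the RHEL 7.3 channel that carries 2.17-157, or
    # (lab machines only - downgrading glibc is risky) align everything on one version:
    sudo yum downgrade glibc-2.17-106.el7_2.8 glibc-common-2.17-106.el7_2.8 glibc-headers-2.17-106.el7_2.8
    sudo yum install glibc-devel-2.17-106.el7_2.8.i686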
|
39,080,182
|
Linux find command and copy and rename them same time
|
Will you be able to help me write a script? I just want to find log files over 2GB and copy them to an archive folder in the same directory. I wrote a find command but it is not working; I'd appreciate it if someone could help me. ex - main log folders - /vsapp/logs/ - app1,app2,app3 There are a lot of logs in the app1, app2 and app3 folders, so I want to find the logs in the logs folders which are over 2GB and copy them to the archive folder under a different name with today's date. ex - abcd.log -----copy to -----> abcd.log-08-22-2016 My command at the moment, which is not working: find $i/* -type f -size +2G -exec cp '{}' $i/$arc/{}-$date
|
Linux find command and copy and rename them same time Will you be able to help me write a script? I just want to find log files over 2GB and copy them to an archive folder in the same directory. I wrote a find command but it is not working; I'd appreciate it if someone could help me. ex - main log folders - /vsapp/logs/ - app1,app2,app3 There are a lot of logs in the app1, app2 and app3 folders, so I want to find the logs in the logs folders which are over 2GB and copy them to the archive folder under a different name with today's date. ex - abcd.log -----copy to -----> abcd.log-08-22-2016 My command at the moment, which is not working: find $i/* -type f -size +2G -exec cp '{}' $i/$arc/{}-$date
|
linux, bash, shell, redhat
| 2
| 3,507
| 1
|
https://stackoverflow.com/questions/39080182/linux-find-command-and-copy-and-rename-them-same-time
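As posted, the find command cannot work: the -exec is missing its terminating \; and the second {} makes the destination contain the file's whole path. A hedged sketch that does what the example asks; the directory layout and the archive folder name are assumed from the question:

    #!/bin/bash
    date_suffix=$(date +%m-%d-%Y)            # e.g. 08-22-2016
    logdir=/vsapp/logs

    for app in app1 app2 app3; do
        mkdir -p "$logdir/$app/archive"
        # -size +2G matches files larger than 2 GiB (GNU find on RHEL supports it)
        find "$logdir/$app" -maxdepth 1 -type f -size +2G | while IFS= read -r f; do
            cp -p "$f" "$logdir/$app/archive/$(basename "$f")-$date_suffix"
        done
    done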
|
38,974,677
|
Eclipse UI tests fail with gtk_init_check() on a Redhat Jenkins server
|
When running Tycho UI tests for Eclipse on a Redhat server (6.7) via Jenkins, the Exception below occurs. I know that a graphical subsystem must be installed and running, but there seems to be something amiss with my setup. I already installed GTK via "yum groupinstall Desktop". org.eclipse.swt.SWTError: No more handles [gtk_init_check() failed] at org.eclipse.swt.SWT.error(SWT.java:4517) at org.eclipse.swt.widgets.Display.createDisplay(Display.java:908) at org.eclipse.swt.widgets.Display.create(Display.java:892) at org.eclipse.swt.graphics.Device.<init>(Device.java:156) at org.eclipse.swt.widgets.Display.<init>(Display.java:512) at org.eclipse.swt.widgets.Display.<init>(Display.java:503) at org.eclipse.ui.internal.Workbench.createDisplay(Workbench.java:790) at org.eclipse.ui.PlatformUI.createDisplay(PlatformUI.java:162) at org.eclipse.ui.internal.ide.application.IDEApplication.createDisplay(IDEApplication.java:169) at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:111) at org.eclipse.tycho.surefire.osgibooter.UITestApplication.runApplication(UITestApplication.java:31) at org.eclipse.tycho.surefire.osgibooter.AbstractUITestApplication.run(AbstractUITestApplication.java:115) at org.eclipse.tycho.surefire.osgibooter.UITestApplication.start(UITestApplication.java:37) at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:134) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:104) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:380) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:235) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:669) at org.eclipse.equinox.launcher.Main.basicRun(Main.java:608) at org.eclipse.equinox.launcher.Main.run(Main.java:1515) at org.eclipse.equinox.launcher.Main.main(Main.java:1488)
|
Eclipse UI tests fail with gtk_init_check() on a Redhat Jenkins server When running Tycho UI tests for Eclipse on a Redhat server (6.7) via Jenkins, the Exception below occurs. I know that a graphical subsystem must be installed and running, but there seems to be something amiss with my setup. I already installed GTK via "yum groupinstall Desktop". org.eclipse.swt.SWTError: No more handles [gtk_init_check() failed] at org.eclipse.swt.SWT.error(SWT.java:4517) at org.eclipse.swt.widgets.Display.createDisplay(Display.java:908) at org.eclipse.swt.widgets.Display.create(Display.java:892) at org.eclipse.swt.graphics.Device.<init>(Device.java:156) at org.eclipse.swt.widgets.Display.<init>(Display.java:512) at org.eclipse.swt.widgets.Display.<init>(Display.java:503) at org.eclipse.ui.internal.Workbench.createDisplay(Workbench.java:790) at org.eclipse.ui.PlatformUI.createDisplay(PlatformUI.java:162) at org.eclipse.ui.internal.ide.application.IDEApplication.createDisplay(IDEApplication.java:169) at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:111) at org.eclipse.tycho.surefire.osgibooter.UITestApplication.runApplication(UITestApplication.java:31) at org.eclipse.tycho.surefire.osgibooter.AbstractUITestApplication.run(AbstractUITestApplication.java:115) at org.eclipse.tycho.surefire.osgibooter.UITestApplication.start(UITestApplication.java:37) at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:134) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:104) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:380) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:235) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:669) at org.eclipse.equinox.launcher.Main.basicRun(Main.java:608) at org.eclipse.equinox.launcher.Main.run(Main.java:1515) at org.eclipse.equinox.launcher.Main.main(Main.java:1488)
|
eclipse, jenkins, redhat
| 2
| 2,197
| 1
|
https://stackoverflow.com/questions/38974677/eclipse-ui-tests-fail-with-gtk-init-check-on-a-redhat-jenkins-server
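gtk_init_check() fails because the Jenkins job has no X display to connect to; installing the Desktop group is not enough, an X server must be running and DISPLAY must point at it. The standard fix for headless UI tests is a virtual framebuffer; a hedged sketch for RHEL 6:

    # Install the virtual framebuffer X server:
    sudo yum install xorg-x11-server-Xvfb

    # Either wrap the build with xvfb-run (shipped with Xvfb on many distros;
    # if absent, use the manual form below):
    xvfb-run --auto-servernum mvn clean verify

    # ...or run Xvfb yourself and export DISPLAY for the Tycho surefire tests:
    Xvfb :99 -screen 0 1280x1024x24 &
    DISPLAY=:99 mvn clean verify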
|
35,762,229
|
Openshift Nodejs Socket.io issue, But 200 Ok response
|
I have deployed the code below on the OpenShift cloud platform by Red Hat for a NodeJS chat application. I am not getting any errors in the console (F12) and the response code is 200 OK, but the application is not working. Server (you can find the complete source at [URL] ): var express = require('express'); var app = express(); var server = require('http').createServer(app); var io = require('socket.io').listen(server, { origins:'[URL] }); app.get('/', function (req, res) { res.sendfile('index.html'); }); io.on('connection', function (socket) { socket.on('chatmessage', function (msg) { console.log('index.js(socket.on)==' + msg); io.emit('chatmessage', msg); }); }); server.listen(process.env.OPENSHIFT_NODEJS_PORT, process.env.OPENSHIFT_NODEJS_IP); Client (you can find the complete source at [URL] ): src="[URL] src="[URL] var socket = io.connect('[URL] $('button').click(function (e) { console.log('index.html($(button).click)=' + $('#m').val()); socket.emit('chatmessage', $('#m').val()); $('#m').val(''); return false; }); socket.on('chatmessage', function (msg) { console.log('index.html(socket.on)==' + msg); $('#messages').append($('<li>').text(msg)); }); The HTML body is: <ul id="messages"></ul> <form action=""> <input id="m" autocomplete="off" /> <button>Send</button> </form>
|
Openshift Nodejs Socket.io issue, But 200 Ok response I have deployed the code below on the OpenShift cloud platform by Red Hat for a NodeJS chat application. I am not getting any errors in the console (F12) and the response code is 200 OK, but the application is not working. Server (you can find the complete source at [URL] ): var express = require('express'); var app = express(); var server = require('http').createServer(app); var io = require('socket.io').listen(server, { origins:'[URL] }); app.get('/', function (req, res) { res.sendfile('index.html'); }); io.on('connection', function (socket) { socket.on('chatmessage', function (msg) { console.log('index.js(socket.on)==' + msg); io.emit('chatmessage', msg); }); }); server.listen(process.env.OPENSHIFT_NODEJS_PORT, process.env.OPENSHIFT_NODEJS_IP); Client (you can find the complete source at [URL] ): src="[URL] src="[URL] var socket = io.connect('[URL] $('button').click(function (e) { console.log('index.html($(button).click)=' + $('#m').val()); socket.emit('chatmessage', $('#m').val()); $('#m').val(''); return false; }); socket.on('chatmessage', function (msg) { console.log('index.html(socket.on)==' + msg); $('#messages').append($('<li>').text(msg)); }); The HTML body is: <ul id="messages"></ul> <form action=""> <input id="m" autocomplete="off" /> <button>Send</button> </form>
|
node.js, socket.io, openshift, redhat
| 2
| 1,328
| 1
|
https://stackoverflow.com/questions/35762229/openshift-nodejs-socket-io-issue-but-200-ok-response
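On the old OpenShift Online (v2) platform this pattern - pages arrive with 200 OK but socket.io traffic goes nowhere - was usually the front-end proxy: plain HTTP was routed on port 80, while WebSocket upgrades only passed through ports 8000 (ws) and 8443 (wss), so the client had to connect to the :8000 URL explicitly. A hedged sketch of a shell-side check; the app hostname is a placeholder:

    # Plain HTTP on the default port should work:
    curl -i http://myapp-mydomain.rhcloud.com/

    # The websocket handshake must target port 8000 on OpenShift v2; a healthy
    # gear answers 101 Switching Protocols here:
    curl -i -N -H 'Connection: Upgrade' -H 'Upgrade: websocket' \
         -H 'Sec-WebSocket-Version: 13' -H 'Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==' \
         'http://myapp-mydomain.rhcloud.com:8000/socket.io/?EIO=3&transport=websocket'

Correspondingly, the usual client-side fix was to point io.connect at the :8000 URL.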
|
35,087,821
|
Accessing MongoDB from Windows & Mac Client Machines
|
I have MongoDB 3.2 installed on my Linux Red Hat server. I am starting to access it and looking at the mongo Shell instructions. For a Windows machine, the instructions want me to get to the command prompt and change dirs to the installation directory. The problem is, MongoDB is installed on my web server and not my local windows machine. Question: does Mongo Shell apply to me then? How do I start using, connecting and accessing Mongo from my Windows and Mac machines? [Note: I am a traditional MySQL / phpMyAdmin developer looking to advance to MongoDB] Amendments: (1) With the help of @AlexBlex I am progressing to trying to connect to my MongoDB on my server from Robomongo on my windows client. I get the following error when trying to setup my connection. I tried the address with just my server ip and with [URL] server ip}. Neither worked. See screen shot of error (2) This is what I have in my current mongod.conf file: #port=27017 bind_ip=127.0.0.1 (3) here is what my connection settings look like. Oddly, @AlexBlex's solution below shows an SSH tab on his Mac version. The Windows and Mac versions I just installed lacks that tab.
|
Accessing MongoDB from Windows & Mac Client Machines I have MongoDB 3.2 installed on my Linux Red Hat server. I am starting to access it and looking at the mongo Shell instructions. For a Windows machine, the instructions want me to get to the command prompt and change dirs to the installation directory. The problem is, MongoDB is installed on my web server and not my local windows machine. Question: does Mongo Shell apply to me then? How do I start using, connecting and accessing Mongo from my Windows and Mac machines? [Note: I am a traditional MySQL / phpMyAdmin developer looking to advance to MongoDB] Amendments: (1) With the help of @AlexBlex I am progressing to trying to connect to my MongoDB on my server from Robomongo on my windows client. I get the following error when trying to setup my connection. I tried the address with just my server ip and with [URL] server ip}. Neither worked. See screen shot of error (2) This is what I have in my current mongod.conf file: #port=27017 bind_ip=127.0.0.1 (3) here is what my connection settings look like. Oddly, @AlexBlex's solution below shows an SSH tab on his Mac version. The Windows and Mac versions I just installed lacks that tab.
|
linux, windows, macos, mongodb, redhat
| 2
| 4,794
| 3
|
https://stackoverflow.com/questions/35087821/accessing-mongodb-from-windows-mac-client-machines
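The mongo shell is only a client, so it runs fine from Windows or Mac as long as the server lets it in - and the bind_ip=127.0.0.1 line shown is precisely what keeps remote clients (Robomongo included) out. A hedged sketch of the server-side change; 203.0.113.10 is a placeholder for the server's own address:

    # /etc/mongod.conf - listen on loopback plus the LAN interface:
    #   bind_ip = 127.0.0.1,203.0.113.10
    sudo service mongod restart

    # Open the MongoDB port on older Red Hat (iptables):
    sudo iptables -I INPUT -p tcp --dport 27017 -j ACCEPT

    # From a client with the mongo shell installed:
    mongo --host 203.0.113.10 --port 27017

Be aware that exposing 27017 without authentication enabled is unsafe; turn on auth or tunnel the connection over SSH instead.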
|
34,955,707
|
Is there a better way to detect that my Perl script was called from "firstboot"?
|
I have a Perl script that needs to act in a particular way if it was invoked by the firstboot script or by a process that firstboot spawned. I have this routine handleFirstBoot and it seems to work ok, but there is probably a better way to write it. So please take a look ... sub handleFirstBoot { my $child_id = shift || $$; my $parent_id; foreach (`ps -ef`) { my ($uid,$pid,$ppid) = split; next unless ($pid eq $child_id); $parent_id = $ppid; last; } if ( $parent_id == 0 ) { debug "firstboot is NOT an ancestor.\n"; return; } my $psout = `ps -p $parent_id | tail -1 | sed -e's/^ //g' | sed -e's/ */ /g' | cut -d' ' -f4`; if ( $psout =~ /firstboot/ ) { debug "firstboot IS an ancestor. Set post option.\n"; $opt{'post'} = 1; return; } else { # recursive case handleFirstBoot($parent_id); } }
|
Is there a better way to detect that my Perl script was called from "firstboot"? I have a Perl script that needs to act in a particular way if it was invoked by the firstboot script or by a process that firstboot spawned. I have this routine handleFirstBoot and it seems to work ok, but there is probably a better way to write it. So please take a look ... sub handleFirstBoot { my $child_id = shift || $$; my $parent_id; foreach (`ps -ef`) { my ($uid,$pid,$ppid) = split; next unless ($pid eq $child_id); $parent_id = $ppid; last; } if ( $parent_id == 0 ) { debug "firstboot is NOT an ancestor.\n"; return; } my $psout = `ps -p $parent_id | tail -1 | sed -e's/^ //g' | sed -e's/ */ /g' | cut -d' ' -f4`; if ( $psout =~ /firstboot/ ) { debug "firstboot IS an ancestor. Set post option.\n"; $opt{'post'} = 1; return; } else { # recursive case handleFirstBoot($parent_id); } }
|
linux, perl, redhat
| 2
| 93
| 1
|
https://stackoverflow.com/questions/34955707/is-there-a-better-way-to-detect-that-my-perl-script-was-called-from-firstboot
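On Linux the whole ps/tail/sed/cut pipeline can be dropped in favour of reading /proc: /proc/<pid>/status carries both the process name (Name:) and the parent PID (PPid:), so the ancestry walk is a few file reads. A hedged shell sketch of the same logic (the Perl version would read the same files):

    # Walk up the process tree from the current PID looking for "firstboot".
    is_firstboot_ancestor() {
        local pid=$$ name
        while [ "$pid" -gt 1 ]; do
            name=$(awk '/^Name:/{print $2}' "/proc/$pid/status")
            [ "$name" = "firstboot" ] && return 0
            pid=$(awk '/^PPid:/{print $2}' "/proc/$pid/status")
        done
        return 1
    }

    is_firstboot_ancestor && echo "firstboot IS an ancestor. Set post option."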
|
34,509,500
|
Difference of two files in linux
|
I am able to get the difference of two files from the command line, but when I include the same line in test.sh it shows an error. Syntax: comm -2 -3 <(sort user_list.csv) <(sort full_user_list.csv) > uniq_list.csv . Error: syntax error near unexpected token (' test.sh: line 9: comm -2 -3 <(sort user_list.csv) <(sort ull_user_list.csv) > uniq_list.csv'
|
Difference of two files in linux I am able to get the difference of two files from the command line, but when I include the same line in test.sh it shows an error. Syntax: comm -2 -3 <(sort user_list.csv) <(sort full_user_list.csv) > uniq_list.csv . Error: syntax error near unexpected token (' test.sh: line 9: comm -2 -3 <(sort user_list.csv) <(sort ull_user_list.csv) > uniq_list.csv'
|
linux, redhat
| 2
| 132
| 1
|
https://stackoverflow.com/questions/34509500/difference-of-two-files-in-linux
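That error message is the signature of the script being interpreted by plain sh rather than bash: process substitution <( ... ) is a bash/ksh/zsh feature, and running sh test.sh (or using a #!/bin/sh shebang) rejects the ( token. A hedged sketch of both fixes:

    #!/bin/bash
    # Run as ./test.sh or "bash test.sh" - never "sh test.sh":
    comm -23 <(sort user_list.csv) <(sort full_user_list.csv) > uniq_list.csv

    # Portable alternative if the script must stay /bin/sh - sort to temp files:
    sort user_list.csv      > /tmp/a.$$
    sort full_user_list.csv > /tmp/b.$$
    comm -23 /tmp/a.$$ /tmp/b.$$ > uniq_list.csv
    rm -f /tmp/a.$$ /tmp/b.$$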
|
32,021,976
|
Remove an Openshift server from RHC
|
I'm very confused about this rhc client. I always read about cartridges and apps in tutorials over the interwebs, but what I see when I try to set up rhc is a server . I set up a server on this computer a few months ago, but I never used it. Now I created a new, personal account and I wanted to set up this computer to use rhc, but my last setup is stuck there. I tried to run rhc server remove <server-name> but it says the server is already in use and asks me to change servers. But I only have one. When I try to add a new server with the url openshift.redhat.com it says it is already set up. Of course. What do I have to do to remove the current installation and install a new server on this computer? And what exactly is an app and a cartridge in the OpenShift context? Is there a different way to upload stuff to OpenShift without configuring RHC?!
|
Remove an Openshift server from RHC I'm very confused about this rhc client. I always read about cartridges and apps in tutorials over the interwebs, but what I see when I try to set up rhc is a server . I set up a server on this computer a few months ago, but I never used it. Now I created a new, personal account and I wanted to set up this computer to use rhc, but my last setup is stuck there. I tried to run rhc server remove <server-name> but it says the server is already in use and asks me to change servers. But I only have one. When I try to add a new server with the url openshift.redhat.com it says it is already set up. Of course. What do I have to do to remove the current installation and install a new server on this computer? And what exactly is an app and a cartridge in the OpenShift context? Is there a different way to upload stuff to OpenShift without configuring RHC?!
|
git, ssh, openshift, redhat, ssh-keys
| 2
| 467
| 1
|
https://stackoverflow.com/questions/32021976/remove-an-openshift-server-from-rhc
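rhc keeps its client-side state in a plain config file, so when the server list gets wedged the blunt fix is to delete that file and run the setup wizard again. A hedged sketch (the path is rhc's usual one; confirm on your machine):

    rhc server list                  # see what the client currently knows
    cat ~/.openshift/express.conf    # the stored server/login settings

    rm ~/.openshift/express.conf     # wipe the stale configuration
    rhc setup                        # re-run the wizard with the new account

As for the vocabulary: in OpenShift v2 an app is a deployed application (a gear you push code to with git), and a cartridge is a pluggable component inside it, such as the PHP runtime or a MySQL database. You can also skip rhc entirely: create the app in the web console, take its Git URL, and push with plain git.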
|
28,538,879
|
Not all storage is available for Amazon EBS
|
I'm sure the problem appears because of my misunderstanding of EC2+EBS configuration, so the answer might be very simple. I've created a RedHat EC2 instance on Amazon AWS with 30GB of EBS storage. But lsblk shows me that only 6GB of the total 30 is available to me: xvda 202:0 0 30G 0 disk └─xvda1 202:1 0 6G 0 part / How can I mount all the remaining storage space on my instance? [UPDATE] command outputs: mount : /dev/xvda1 on / type ext4 (rw) proc on /proc type proc (rw) sysfs on /sys type sysfs (rw) devpts on /dev/pts type devpts (rw,gid=5,mode=620) tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0") none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw) sudo fdisk -l /dev/xvda : WARNING: GPT (GUID Partition Table) detected on '/dev/xvda'! The util fdisk doesn't support GPT. Use GNU Parted. Disk /dev/xvda: 32.2 GB, 32212254720 bytes 255 heads, 63 sectors/track, 3916 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Device Boot Start End Blocks Id System /dev/xvda1 1 1306 10485759+ ee GPT resize2fs /dev/xvda1 : resize2fs 1.41.12 (17-May-2010) The filesystem is already 1572864 blocks long. Nothing to do!
|
Not all storage is available for Amazon EBS I'm sure the problem appears because of my misunderstanding of EC2+EBS configuration, so the answer might be very simple. I've created a RedHat EC2 instance on Amazon AWS with 30GB of EBS storage. But lsblk shows me that only 6GB of the total 30 is available to me: xvda 202:0 0 30G 0 disk └─xvda1 202:1 0 6G 0 part / How can I mount all the remaining storage space on my instance? [UPDATE] command outputs: mount : /dev/xvda1 on / type ext4 (rw) proc on /proc type proc (rw) sysfs on /sys type sysfs (rw) devpts on /dev/pts type devpts (rw,gid=5,mode=620) tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0") none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw) sudo fdisk -l /dev/xvda : WARNING: GPT (GUID Partition Table) detected on '/dev/xvda'! The util fdisk doesn't support GPT. Use GNU Parted. Disk /dev/xvda: 32.2 GB, 32212254720 bytes 255 heads, 63 sectors/track, 3916 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Device Boot Start End Blocks Id System /dev/xvda1 1 1306 10485759+ ee GPT resize2fs /dev/xvda1 : resize2fs 1.41.12 (17-May-2010) The filesystem is already 1572864 blocks long. Nothing to do!
|
amazon-web-services, linux-device-driver, redhat, amazon-ebs
| 2
| 663
| 2
|
https://stackoverflow.com/questions/28538879/not-all-storage-is-available-for-amazon-ebs
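lsblk and fdisk agree on the diagnosis: the disk is 30G but the GPT partition holding / is smaller, this old fdisk cannot edit GPT, and resize2fs reports nothing to do because the filesystem already fills its partition. The partition must grow first; a hedged sketch (snapshot the volume before touching the partition table):

    # growpart from cloud-utils resizes the partition in one shot:
    sudo yum install cloud-utils-growpart    # package name may vary by repo/EPEL
    sudo growpart /dev/xvda 1                # grow partition 1 to fill the disk
    sudo reboot                              # let the kernel re-read the table
    sudo resize2fs /dev/xvda1                # then grow the ext4 filesystem

    # Alternative: use parted or gdisk to delete and recreate the partition at
    # the same start sector with a larger end, then run resize2fs.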
|
27,646,266
|
very high memory page in rates in the database server
|
Very high page-in rates observed on the database server; the server environment and observations are listed below: Server Environment: OS Release - Red Hat Enterprise Linux Server release 6.6 (Santiago) / System Info - Linux database.esewa.com.np 2.6.32-504.1.3.el6.x86_64 #1 SMP Fri Oct 31 11:37:10 EDT 2014 x86_64 x86_64 x86_64 GNU/Linux RAM - 32G (22G=MySQL, 2GB=Memcache and the rest is given to the OS) HW - 2 sockets - Intel(R) Xeon(R) CPU X5650 @ 2.67GHz Storage - 10K RPM disks with RAID 10 in disk bays Observation against MEMORY PAGE IN: Assuming that the PAGE IN value should be 0 or low, and that hitting greater than 25 indicates it is very high or memory is under pressure and may be a precursor to swapping. I have observed very high page-in rates on the server (maximum 180) but didn't see any process in the swap queue. Also noticed mostly 99% IO utilization though other metrics are normal (avg-cpu: %user(10.47) %nice(0.00) %system(0.63) %iowait(5.26) %steal(0.00) %idle(83.64)). Questions: Is the assumption reasonable in this context? Did we allocate too much memory to the application (i.e. 22G MySQL and 2GB Memcache)? Does anybody see an issue with the combination of MySQL and Memcache causing very high page-in rates (max 180)? Can hugepages help to address this issue? Is hitting device utilization (%util of iostat) close to 99% most of the time acceptable in this context? I'd appreciate it if anyone could provide a constructive and critical answer to the above questions. Thanks in advance.
|
very high memory page in rates in the database server Very high page-in rates observed on the database server; the server environment and observations are listed below: Server Environment: OS Release - Red Hat Enterprise Linux Server release 6.6 (Santiago) / System Info - Linux database.esewa.com.np 2.6.32-504.1.3.el6.x86_64 #1 SMP Fri Oct 31 11:37:10 EDT 2014 x86_64 x86_64 x86_64 GNU/Linux RAM - 32G (22G=MySQL, 2GB=Memcache and the rest is given to the OS) HW - 2 sockets - Intel(R) Xeon(R) CPU X5650 @ 2.67GHz Storage - 10K RPM disks with RAID 10 in disk bays Observation against MEMORY PAGE IN: Assuming that the PAGE IN value should be 0 or low, and that hitting greater than 25 indicates it is very high or memory is under pressure and may be a precursor to swapping. I have observed very high page-in rates on the server (maximum 180) but didn't see any process in the swap queue. Also noticed mostly 99% IO utilization though other metrics are normal (avg-cpu: %user(10.47) %nice(0.00) %system(0.63) %iowait(5.26) %steal(0.00) %idle(83.64)). Questions: Is the assumption reasonable in this context? Did we allocate too much memory to the application (i.e. 22G MySQL and 2GB Memcache)? Does anybody see an issue with the combination of MySQL and Memcache causing very high page-in rates (max 180)? Can hugepages help to address this issue? Is hitting device utilization (%util of iostat) close to 99% most of the time acceptable in this context? I'd appreciate it if anyone could provide a constructive and critical answer to the above questions. Thanks in advance.
|
mysql, linux, performance, redhat
| 2
| 1,626
| 1
|
https://stackoverflow.com/questions/27646266/very-high-memory-page-in-rates-in-the-database-server
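One clarification worth making before tuning anything: on Linux the page-in counters (pgpgin, the bi column in vmstat) include ordinary file reads, not just faults under memory pressure, so a database that misses its buffer pool and reads from disk will legitimately show high page-in together with ~99% %util. Genuine memory pressure shows in the swap counters instead. A hedged sketch of how to tell the two apart:

    # si/so are swap-in/swap-out (memory pressure); bi/bo are plain block I/O.
    # High bi with si=0 means disk-bound reads, not over-committed RAM:
    vmstat 5

    # The same split from sysstat:
    sar -B 5     # paging: pgpgin/s, pgpgout/s, faults
    sar -W 5     # swapping: pswpin/s, pswpout/s

    # Which device sits at ~99% utilization:
    iostat -dx 5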
|
27,623,681
|
To Automate MySQL dump
|
We wrote a simple script which backs up a MySQL dump and makes a zip of the dump file. Please find the script. #!/bin/sh now="$(date +'%d_%m_%Y_%H_%M_%S')" mysqldump -u Testuser -pTest123123## Testdbuser > mysqlnew.sql mv mysqlnew.* dbbackup-$now.sql zip -r dbbackup-$now.sql.zip dbbackup-$now.sql The above script takes the backup and renames the dump file, but it is not able to zip the dump file and gets an error while zipping. If I run the above zip command in bash it executes fine. Please find the error below. *.sql zip warning: name not matched: dbbackup-23_12_2014_15_29_40.sql) zip . -i dbbackup-23_12_2014_15_29_40ackup-23_12_2014_15_29_40
|
To Automate MySQL dump We wrote a simple script which backs up a MySQL dump and makes a zip of the dump file. Please find the script. #!/bin/sh now="$(date +'%d_%m_%Y_%H_%M_%S')" mysqldump -u Testuser -pTest123123## Testdbuser > mysqlnew.sql mv mysqlnew.* dbbackup-$now.sql zip -r dbbackup-$now.sql.zip dbbackup-$now.sql The above script takes the backup and renames the dump file, but it is not able to zip the dump file and gets an error while zipping. If I run the above zip command in bash it executes fine. Please find the error below. *.sql zip warning: name not matched: dbbackup-23_12_2014_15_29_40.sql) zip . -i dbbackup-23_12_2014_15_29_40ackup-23_12_2014_15_29_40
|
mysql, linux, linux-kernel, redhat
| 2
| 1,635
| 1
|
https://stackoverflow.com/questions/27623681/to-automate-mysql-dump
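The give-away is the mangled last line, where the filename overwrites its own beginning (dbbackup-...ackup-...): that is a carriage return being printed, which means the script file has Windows (CRLF) line endings. The \r ends up inside $now, mv creates a name containing it, and zip then looks for a name without it. A hedged sketch of the fix; backup.sh stands in for whatever the script is called:

    # Strip the carriage returns from the script file, then re-run it:
    dos2unix backup.sh               # or: sed -i 's/\r$//' backup.sh

    # Tidied script - dump straight to the final name and quote expansions:
    #!/bin/sh
    now="$(date +'%d_%m_%Y_%H_%M_%S')"
    mysqldump -u Testuser -p'Test123123##' Testdbuser > "dbbackup-$now.sql"
    zip "dbbackup-$now.sql.zip" "dbbackup-$now.sql"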
|
24,785,310
|
R include directory is empty
|
I have RedHat 6.5 (x86_64-redhat-linux-gnu) running R version 3.0.2 (2013-09-25) . As explained in this SO question , some packages install fine, while others produce the warning "R include directory is empty -- perhaps need to install R-devel.rpm or similar". When this warning appears, I also get make: gcc: Command not found and the package fails to compile. The answer is apparently to install the "development headers", but I am not sure what this means. The accepted answer does not explain it. I tried sudo yum install R-devel , but I get some errors related to dependencies. Error: Package: rstudio-0.95.265-1.x86_64 (@oit-el-6-x86_64/6.3) Requires: libRblas.so()(64bit) Removing: R-core-3.0.2-1.el6.x86_64 (@oit-stable-epel-x86_64-6) libRblas.so()(64bit) Updated By: R-core-3.1.0-5.el6.x86_64 (oit-testing-epel-x86_64-6) Not found ... Error: Package: rstudio-0.95.265-1.x86_64 (@oit-el-6-x86_64/6.3) Requires: libRlapack.so()(64bit) Removing: R-core-3.0.2-1.el6.x86_64 (@oit-stable-epel-x86_64-6) libRlapack.so()(64bit) Updated By: R-core-3.1.0-5.el6.x86_64 (oit-testing-epel-x86_64-6) Not found ... I'm not sure what this means. New to Linux.
|
R include directory is empty I have RedHat 6.5 (x86_64-redhat-linux-gnu) running R version 3.0.2 (2013-09-25) . As explained in this SO question , some packages install fine, while others produce the warning "R include directory is empty -- perhaps need to install R-devel.rpm or similar". When this warning appears, I also get make: gcc: Command not found and the package fails to compile. The answer is apparently to install the "development headers", but I am not sure what this means. The accepted answer does not explain it. I tried sudo yum install R-devel , but I get some errors related to dependencies. Error: Package: rstudio-0.95.265-1.x86_64 (@oit-el-6-x86_64/6.3) Requires: libRblas.so()(64bit) Removing: R-core-3.0.2-1.el6.x86_64 (@oit-stable-epel-x86_64-6) libRblas.so()(64bit) Updated By: R-core-3.1.0-5.el6.x86_64 (oit-testing-epel-x86_64-6) Not found ... Error: Package: rstudio-0.95.265-1.x86_64 (@oit-el-6-x86_64/6.3) Requires: libRlapack.so()(64bit) Removing: R-core-3.0.2-1.el6.x86_64 (@oit-stable-epel-x86_64-6) libRlapack.so()(64bit) Updated By: R-core-3.1.0-5.el6.x86_64 (oit-testing-epel-x86_64-6) Not found ... I'm not sure what this means. New to Linux.
|
linux, r, redhat
| 2
| 4,468
| 2
|
https://stackoverflow.com/questions/24785310/r-include-directory-is-empty
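The "development headers" simply means the R-devel package, which installs R's C headers (the contents of the empty include directory) and pulls in gcc; the yum errors show why it won't go on: the locally packaged rstudio RPM pins R-core at 3.0.2 while R-devel wants to lift R-core to 3.1.0. A hedged sketch of the usual way around the deadlock:

    yum list installed 'R-*' rstudio      # confirm which versions are in play

    sudo yum remove rstudio               # drop the package pinning old R-core
    sudo yum install R R-devel            # headers and core now move together
    sudo yum install rstudio              # reinstall; or use rstudio.com's RPM

    # Alternative: pin R-devel to the installed R-core version instead:
    sudo yum install R-devel-3.0.2        # hypothetical pin; match yum list output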
|
24,697,965
|
Configuration Error
|
Im Trying to Run Terracotta server and Below is the Configuration file. <?xml version="1.0" encoding="UTF-8" ?> <tc:tc-config xmlns:tc="[URL] xmlns:xsi="[URL] xsi:schemaLocation="[URL] <servers> <server host="localhost" name="master"> <!-- Specify the path where the server should store its data. --> <data>/x01/terracotta/masterServerData</data> <!-- Specify the port where the server should listen for client traffic. --> <tsa-port>9510</tsa-port> <jmx-port>9520</jmx-port> <tsa-group-port>9530</tsa-group-port> <!-- Enable BigMemory on the server. --> <dataStorage size="4g"> <offheap size="2g"/> <!-- Hybrid storage is optional. --> <hybrid/> </dataStorage> </server> <!-- Add the restartable element for Fast Restartability (optional). --> <restartable enabled="true"/> </servers> <clients> <logs>logs-%i</logs> </clients> </tc:tc-config> Then I got Following Error. Fatal Terracotta startup exception: ******************************************************************************* The configuration data in the base configuration from file at '/x01/terracotta/terracotta-3.7.7/tc-config.xml' does not obey the Terracotta schema: [0]: Line 11, column 12: Expected elements 'authentication http-authentication index logs data-backup statistics dso-port jmx-port l2-group-port dso security' instead of 'tsa-port' here in element server [1]: Line 13, column 12: Expected elements 'authentication http-authentication index logs data-backup statistics dso-port l2-group-port dso security' instead of 'tsa-group-port' here in element server [2]: Line 15, column 12: Expected elements 'authentication http-authentication index logs data-backup statistics dso-port l2-group-port dso security' instead of 'dataStorage' here in element server [3]: Line 22, column 9: Expected elements 'server mirror-groups ha update-check' instead of 'restartable' here in element servers ******************************************************************************* Can Anyone Please help me with this.
|
Configuration Error Im Trying to Run Terracotta server and Below is the Configuration file. <?xml version="1.0" encoding="UTF-8" ?> <tc:tc-config xmlns:tc="[URL] xmlns:xsi="[URL] xsi:schemaLocation="[URL] <servers> <server host="localhost" name="master"> <!-- Specify the path where the server should store its data. --> <data>/x01/terracotta/masterServerData</data> <!-- Specify the port where the server should listen for client traffic. --> <tsa-port>9510</tsa-port> <jmx-port>9520</jmx-port> <tsa-group-port>9530</tsa-group-port> <!-- Enable BigMemory on the server. --> <dataStorage size="4g"> <offheap size="2g"/> <!-- Hybrid storage is optional. --> <hybrid/> </dataStorage> </server> <!-- Add the restartable element for Fast Restartability (optional). --> <restartable enabled="true"/> </servers> <clients> <logs>logs-%i</logs> </clients> </tc:tc-config> Then I got Following Error. Fatal Terracotta startup exception: ******************************************************************************* The configuration data in the base configuration from file at '/x01/terracotta/terracotta-3.7.7/tc-config.xml' does not obey the Terracotta schema: [0]: Line 11, column 12: Expected elements 'authentication http-authentication index logs data-backup statistics dso-port jmx-port l2-group-port dso security' instead of 'tsa-port' here in element server [1]: Line 13, column 12: Expected elements 'authentication http-authentication index logs data-backup statistics dso-port l2-group-port dso security' instead of 'tsa-group-port' here in element server [2]: Line 15, column 12: Expected elements 'authentication http-authentication index logs data-backup statistics dso-port l2-group-port dso security' instead of 'dataStorage' here in element server [3]: Line 22, column 9: Expected elements 'server mirror-groups ha update-check' instead of 'restartable' here in element servers ******************************************************************************* Can Anyone Please help me with this.
|
redhat, terracotta
| 2
| 398
| 1
|
https://stackoverflow.com/questions/24697965/configuration-error
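The schema complaints decode to a version mismatch: tsa-port, tsa-group-port, dataStorage and restartable belong to the BigMemory/Terracotta 4.x configuration schema, while the kit being started is 3.7.7, whose schema expects dso-port and l2-group-port (as the error's own "Expected elements" lists show) and has no off-heap dataStorage or Fast Restartability at all. Either run the file against a 4.x kit, or translate it back to 3.7 vocabulary; a hedged sketch of the latter:

    cfg=/x01/terracotta/terracotta-3.7.7/tc-config.xml
    cp "$cfg" "$cfg.bak"

    # Rename the 4.x port elements to their 3.7 equivalents:
    sed -i -e 's/tsa-port>/dso-port>/g' -e 's/tsa-group-port>/l2-group-port>/g' "$cfg"

    # Remove the 4.x-only features (no 3.7 equivalent exists):
    sed -i -e '/<dataStorage/,/<\/dataStorage>/d' -e '/<restartable/d' "$cfg"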
|
20,665,478
|
Can't compile when including ucontext.h
|
Using gcc, I get these errors when compiling something that makes use of ucontext.h /usr/include/sys/ucontext.h: At top level: /usr/include/sys/ucontext.h:138: error: expected identifier or ‘(’ before numeric constant /usr/include/sys/ucontext.h:139: error: expected ‘;’ before ‘stack_t’ Looking at ucontext.h, this is what seems to cause it: 134 /* Userlevel context. */ 135 typedef struct ucontext 136 { 137 unsigned long int uc_flags; 138 struct ucontext *uc_link; 139 stack_t uc_stack; 140 mcontext_t uc_mcontext; 141 __sigset_t uc_sigmask; 142 struct _libc_fpstate __fpregs_mem; 143 } ucontext_t; How could lines 138 and 139 raise these errors? I don't know what to do since this is a standard sys header.
|
Can't compile when including ucontext.h Using gcc, I get these errors when compiling something that makes use of ucontext.h /usr/include/sys/ucontext.h: At top level: /usr/include/sys/ucontext.h:138: error: expected identifier or ‘(’ before numeric constant /usr/include/sys/ucontext.h:139: error: expected ‘;’ before ‘stack_t’ Looking at ucontext.h, this is what seems to cause it: 134 /* Userlevel context. */ 135 typedef struct ucontext 136 { 137 unsigned long int uc_flags; 138 struct ucontext *uc_link; 139 stack_t uc_stack; 140 mcontext_t uc_mcontext; 141 __sigset_t uc_sigmask; 142 struct _libc_fpstate __fpregs_mem; 143 } ucontext_t; How could lines 138 and 139 raise these errors? I don't know what to do since this is a standard sys header.
|
c, linux, redhat
| 2
| 2,237
| 1
|
https://stackoverflow.com/questions/20665478/cant-compile-when-including-ucontext-h
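Standard headers only break like this when something earlier in the translation unit redefined an identifier they use - here the numeric-constant error at the uc_link line strongly suggests a macro in the including code (or another header) expands uc_link or stack_t to a number. The preprocessor can point at the culprit; a hedged sketch, where myprog.c stands in for the failing source file:

    # Look at what the struct actually expands to after preprocessing:
    gcc -E myprog.c | grep -n -A 3 'struct ucontext'

    # Or dump every macro definition seen during the compile and hunt collisions:
    gcc -dD -E myprog.c | grep -nE '# *define (uc_link|uc_stack|stack_t)'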
|
20,054,960
|
Hold Value to a variable inside the linux bash do loop
|
I need to define a global variable for further reference. This is my code: # .bashrc LOCAL_CONF_DIR='/var/www/vhosts/vhost/test.conf' cat ${LOCAL_CONF_DIR} | while read LINE do if [ "ServerName" == "${LINE:0:10}" ]; then s=( $LINE ) SERVER_NAME=$s[1]; fi done echo $SERVER_NAME It doesn't work for me. RHEL 6.0. Many thanks ;)
|
Hold Value to a variable inside the linux bash do loop I need to define a global variable for further reference. This is my code: # .bashrc LOCAL_CONF_DIR='/var/www/vhosts/vhost/test.conf' cat ${LOCAL_CONF_DIR} | while read LINE do if [ "ServerName" == "${LINE:0:10}" ]; then s=( $LINE ) SERVER_NAME=$s[1]; fi done echo $SERVER_NAME It doesn't work for me. RHEL 6.0. Many thanks ;)
|
linux, bash, redhat
| 2
| 291
| 1
|
https://stackoverflow.com/questions/20054960/hold-value-to-a-variable-inside-the-linux-bash-do-loop
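Two independent bugs stop this from working: cat | while runs the loop in a subshell, so SERVER_NAME vanishes when the pipe ends, and $s[1] expands the first array element plus a literal [1] - the second element needs ${s[1]}. A hedged sketch using input redirection so the loop runs in the current shell:

    LOCAL_CONF_DIR='/var/www/vhosts/vhost/test.conf'

    while read -r LINE; do
        if [ "ServerName" == "${LINE:0:10}" ]; then
            s=( $LINE )             # split the line on whitespace
            SERVER_NAME=${s[1]}     # second word - note the braces
        fi
    done < "$LOCAL_CONF_DIR"        # redirection instead of a pipe: no subshell

    echo "$SERVER_NAME"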
|
18,059,218
|
Vnc viewer:The connection closed unexpectedly
|
Vnc server is running on RHEL and I'm trying to access it from Windows-XP using vnc viewer. When I try to connect to it using ip-address:2 , I can connect to it. However when I try to connect it using ip-address:4 , I'm getting following message: The connection closed unexpectedly. Do you wish to attempt to reconnect to ip-address:4. Can anybody please help me to resolve above issue?
|
Vnc viewer:The connection closed unexpectedly Vnc server is running on RHEL and I'm trying to access it from Windows-XP using vnc viewer. When I try to connect to it using ip-address:2 , I can connect to it. However when I try to connect it using ip-address:4 , I'm getting following message: The connection closed unexpectedly. Do you wish to attempt to reconnect to ip-address:4. Can anybody please help me to resolve above issue?
|
unix, redhat, vnc, windows-firewall
| 2
| 30,856
| 1
|
https://stackoverflow.com/questions/18059218/vnc-viewerthe-connection-closed-unexpectedly
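A VNC display number maps to TCP port 5900 plus the number, so :2 working while :4 does not usually means either no Xvnc instance is serving :4 or port 5904 is blocked. A hedged sketch of the server-side checks on the RHEL box:

    # Is anything listening for displays :2 and :4?
    netstat -tlnp | grep -E ':(5902|5904)'

    # If :4 is absent, start it (or add it to /etc/sysconfig/vncservers):
    vncserver :4

    # Open the port if the firewall is dropping it:
    sudo iptables -I INPUT -p tcp --dport 5904 -j ACCEPT
    sudo service iptables save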
|
17,714,569
|
C++11 on Cloud9 IDE
|
When I run g++ --version on in my Cloud9 terminal I get g++ (GCC) 4.4.7 20120313 (Red Hat 4.4.7-3) . This is a fairly old version - old enough that when I try to use C++11 library features like std::unordered_set , I get: "This file requires compiler and library support for the upcoming ISO C++ standard, C++0x. This support is currently experimental, and must be enabled with the -std=c++0x or -std=gnu++0x compiler options." I'm not really okay with this, because I don't like having to worry about what features I'm allowed to use and which ones I need to avoid. So I went looking around for how to update g++ to the latest stable version (which seems to be 4.8.1 as of this writing), but I can't figure out how to do it. I tried apt-get , but I just got an error: "Sorry, apt-get is not supported on this system. Try c9pm instead." . Well I tried that, but c9pm list (which is supposed to "List available packages" ) doesn't show anything that looks like g++. So I'm lost. How do I install g++ 4.8.1 on Cloud9? When I run lsb_release -a I see that Cloud9 IDE currently runs on "Red Hat Enterprise Linux Server release 6.4 (Santiago)" .
|
C++11 on Cloud9 IDE When I run g++ --version on in my Cloud9 terminal I get g++ (GCC) 4.4.7 20120313 (Red Hat 4.4.7-3) . This is a fairly old version - old enough that when I try to use C++11 library features like std::unordered_set , I get: "This file requires compiler and library support for the upcoming ISO C++ standard, C++0x. This support is currently experimental, and must be enabled with the -std=c++0x or -std=gnu++0x compiler options." I'm not really okay with this, because I don't like having to worry about what features I'm allowed to use and which ones I need to avoid. So I went looking around for how to update g++ to the latest stable version (which seems to be 4.8.1 as of this writing), but I can't figure out how to do it. I tried apt-get , but I just got an error: "Sorry, apt-get is not supported on this system. Try c9pm instead." . Well I tried that, but c9pm list (which is supposed to "List available packages" ) doesn't show anything that looks like g++. So I'm lost. How do I install g++ 4.8.1 on Cloud9? When I run lsb_release -a I see that Cloud9 IDE currently runs on "Red Hat Enterprise Linux Server release 6.4 (Santiago)" .
|
c++11, g++, redhat, cloud9-ide, g++4.8
| 2
| 2,213
| 3
|
https://stackoverflow.com/questions/17714569/c11-on-cloud9-ide
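Without root or a working package manager, the realistic options are to enable the C++0x mode that GCC 4.4 already has, or to build a newer GCC under $HOME. A hedged sketch of both; the prefix is illustrative and the source build takes a long while:

    # Quick path - GCC 4.4 supports std::unordered_set and friends behind a flag:
    g++ -std=c++0x -o prog prog.cpp

    # Long path - user-local GCC 4.8.1 build, no root required:
    wget http://ftp.gnu.org/gnu/gcc/gcc-4.8.1/gcc-4.8.1.tar.bz2
    tar xjf gcc-4.8.1.tar.bz2 && cd gcc-4.8.1
    ./contrib/download_prerequisites          # fetch gmp/mpfr/mpc in-tree
    mkdir build && cd build
    ../configure --prefix="$HOME/gcc-4.8.1" --enable-languages=c,c++ --disable-multilib
    make -j2 && make install
    export PATH="$HOME/gcc-4.8.1/bin:$PATH"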
|
16,334,915
|
Cannot compile gearman - configure script fails
|
My system is Red Hat Enterprise Linux Server release 5.7 (Tikanga). I am trying to run the configure script , and I am getting the following error: checking for the toolset name used by Boost for g++... gcc41 -gcc configure: Detected BOOST_ROOT; continuing with --with-boost=/raid/users/andrey/3rdParty/boost_1_47/ checking for Boost headers version >= 1.39.0... /users/andrey/3rdParty/boost_1_47/ checking for Boost's header version... 1_47 checking boost/program_options.hpp usability... no checking boost/program_options.hpp presence... no checking for boost/program_options.hpp... no configure: error: cannot find boost/program_options.hpp The documentation of configure says that boost is an optional package. So I tried to build it without boost: configure -with-boost=no This does not run as well and returns the following error: checking for assert... no checking for the toolset name used by Boost for g++... gcc41 -gcc configure: Detected BOOST_ROOT=/users/andrey/3rdParty/boost_1_47/, but overridden by --with-boost=no checking for Boost headers version >= 1.39.0... no I've seen this question already, but it does not seem to help me. Any idea?
|
Cannot compile gearman - configure script fails My system is Red Hat Enterprise Linux Server release 5.7 (Tikanga). I am trying to run the configure script , and I am getting the following error: checking for the toolset name used by Boost for g++... gcc41 -gcc configure: Detected BOOST_ROOT; continuing with --with-boost=/raid/users/andrey/3rdParty/boost_1_47/ checking for Boost headers version >= 1.39.0... /users/andrey/3rdParty/boost_1_47/ checking for Boost's header version... 1_47 checking boost/program_options.hpp usability... no checking boost/program_options.hpp presence... no checking for boost/program_options.hpp... no configure: error: cannot find boost/program_options.hpp The documentation of configure says that boost is an optional package. So I tried to build it without boost: configure -with-boost=no This does not work either and returns the following error: checking for assert... no checking for the toolset name used by Boost for g++... gcc41 -gcc configure: Detected BOOST_ROOT=/users/andrey/3rdParty/boost_1_47/, but overridden by --with-boost=no checking for Boost headers version >= 1.39.0... no I've seen this question already, but it does not seem to help me. Any idea?
|
compilation, redhat, configure, gearman
| 2
| 3,467
| 4
|
https://stackoverflow.com/questions/16334915/cannot-compile-gearman-configure-script-fails
|
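Note that the configure log mixes two Boost locations (/raid/users/... and /users/...), so the first thing to confirm is which directory actually contains boost/program_options.hpp. A sketch of pointing configure at one consistent tree; the stage/lib path is Boost's default build output and is an assumption here:

    export BOOST_ROOT=/raid/users/andrey/3rdParty/boost_1_47
    ./configure --with-boost=$BOOST_ROOT \
                CPPFLAGS="-I$BOOST_ROOT" \
                LDFLAGS="-L$BOOST_ROOT/stage/lib"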
15,280,393
|
how to modify standalone-full-ha.xml in jboss
|
I am modifying the standalone-full-ha.xml file present in standalone/configuration directory in JBoss EAP 6.0.1, but when I am restarting my application server, the standalone-full-ha.xml changes are getting reverted back to the previous state. What is the procedure for modifying the standalone-full-ha.xml file? Do I need to change any other configuration before modifying this file? Please advise. Thanks in advance.
|
how to modify standalone-full-ha.xml in jboss I am modifying the standalone-full-ha.xml file present in standalone/configuration directory in JBoss EAP 6.0.1, but when I am restarting my application server, the standalone-full-ha.xml changes are getting reverted back to the previous state. What is the procedure for modifying the standalone-full-ha.xml file? Do I need to change any other configuration before modifying this file? Please advise. Thanks in advance.
|
java, xml, jboss7.x, redhat
| 2
| 4,101
| 1
|
https://stackoverflow.com/questions/15280393/how-to-modify-standalone-full-ha-xml-in-jboss
|
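The overwrite happens because JBoss EAP keeps the configuration model in memory while running and writes it back to the XML on shutdown, so hand edits made while the server is up are lost. A sketch of the two usual approaches (9999 is the EAP 6 native management port):

    # 1) stop the server, edit the file, then start with that profile:
    $JBOSS_HOME/bin/standalone.sh -c standalone-full-ha.xml

    # 2) or make the change through the management CLI so it persists:
    $JBOSS_HOME/bin/jboss-cli.sh --connect controller=localhost:9999
    /system-property=myprop:add(value=myvalue)    # example CLI operation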
14,970,273
|
FIFO to grep to file
|
I am using a named pipe to trap syslog messages. I can then easily view syslog by doing something like cat /var/log/local3.pipe | grep somefilter or grep somefilter /var/log/local3.pipe These both output the syslogs to the console very nicely. However, if I then want to capture that to a file I get nothing, e.g. cat /var/log/local3.pipe | grep somefilter >> somefile.log or grep somefilter /var/log/local3.pipe >> somefile.log The file always remains as zero bytes. Does anyone know why? I'm using Red Hat Enterprise Linux 5. Thanks. Additional info: For anyone who wants to reproduce this here's the full list of commands su <enter root password> mkfifo /var/log/local3.pipe chmod 644 /var/log/local3.pipe echo "local3.* |/var/log/local3.pipe" >> /etc/syslog.conf /etc/init.d/syslog restart exit then with one ssh session: cat /var/log/local3.pipe and in a second ssh session ("Test it" should show in the first ssh session): logger -p local3.info "Test it" then in the first session change it to cat /var/log/local3.pipe >> somefile.log send some more logs to local 3 (message needs to be different). Confirm that messages are going into somefile.log logger -p local3.info "Test it 2" then in the first session change it to cat /var/log/local3.pipe | grep -i test >> somefile.log now confirm that logs are not going to somefile.log Note that the message needs to be different from the last message otherwise the logger doesn't send it immediately.
|
FIFO to grep to file I am using a named pipe to trap syslog messages. I can then easily view syslog by doing something like cat /var/log/local3.pipe | grep somefilter or grep somefilter /var/log/local3.pipe These both output the syslogs to the console very nicely. However, if I then want to capture that to a file I get nothing, e.g. cat /var/log/local3.pipe | grep somefilter >> somefile.log or grep somefilter /var/log/local3.pipe >> somefile.log The file always remains as zero bytes. Does anyone know why? I'm using Red Hat Enterprise Linux 5. Thanks. Additional info: For anyone who wants to reproduce this here's the full list of commands su <enter root password> mkfifo /var/log/local3.pipe chmod 644 /var/log/local3.pipe echo "local3.* |/var/log/local3.pipe" >> /etc/syslog.conf /etc/init.d/syslog restart exit then with one ssh session: cat /var/log/local3.pipe and in a second ssh session ("Test it" should show in the first ssh session): logger -p local3.info "Test it" then in the first session change it to cat /var/log/local3.pipe >> somefile.log send some more logs to local 3 (message needs to be different). Confirm that messages are going into somefile.log logger -p local3.info "Test it 2" then in the first session change it to cat /var/log/local3.pipe | grep -i test >> somefile.log now confirm that logs are not going to somefile.log Note that the message needs to be different from the last message otherwise the logger doesn't send it immediately.
|
linux, shell, redhat, fifo, rhel5
| 2
| 1,599
| 1
|
https://stackoverflow.com/questions/14970273/fifo-to-grep-to-file
|
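A likely explanation is output buffering: grep switches from line buffering to block buffering when stdout is not a terminal, so nothing reaches the file until several kilobytes accumulate. A sketch of forcing line buffering with GNU grep (the generic alternative, stdbuf, is not in RHEL 5's coreutils):

    grep --line-buffered -i test /var/log/local3.pipe >> somefile.log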
14,847,887
|
yum command not working using Red Hat
|
I want to install Java using the yum command (Red Hat Enterprise Linux Version 5) but an error arises: $ yum install java-1.6.0-openjdk Loading "rhnplugin" plugin Loading "security" plugin Loading "installonlyn" plugin **This system is not registered with RHN.** RHN support will be disabled. Please help, I'm new to Linux.
|
yum command not working using Red Hat I want to install Java using the yum command (Red Hat Enterprise Linux Version 5) but an error arises: $ yum install java-1.6.0-openjdk Loading "rhnplugin" plugin Loading "security" plugin Loading "installonlyn" plugin **This system is not registered with RHN.** RHN support will be disabled. Please help, I'm new to Linux.
|
java, redhat
| 2
| 7,449
| 1
|
https://stackoverflow.com/questions/14847887/yum-command-not-working-using-red-hat
|
13,568,970
|
robots.txt - disallow page without querystring
|
I have a page that serves up dynamic content, /for-sale . The page should always have at least one parameter, /for-sale?id=1 . I'd like to disallow /for-sale but allow /for-sale?id=* without affecting the bot's ability to crawl the site or negatively affecting SERPs. Is this possible?
|
robots.txt - disallow page without querystring I have a page that serves up dynamic content, /for-sale . The page should always have at least one parameter, /for-sale?id=1 . I'd like to disallow /for-sale but allow /for-sale?id=* without affecting the bot's ability to crawl the site or negatively affecting SERPs. Is this possible?
|
linux, seo, lamp, redhat, robots.txt
| 2
| 442
| 2
|
https://stackoverflow.com/questions/13568970/robots-txt-disallow-page-without-querystring
|
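Standard robots.txt has no querystring-aware syntax, but the major crawlers (Google, Bing) honour the non-standard Allow directive and the $ end-of-URL anchor and apply the most specific matching rule. A sketch under that assumption:

    User-agent: *
    Allow: /for-sale?id=
    Disallow: /for-sale$

Crawlers that ignore these extensions will treat Disallow: /for-sale$ as matching nothing, which fails open rather than blocking the parameterised URLs.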
11,485,964
|
Compiling C++ Program with MYSQL.h in Linux
|
I'm following the example from here and my code is identical. When I type mysql_config --libs and mysql_config --cflags into the console as he explains, I get the same output as he shows. Yet, when I try to compile using g++ -o output-file $(mysql_config --cflags) test.cpp $(mysql_config --libs) I get the errors: test.cpp:3:25: error: mysql.h: No such file or directory test.cpp: In function 'int main()': test.cpp:6: error: 'MYSQL' was not declared in this scope test.cpp:6: error: 'conn' was not declared in this scope test.cpp:7: error: 'MYSQL_RES' was not declared in this scope test.cpp:7: error: 'res' was not declared in this scope test.cpp:8: error: 'MYSQL_ROW' was not declared in this scope test.cpp:8: error: expected `;' before 'row' test.cpp:13: error: 'mysql_init' was not declared in this scope test.cpp:17: error: 'mysql_real_connect' was not declared in this scope test.cpp:18: error: 'mysql_error' was not declared in this scope test.cpp:19: error: 'exit' was not declared in this scope test.cpp:22: error: 'mysql_query' was not declared in this scope test.cpp:23: error: 'mysql_error' was not declared in this scope test.cpp:24: error: 'exit' was not declared in this scope test.cpp:27: error: 'mysql_use_result' was not declared in this scope test.cpp:31: error: 'row' was not declared in this scope test.cpp:31: error: 'mysql_fetch_row' was not declared in this scope test.cpp:35: error: 'mysql_free_result' was not declared in this scope test.cpp:36: error: 'mysql_close' was not declared in this scope When I try 'whereis mysql' it shows /usr/bin/mysql, /usr/lib/mysql and /usr/share/mysql, but I'm not sure where mysql.h is located exactly. The admin of the server I'm working on said he installed MySQL and I can indeed create/manipulate tables using phpMyAdmin. Also, please give me suggestions about this particular problem. I'm aware of C++ wrappers for MySQL but I'm trying to just use the C API for now. Thanks!
|
Compiling C++ Program with MYSQL.h in Linux I'm following the example from here and my code is identical. When I type mysql_config --libs and mysql_config --cflags into the console as he explains, I get the same output as he shows. Yet, when I try to compile using g++ -o output-file $(mysql_config --cflags) test.cpp $(mysql_config --libs) I get the errors: test.cpp:3:25: error: mysql.h: No such file or directory test.cpp: In function 'int main()': test.cpp:6: error: 'MYSQL' was not declared in this scope test.cpp:6: error: 'conn' was not declared in this scope test.cpp:7: error: 'MYSQL_RES' was not declared in this scope test.cpp:7: error: 'res' was not declared in this scope test.cpp:8: error: 'MYSQL_ROW' was not declared in this scope test.cpp:8: error: expected `;' before 'row' test.cpp:13: error: 'mysql_init' was not declared in this scope test.cpp:17: error: 'mysql_real_connect' was not declared in this scope test.cpp:18: error: 'mysql_error' was not declared in this scope test.cpp:19: error: 'exit' was not declared in this scope test.cpp:22: error: 'mysql_query' was not declared in this scope test.cpp:23: error: 'mysql_error' was not declared in this scope test.cpp:24: error: 'exit' was not declared in this scope test.cpp:27: error: 'mysql_use_result' was not declared in this scope test.cpp:31: error: 'row' was not declared in this scope test.cpp:31: error: 'mysql_fetch_row' was not declared in this scope test.cpp:35: error: 'mysql_free_result' was not declared in this scope test.cpp:36: error: 'mysql_close' was not declared in this scope When I try 'whereis mysql' it shows /usr/bin/mysql, /usr/lib/mysql and /usr/share/mysql, but I'm not sure where mysql.h is located exactly. The admin of the server I'm working on said he installed MySQL and I can indeed create/manipulate tables using phpMyAdmin. Also, please give me suggestions about this particular problem. I'm aware of C++ wrappers for MySQL but I'm trying to just use the C API for now. Thanks!
|
c++, mysql, compiler-errors, redhat
| 2
| 3,144
| 2
|
https://stackoverflow.com/questions/11485964/compiling-c-program-with-mysql-h-in-linux
|
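The header lives in the MySQL development package rather than the client or server packages. A sketch for Red Hat family systems (the package name varies slightly by release, e.g. mysql-devel vs mysql-community-devel):

    sudo yum install mysql-devel
    mysql_config --cflags    # should now print an -I path that contains mysql.h
    g++ -o output-file $(mysql_config --cflags) test.cpp $(mysql_config --libs)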
10,435,954
|
How can I trigger a 'yum clean all' from within a yum plugin?
|
I'm writing a yum plugin that updates the URLs of local repos. When the repo URL changes, I'd like to have yum run a yum clean all to make sure no out-of-date information is cached. I know yum has a hook for running code when yum clean [plugins|all] is requested but is it possible to trigger a clean all from within one of the plugin's other hook functions?
|
How can I trigger a 'yum clean all' from within a yum plugin? I'm writing a yum plugin that updates the URLs of local repos. When the repo URL changes, I'd like to have yum run a yum clean all to make sure no out-of-date information is cached. I know yum has a hook for running code when yum clean [plugins|all] is requested but is it possible to trigger a clean all from within one of the plugin's other hook functions?
|
centos, fedora, redhat, rpm, yum
| 2
| 861
| 1
|
https://stackoverflow.com/questions/10435954/how-can-i-trigger-a-yum-clean-all-from-within-a-yum-plugin
|
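One approach is to call the clean-up methods on the underlying YumBase object from whichever hook fires for you. The sketch below uses the conduit's private _base attribute and method names from classic yum; both are assumptions to verify against your yum version:

    # /usr/lib/yum-plugins/myrepo.py  (hypothetical plugin file)
    from yum.plugins import TYPE_CORE

    requires_api_version = '2.3'
    plugin_type = (TYPE_CORE,)

    def prereposetup_hook(conduit):
        if repo_url_changed():            # hypothetical detection helper
            base = conduit._base          # the YumBase instance
            base.cleanMetadata()
            base.cleanSqlite()
            base.cleanExpireCache()
            conduit.info(2, 'repo URL changed; cleaned cached metadata')

    def repo_url_changed():
        return False                      # placeholder for real detection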
8,120,039
|
How do I build RPMS?
|
How do I build RPMS under Red Hat? I need to package a newer version of some software than is available from the repositories. (I can build it locally already, it's just the packaging that I need to do, so that I can use it on other machines) I could just take the .spec file from the older version's SRPM and start from there, right? - But I'm brand new to packaging, any pointers?
|
How do I build RPMS? How do I build RPMS under Red Hat? I need to package a newer version of some software than is available from the repositories. (I can build it locally already, it's just the packaging that I need to do, so that I can use it on other machines) I could just take the .spec file from the older version's SRPM and start from there, right? - But I'm brand new to packaging, any pointers?
|
build, repository, packaging, redhat, rpm
| 2
| 148
| 1
|
https://stackoverflow.com/questions/8120039/how-do-i-build-rpms
|
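A sketch of the standard non-root workflow; the spec and tarball names here are placeholders:

    sudo yum install rpm-build rpmdevtools
    rpmdev-setuptree                          # creates ~/rpmbuild/{SPECS,SOURCES,...}
    cp mypackage.spec ~/rpmbuild/SPECS/
    cp mypackage-2.0.tar.gz ~/rpmbuild/SOURCES/
    rpmbuild -ba ~/rpmbuild/SPECS/mypackage.spec
    # binary RPMs land in ~/rpmbuild/RPMS/, the source RPM in ~/rpmbuild/SRPMS/

Starting from the old version's spec is indeed the usual route: rpm -ivh the older SRPM to unpack its spec and sources into ~/rpmbuild, then bump Version/Release and swap in the new Source0 tarball.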
4,491,627
|
How do I tweak CSS on the Django 1.2.3 admin media?
|
I would like to make CSS-only adjustments to the admin interface (on an RHEL box I don't have sysadmin privileges to). To that end, I would like a local version of /media/ to tweak. [URL] (but not [URL] ) suggests running a manage.py collectstatic or manage.py findstatic, and my Django 1.2.3 manage.py does not recognize those commands. Adding 'django.contrib.staticfiles' to my INSTALLED_APPS also broke things (not found). I would like to customize the CSS, and the way I envision doing that is by getting a private copy of the media for Django's admin and changing from there. What are my best options for a Django 1.2.3 installation?
|
How do I tweak CSS on the Django 1.2.3 admin media? I would like to make CSS-only adjustments to the admin interface (on an RHEL box I don't have sysadmin privileges to). To that end, I would like a local version of /media/ to tweak. [URL] (but not [URL] ) suggests running a manage.py collectstatic or manage.py findstatic, and my Django 1.2.3 manage.py does not recognize those commands. Adding 'django.contrib.staticfiles' to my INSTALLED_APPS also broke things (not found). I would like to customize the CSS, and the way I envision doing that is by getting a private copy of the media for Django's admin and changing from there. What are my best options for a Django 1.2.3 installation?
|
django, django-admin, customization, redhat
| 2
| 871
| 2
|
https://stackoverflow.com/questions/4491627/how-do-i-tweak-css-on-the-django-1-2-3-admin-media
|
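On Django 1.2 (before staticfiles existed) the usual trick is to copy the bundled admin media somewhere you can write and point ADMIN_MEDIA_PREFIX at it. A sketch from that era (Python 2 syntax; the project path is assumed):

    ADMIN_MEDIA=$(python -c "import django, os; print os.path.join(os.path.dirname(django.__file__), 'contrib', 'admin', 'media')")
    cp -r "$ADMIN_MEDIA" ~/myproject/admin_media
    # settings.py:  ADMIN_MEDIA_PREFIX = '/admin_media/'
    # then serve ~/myproject/admin_media at that URL and edit its CSS freely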
3,746,074
|
What's the Cygwin/Red Hat equivalent to Debian's manpages-dev, manpages-posix-dev?
|
I'm using Cygwin, and just discovered to my dismay that the package naming scheme is derived from Red Hat. I need the development man pages, called manpages-dev and manpages-posix-dev on Debian-based distros, and can't locate the Cygwin/RH equivalents. What are they? If they're not available, what's the canonical documentation to use in their place for Cygwin?
|
What's the Cygwin/Red Hat equivalent to Debian's manpages-dev, manpages-posix-dev? I'm using Cygwin, and just discovered to my dismay that the package naming scheme is derived from Red Hat. I need the development man pages, called manpages-dev and manpages-posix-dev on Debian-based distros, and can't locate the Cygwin/RH equivalents. What are they? If they're not available, what's the canonical documentation to use in their place for Cygwin?
|
ubuntu, cygwin, posix, redhat, manpage
| 2
| 1,173
| 2
|
https://stackoverflow.com/questions/3746074/whats-the-cygwin-red-hat-equivalent-to-debians-manpages-dev-manpages-posix-de
|
512,024
|
Connecting to MS SQL Server from PHP on Linux
|
I need to connect to a MS SQL Server on Windows from PHP running on Red Hat Enterprise Linux 4. I have installed FreeTDS and I can connect to the database using the tsql command. My current PHP does not have the mssql functions/extension. My question is, how do I set up the mssql extension without rebuilding PHP? Is there a prebuilt package for this? I have tried googling for this but I have had no luck.
|
Connecting to MS SQL Server from PHP on Linux I need to connect to a MS SQL Server on Windows from PHP running on Red Hat Enterprise Linux 4. I have installed FreeTDS and I can connect to the database using the tsql command. My current PHP does not have the mssql functions/extension. My question is, how do I set up the mssql extension without rebuilding PHP? Is there a prebuilt package for this? I have tried googling for this but I have had no luck.
|
php, sql-server, redhat, freetds
| 2
| 2,593
| 2
|
https://stackoverflow.com/questions/512024/connecting-to-ms-sql-server-from-php-on-linux
|
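If EPEL is available for your release, the extension usually exists as a prebuilt package that links against FreeTDS, so no PHP rebuild is needed. A sketch:

    sudo yum install php-mssql     # from EPEL
    sudo service httpd restart     # or restart whichever SAPI runs your PHP
    php -m | grep -i mssql         # confirm the extension loaded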
78,642,067
|
How to avoid adding word before matched line when word already exists
|
I created the following sed in order to add the python command exit() before the line LIST_ALL_SELECT_TOOL_PACKAGES_CMD in the file yumrpm.py sed -i '0,/LIST_ALL_SELECT_TOOL_PACKAGES_CMD/s//exit()\n&/' yumrpm.py Example (after we run the above sed syntax) exit() LIST_ALL_SELECT_TOOL_PACKAGES_CMD = "yum list all" The problem is that when we run the sed again we get an additional exit() . Example: exit() exit() LIST_ALL_SELECT_TOOL_PACKAGES_CMD = "yum list all" What do I need to add in the sed command in order to avoid adding exit() when exit() already exists before the line LIST_ALL_SELECT_TOOL_PACKAGES_CMD ? NOTE - the solution should be inside sed , we don't want to add grep before sed in order to verify if exit() exists
|
How to avoid adding word before matched line when word already exists I created the following sed in order to add the python command exit() before the line LIST_ALL_SELECT_TOOL_PACKAGES_CMD in the file yumrpm.py sed -i '0,/LIST_ALL_SELECT_TOOL_PACKAGES_CMD/s//exit()\n&/' yumrpm.py Example (after we run the above sed syntax) exit() LIST_ALL_SELECT_TOOL_PACKAGES_CMD = "yum list all" The problem is that when we run the sed again we get an additional exit() . Example: exit() exit() LIST_ALL_SELECT_TOOL_PACKAGES_CMD = "yum list all" What do I need to add in the sed command in order to avoid adding exit() when exit() already exists before the line LIST_ALL_SELECT_TOOL_PACKAGES_CMD ? NOTE - the solution should be inside sed , we don't want to add grep before sed in order to verify if exit() exists
|
sed, redhat
| 2
| 58
| 2
|
https://stackoverflow.com/questions/78642067/how-to-avoid-adding-word-before-matched-line-when-word-already-exists
|
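One GNU-sed-only sketch: when a line is exactly exit(), append the next line; if that next line is the target, print the pair untouched and start a new cycle, so the one-shot substitution never sees the target. This is lightly reasoned rather than battle-tested, so try it on a copy of yumrpm.py first:

    sed -i '/^exit()$/{N;/\nLIST_ALL_SELECT_TOOL_PACKAGES_CMD/{p;d}}; 0,/^LIST_ALL_SELECT_TOOL_PACKAGES_CMD/s//exit()\n&/' yumrpm.py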
73,597,789
|
How to install ffmpeg on UBI docker images?
|
I'm looking for a simple way to install ffmpeg in a UBI8 (ubi-minimal) docker image. I tried running the following in the Dockerfile: RUN microdnf upgrade RUN microdnf install ffmpeg And I'm getting: ------ > [7/8] RUN microdnf install ffmpeg: #11 0.375 #11 0.375 (microdnf:1): librhsm-WARNING **: 07:58:19.229: Found 0 entitlement certificates #11 0.375 #11 0.375 (microdnf:1): librhsm-WARNING **: 07:58:19.230: Found 0 entitlement certificates #11 0.519 error: No package matches 'ffmpeg' ------ executor failed running [/bin/sh -c microdnf install ffmpeg]: exit code: 1 How can ffmpeg be easily installed on UBI 8? Note: I tried referring to numerous references on the web that explain how that may be done, such as this one and this as well, but UBI seems to be working differently.
|
How to install ffmpeg on UBI docker images? I'm looking for a simple way to install ffmpeg in a UBI8 (ubi-minimal) docker image. I tried running the following in the Dockerfile: RUN microdnf upgrade RUN microdnf install ffmpeg And I'm getting: ------ > [7/8] RUN microdnf install ffmpeg: #11 0.375 #11 0.375 (microdnf:1): librhsm-WARNING **: 07:58:19.229: Found 0 entitlement certificates #11 0.375 #11 0.375 (microdnf:1): librhsm-WARNING **: 07:58:19.230: Found 0 entitlement certificates #11 0.519 error: No package matches 'ffmpeg' ------ executor failed running [/bin/sh -c microdnf install ffmpeg]: exit code: 1 How can ffmpeg be easily installed on UBI 8? Note: I tried referring to numerous references on the web that explain how that may be done, such as this one and this as well, but UBI seems to be working differently.
|
docker, ffmpeg, redhat, dnf, ubi
| 2
| 2,039
| 2
|
https://stackoverflow.com/questions/73597789/how-to-install-ffmpeg-on-ubi-docker-images
|
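ffmpeg is not shipped in the RHEL/UBI repositories at all, which is why microdnf cannot match the package. A common workaround in UBI images is a static build; the Dockerfile sketch below uses the well-known johnvansickle.com builds, whose URL and archive layout are assumptions to verify (and worth pinning with a checksum in real use):

    RUN microdnf install -y tar xz curl \
     && curl -fL https://johnvansickle.com/ffmpeg/releases/ffmpeg-release-amd64-static.tar.xz -o /tmp/ffmpeg.tar.xz \
     && tar -xJf /tmp/ffmpeg.tar.xz -C /tmp \
     && mv /tmp/ffmpeg-*-static/ffmpeg /tmp/ffmpeg-*-static/ffprobe /usr/local/bin/ \
     && rm -rf /tmp/ffmpeg*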
73,277,759
|
Unable to install Apache Cassandra 4.0.5 on Red Hat 7 / CentOS 7
|
I'm trying to install Apache Cassandra on Red Hat 7 via yum as described here [URL] . The installation process was successful with version 4.0.3. However, with the latest version 4.0.5 the following error message is returned during the installation process Error: Invalid version flag: or . The or operator was added to the Apache Cassandra configuration with [URL] . From my understanding the or operator was introduced with the RPM version 4.13 but Red Hat 7 ships with 4.11.3. Is there any other solution than upgrading to a new Red Hat version?
|
Unable to install Apache Cassandra 4.0.5 on Red Hat 7 / CentOS 7 I'm trying to install Apache Cassandra on Red Hat 7 via yum as described here [URL] . The installation process was successful with version 4.0.3. However, with the latest version 4.0.5 the following error message is returned during the installation process Error: Invalid version flag: or . The or operator was added to the Apache Cassandra configuration with [URL] . From my understanding the or operator was introduced with the RPM version 4.13 but Red Hat 7 ships with 4.11.3. Is there any other solution than upgrading to a new Red Hat version?
|
cassandra, centos7, redhat
| 2
| 4,189
| 2
|
https://stackoverflow.com/questions/73277759/unable-to-install-apache-cassandra-4-0-5-on-red-hat-7-centos-7
|
71,498,169
|
Unable to get array element from JSON file using Ansible 2.10 version on RedHat
|
Below is my JSON file [ { "?xml": { "attributes": { "encoding": "UTF-8", "version": "1.0" } } }, { "domain": [ { "name": "mydom" }, { "domain-version": "12.2.1.3.0" }, { "server": [ { "name": "AdminServer" }, { "ssl": { "name": "AdminServer" } }, { "listen-port": "12400" }, { "listen-address": "mydom.host1.bank.com" } ] }, { "server": [ { "name": "myserv1" }, { "ssl": [ { "name": "myserv1" }, { "login-timeout-millis": "25000" } ] }, { "listen-port": "22421" } ] }, { "server": [ { "name": "myserv2" }, { "ssl": { "name": "myserv2" } }, { "reverse-dns-allowed": "false" }, { "log": [ { "name": "myserv2" }, { "file-name": "/web/bea_logs/domains/mydom/myserv2/myserv2.log" } ] }, { "listen-port": "12401" } ] } ] } ] I wish to get the listen-port printed while keeping in mind that the position of the listen-port element may change in the array. I was able to get the listen port on the latest ansible version 2.12.2 using the below play - name: display Listen Port debug: msg: "{{ myserver.0.name }} -> {{ cpath[0]['listen-port'] }}" loop: "{{ jsondata[1].domain }}" vars: myserver: "{{ item.server | selectattr('name', 'defined') | list }}" cpath: "{{ item.server | selectattr('listen-port', 'defined') | list }}" when: item.server is defined and (item.server | selectattr('listen-port', 'defined') | list ) != [] However, this play does not work on Red Hat OS, where Ansible 2.10 is the latest version offered. Below is the error I receive: TASK [create YML for server name with Listen port] ************************************************************ Wednesday 16 March 2022 08:41:06 -0500 (0:00:00.171) 0:00:05.917 ******* skipping: [localhost] => (item={'name': 'mydom'}) skipping: [localhost] => (item={'domain-version': '12.2.1.3.0'}) failed: [localhost] (item={'server': [{'name': 'AdminServer'}, {'ssl': {'name': 'AdminServer'}}, {'listen-port': '12400'}, {'listen-address': 'mydom.host1.bank.com'}]}) => {"ansible_loop_var": "item", "changed": true, "cmd": "echo AdminServer_httpport: 12400>>/web/aes/admin/playbooks/Migrator/wlsdatadump.yml", "delta": "0:00:00.006786", "end": "2022-03-16 08:41:07.175709", "item": {"server": [{"name": "AdminServer"}, {"ssl": {"name": "AdminServer"}}, {"listen-port": "12400"}, {"listen-address": "mydom.host1.bank.com"}]}, "msg": "non-zero return code", "rc": 1, "start": "2022-03-16 08:41:07.168923", "stderr": "/bin/sh: 12400: Bad file descriptor", "stderr_lines": ["/bin/sh: 12400: Bad file descriptor"], "stdout": "", "stdout_lines": []} skipping: [localhost] => (item={'server': [{'name': 'myserv1'}, {'ssl': [{'name': 'myserv1'}, {'login-timeout-millis': '25000'}]}, {'log': [{'name': 'myserv1'}, {'file-name': '/web/bea_logs/domains/mydom/myserv1/myserv1.log'}]}]}) skipping: [localhost] => (item={'server': [{'name': 'myserv2'}, {'ssl': {'name': 'myserv2'}}, {'reverse-dns-allowed': 'false'}, {'log': [{'name': 'myserv2'}, {'file-name': '/web/bea_logs/domains/mydom/myserv2/myserv2.log'}]}]}) Can you please suggest any other solution?
|
Unable to get array element from JSON file using Ansible 2.10 version on RedHat Below is my JSON file [ { "?xml": { "attributes": { "encoding": "UTF-8", "version": "1.0" } } }, { "domain": [ { "name": "mydom" }, { "domain-version": "12.2.1.3.0" }, { "server": [ { "name": "AdminServer" }, { "ssl": { "name": "AdminServer" } }, { "listen-port": "12400" }, { "listen-address": "mydom.host1.bank.com" } ] }, { "server": [ { "name": "myserv1" }, { "ssl": [ { "name": "myserv1" }, { "login-timeout-millis": "25000" } ] }, { "listen-port": "22421" } ] }, { "server": [ { "name": "myserv2" }, { "ssl": { "name": "myserv2" } }, { "reverse-dns-allowed": "false" }, { "log": [ { "name": "myserv2" }, { "file-name": "/web/bea_logs/domains/mydom/myserv2/myserv2.log" } ] }, { "listen-port": "12401" } ] } ] } ] I wish to get the listen-port printed while keeping in mind that the position of the listen-port element may change in the array. I was able to get the listen port on the latest ansible version 2.12.2 using the below play - name: display Listen Port debug: msg: "{{ myserver.0.name }} -> {{ cpath[0]['listen-port'] }}" loop: "{{ jsondata[1].domain }}" vars: myserver: "{{ item.server | selectattr('name', 'defined') | list }}" cpath: "{{ item.server | selectattr('listen-port', 'defined') | list }}" when: item.server is defined and (item.server | selectattr('listen-port', 'defined') | list ) != [] However, this play does not work on Red Hat OS, where Ansible 2.10 is the latest version offered. Below is the error I receive: TASK [create YML for server name with Listen port] ************************************************************ Wednesday 16 March 2022 08:41:06 -0500 (0:00:00.171) 0:00:05.917 ******* skipping: [localhost] => (item={'name': 'mydom'}) skipping: [localhost] => (item={'domain-version': '12.2.1.3.0'}) failed: [localhost] (item={'server': [{'name': 'AdminServer'}, {'ssl': {'name': 'AdminServer'}}, {'listen-port': '12400'}, {'listen-address': 'mydom.host1.bank.com'}]}) => {"ansible_loop_var": "item", "changed": true, "cmd": "echo AdminServer_httpport: 12400>>/web/aes/admin/playbooks/Migrator/wlsdatadump.yml", "delta": "0:00:00.006786", "end": "2022-03-16 08:41:07.175709", "item": {"server": [{"name": "AdminServer"}, {"ssl": {"name": "AdminServer"}}, {"listen-port": "12400"}, {"listen-address": "mydom.host1.bank.com"}]}, "msg": "non-zero return code", "rc": 1, "start": "2022-03-16 08:41:07.168923", "stderr": "/bin/sh: 12400: Bad file descriptor", "stderr_lines": ["/bin/sh: 12400: Bad file descriptor"], "stdout": "", "stdout_lines": []} skipping: [localhost] => (item={'server': [{'name': 'myserv1'}, {'ssl': [{'name': 'myserv1'}, {'login-timeout-millis': '25000'}]}, {'log': [{'name': 'myserv1'}, {'file-name': '/web/bea_logs/domains/mydom/myserv1/myserv1.log'}]}]}) skipping: [localhost] => (item={'server': [{'name': 'myserv2'}, {'ssl': {'name': 'myserv2'}}, {'reverse-dns-allowed': 'false'}, {'log': [{'name': 'myserv2'}, {'file-name': '/web/bea_logs/domains/mydom/myserv2/myserv2.log'}]}]}) Can you please suggest any other solution?
|
arrays, json, ansible, runtime-error, redhat
| 2
| 90
| 1
|
https://stackoverflow.com/questions/71498169/unable-to-get-array-element-from-json-file-using-ansible-2-10-version-on-redhat
|
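The failure is not in the JSON handling at all: the failing task is a shell echo in which /bin/sh parses 12400>>file as "redirect file descriptor 12400", hence Bad file descriptor. Quoting the echoed string so the number is not adjacent to >> should fix it on 2.10 too; a sketch keeping the path from the error output:

    - name: create YML for server name with Listen port
      shell: >-
        echo "{{ myserver.0.name }}_httpport: {{ cpath[0]['listen-port'] }}"
        >> /web/aes/admin/playbooks/Migrator/wlsdatadump.yml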
70,561,386
|
How to include special characters in the 'when' module?
|
The variable error_code below contains this string: "failed": true How can I use this string as the trigger for the 'when' module? I am not sure how to escape these special characters so the playbook interprets them correctly. Here's what I have tried but it is not working: - name: copying index copy: src: /tmp/index.html dest: /var/www/html/ notify: reloadone register: error_code - name: verify content fail: msg: There has been an error with the index file when: " \"failed\"\: true in error_code" handlers: - name: reloadone systemd: state: restarted name: httpd
|
How to include special characters in the 'when' module? The variable error_code below contains this string: "failed": true How can I use this string as the trigger for the 'when' module? I am not sure how to escape these special characters so the playbook interprets them correctly. Here's what I have tried but it is not working: - name: copying index copy: src: /tmp/index.html dest: /var/www/html/ notify: reloadone register: error_code - name: verify content fail: msg: There has been an error with the index file when: " \"failed\"\: true in error_code" handlers: - name: reloadone systemd: state: restarted name: httpd
|
linux, ansible, redhat
| 2
| 606
| 1
|
https://stackoverflow.com/questions/70561386/how-to-include-special-characters-in-the-when-module
|
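The registered variable is a dict, not a string, so there is no need to match the literal "failed": true text; Ansible's failed test (or the failed key) expresses it directly. A sketch, noting the copy task also needs ignore_errors so a failure actually reaches the check:

    - name: copying index
      copy:
        src: /tmp/index.html
        dest: /var/www/html/
      notify: reloadone
      register: error_code
      ignore_errors: true

    - name: verify content
      fail:
        msg: There has been an error with the index file
      when: error_code is failed    # or: error_code.failed | default(false)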
69,312,875
|
I cannot install Amazon Inspector
|
When I executed the "Run Command" with the "AmazonInspector-ManageAWSAgent " Document, the output gives me this error: Failed to find an inspector agent package for this OS:ol-5.4.. The OS version of the server is Oracle Linux Server (based on Red Hat Linux 7.9) How can I upgrade the "OS:ol-5.4 to 7"?
|
I cannot install Amazon Inspector When I executed the "Run Command" with the "AmazonInspector-ManageAWSAgent " Document, the output gives me this error: Failed to find an inspector agent package for this OS:ol-5.4.. The OS version of the server is Oracle Linux Server (based on Red Hat Linux 7.9) How can I upgrade the "OS:ol-5.4 to 7"?
|
amazon-web-services, redhat, oraclelinux, amazon-inspector
| 2
| 1,124
| 1
|
https://stackoverflow.com/questions/69312875/i-cannot-install-amazon-inspector
|
68,468,884
|
How to zip multiple folders separately in linux
|
The below-mentioned folders contain some data. I need to zip all the folders separately. ItembankUpdate-20210602-NGSS-1 ItembankUpdate-20210602-NGSS-4 ItembankUpdate-20210602-NGSS-7 ItembankUpdate-20210602-NGSS-3 ItembankUpdate-20210602-NGSS-5 ItembankUpdate-20210602-NGSS-8 ItembankUpdate-20210602-NGSS-2 ItembankUpdate-20210602-NGSS-6 With this command, I can zip only one folder zip -r ItembankUpdate-20210602-NGSS-3.zip ItembankUpdate-20210602-NGSS-3 How can I zip all the folders separately at once?
|
How to zip multiple folders separately in linux The below-mentioned folders contain some data. I need to zip all the folders separately. ItembankUpdate-20210602-NGSS-1 ItembankUpdate-20210602-NGSS-4 ItembankUpdate-20210602-NGSS-7 ItembankUpdate-20210602-NGSS-3 ItembankUpdate-20210602-NGSS-5 ItembankUpdate-20210602-NGSS-8 ItembankUpdate-20210602-NGSS-2 ItembankUpdate-20210602-NGSS-6 With this command, I can zip only one folder zip -r ItembankUpdate-20210602-NGSS-3.zip ItembankUpdate-20210602-NGSS-3 How can I zip all the folders separately at once?
|
linux, shell, zip, gzip, redhat
| 2
| 1,937
| 1
|
https://stackoverflow.com/questions/68468884/how-to-zip-multiple-folders-separately-in-linux
|
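A shell loop over the directories handles all of them in one pass; a sketch assuming the folders sit in the current directory:

    for d in ItembankUpdate-20210602-NGSS-*/ ; do
        zip -r "${d%/}.zip" "$d"
    done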
65,678,380
|
How to upgrade R on CentOS
|
I am running version 3.6 of R on CentOS (CentOS Linux release 7.9.2009 (Core)). Is there a simple way to upgrade R to the latest version and upgrade all the installed libraries? EPEL repository on my machine is epel-release-7-13.noarch and by default, it installs version 3.6 of R. I was referring to the following post -[URL] but the package installr is only available for Windows. There is another post ( How to upgrade R in ubuntu? ) that describes how to upgrade R on Ubuntu, however some of these commands do not work on CentOS. I am sure there must be a painless way to upgrade R in CentOS.
|
How to upgrade R on CentOS I am running version 3.6 of R on CentOS (CentOS Linux release 7.9.2009 (Core)). Is there a simple way to upgrade R to the latest version and upgrade all the installed libraries? EPEL repository on my machine is epel-release-7-13.noarch and by default, it installs version 3.6 of R. I was referring to the following post -[URL] but the package installr is only available for Windows. There is another post ( How to upgrade R in ubuntu? ) that describes how to upgrade R on Ubuntu, however some of these commands do not work on CentOS. I am sure there must be a painless way to upgrade R in CentOS.
|
r, linux, centos, redhat
| 2
| 4,165
| 1
|
https://stackoverflow.com/questions/65678380/how-to-upgrade-r-on-centos
|
64,790,738
|
JBoss EAP 7.0 java.lang.IllegalStateException: Unknown tag! pos=3 poolCount = 20 WARN
|
I have a SpringBoot 2.2.6 Web Application and I want to run it under JBoss EAP 7 . I manage to start the server but from the log I can see many warnings about several classes. These warnings are all similar to the following one: WARN [org.jboss.as.server.deployment] (MSC service thread 1-1) WFLYSRV0003: Could not index class module-info.class at /C:/server/jboss-eap-7.0/bin/content/TEST-EAR.ear/WEB-TEST.war/WEB-INF/lib/lombok-1.18.12.jar: java.lang.IllegalStateException: Unknown tag! pos=3 poolCount = 44 The classes involved are: classmate-1.5.1.jar jackson-annotations-2.10.3.jar jackson-core-2.10.3.jar jackson-databind-2.10.3.jar jackson-datatype-jdk8-2.10.3.jar jackson-datatype-jsr310-2.10.3.jar jackson-module-parameter-names-2.10.3.jar lombok-1.18.12.jar Except for lombok, the other libraries come with the spring-boot-starter-web dependency. Googling around I read that the problem is the library versions.. but I hope there's another way to solve this WARN (it is not a proper issue because the server starts) because excluding all these libraries from the spring artifact and then re-importing another version of them seems like overkill to me.. Thank you
|
JBoss EAP 7.0 java.lang.IllegalStateException: Unknown tag! pos=3 poolCount = 20 WARN I have a SpringBoot 2.2.6 Web Application and I want to run it under JBoss EAP 7 . I manage to start the server but from the log I can see many warnings about several classes. These warnings are all similar to the following one: WARN [org.jboss.as.server.deployment] (MSC service thread 1-1) WFLYSRV0003: Could not index class module-info.class at /C:/server/jboss-eap-7.0/bin/content/TEST-EAR.ear/WEB-TEST.war/WEB-INF/lib/lombok-1.18.12.jar: java.lang.IllegalStateException: Unknown tag! pos=3 poolCount = 44 The classes involved are: classmate-1.5.1.jar jackson-annotations-2.10.3.jar jackson-core-2.10.3.jar jackson-databind-2.10.3.jar jackson-datatype-jdk8-2.10.3.jar jackson-datatype-jsr310-2.10.3.jar jackson-module-parameter-names-2.10.3.jar lombok-1.18.12.jar Except for lombok, the other libraries come with the spring-boot-starter-web dependency. Googling around I read that the problem is the library versions.. but I hope there's another way to solve this WARN (it is not a proper issue because the server starts) because excluding all these libraries from the spring artifact and then re-importing another version of them seems like overkill to me.. Thank you
|
spring-boot, jboss, redhat
| 2
| 11,312
| 1
|
https://stackoverflow.com/questions/64790738/jboss-eap-7-0-java-lang-illegalstateexception-unknown-tag-pos-3-poolcount-20
|
61,258,996
|
Keycloak: How to import service accounts with client roles
|
I've been trying to import pre-configured clients and service accounts with roles, so my json file looks something like [ "realm": "dev", "users": [ { "username": "service-account-example-client", "enabled": true, "serviceAccountClientId": "example-client", "clientRoles": { "realm-management": ["view-users"], "example-client": ["view-users"] } } ] ] I also tried to set clients in the realm configuration, which gets imported, but in both cases I have the following issue: Service accounts created, Client has role in roles list, But the client role in "Service account roles" is not set. How do I import the service accounts with assigned client roles during the setup process, when the REST API is not available yet? Also, using import/export from the UI strips out some configurations. Keycloak version is: 8.0.0 Thanks.
|
Keycloak: How to import service accounts with client roles I've been trying to import pre-configured clients and service accounts with roles, so my json file looks something like [ "realm": "dev", "users": [ { "username": "service-account-example-client", "enabled": true, "serviceAccountClientId": "example-client", "clientRoles": { "realm-management": ["view-users"], "example-client": ["view-users"] } } ] ] I also tried to set clients in the realm configuration, which gets imported, but in both cases I have the following issue: Service accounts created, Client has role in roles list, But the client role in "Service account roles" is not set. How do I import the service accounts with assigned client roles during the setup process, when the REST API is not available yet? Also, using import/export from the UI strips out some configurations. Keycloak version is: 8.0.0 Thanks.
|
jboss, redhat, keycloak, service-accounts
| 2
| 6,161
| 2
|
https://stackoverflow.com/questions/61258996/keycloak-how-to-import-service-accounts-with-client-roles
|
56,526,883
|
Searching string using grep and ignoring contents in between
|
I am using the below command to search for the string ."66688." in all the files inside newfolder. This is working fine. grep --exclude=\*.{atr,out} -rnw '/tmp/newfolder' -e '."66688"' However, the number 66688 between the ." quotes is not constant, nor is its length. Hence I want to modify this command to grep for ."WHATEVER_IN_BETWEEN_DOESNT_MATTER" grep --exclude=\*.{atr,out} -rnw '/tmp/newfolder' -e '."66688"'
|
Searching string using grep and ignoring contents in between I am using the below command to search for the string ."66688." in all the files inside newfolder. This is working fine. grep --exclude=\*.{atr,out} -rnw '/tmp/newfolder' -e '."66688"' However, the number 66688 between the ." quotes is not constant, nor is its length. Hence I want to modify this command to grep for ."WHATEVER_IN_BETWEEN_DOESNT_MATTER" grep --exclude=\*.{atr,out} -rnw '/tmp/newfolder' -e '."66688"'
|
linux, shell, redhat
| 2
| 175
| 1
|
https://stackoverflow.com/questions/56526883/searching-string-using-grep-and-ignoring-contents-in-between
|
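Swapping the literal digits for a character class matches any run of digits of any length; with extended regexes the sketch is:

    grep -E --exclude=\*.{atr,out} -rnw '/tmp/newfolder' -e '\."[0-9]+"'

Here \. makes the leading dot literal; keep it as plain . if the original any-character behaviour was intended.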
55,512,727
|
Can I redirect puppet agent output to a different log file?
|
I have a RHEL 6.10 node on which I installed the Puppet agent (version 5.3.5). The output of the Puppet run is currently logged in /var/log/messages. However I want to redirect this logging to a different file (ex. /var/log/puppet/puppet.log) to make things more clear. I already looked in /etc/sysconfig/puppet but the only things listed in there is this: # You may specify parameters to the puppet client here #PUPPET_EXTRA_OPTS=--waitforcert=500 I already tried adding this to the config: # Where to log to. Specify syslog to send log messages to the system log. PUPPET_LOG=/var/log/puppet/puppet.log And then restarted the Puppet service but this doesn't seem to work. Can anyone tell me how to do this and if this is even possible on RH 6.10?
|
Can I redirect puppet agent output to a different log file? I have a RHEL 6.10 node on which I installed the Puppet agent (version 5.3.5). The output of the Puppet run is currently logged in /var/log/messages. However I want to redirect this logging to a different file (ex. /var/log/puppet/puppet.log) to make things more clear. I already looked in /etc/sysconfig/puppet but the only things listed in there is this: # You may specify parameters to the puppet client here #PUPPET_EXTRA_OPTS=--waitforcert=500 I already tried adding this to the config: # Where to log to. Specify syslog to send log messages to the system log. PUPPET_LOG=/var/log/puppet/puppet.log And then restarted the Puppet service but this doesn't seem to work. Can anyone tell me how to do this and if this is even possible on RH 6.10?
|
logging, puppet, redhat
| 2
| 898
| 1
|
https://stackoverflow.com/questions/55512727/can-i-redirect-puppet-agent-output-to-a-different-log-file
|
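Since the agent logs through syslog, one approach that needs no Puppet change is an rsyslog filter on the program name. A sketch; the program name ("puppet-agent" for Puppet 5 AIO installs) and the discard syntax (& ~ on RHEL 6's rsyslog 5, & stop on newer versions) are both worth verifying:

    # /etc/rsyslog.d/puppet.conf
    :programname, isequal, "puppet-agent"    /var/log/puppet/puppet.log
    & ~

    # then: service rsyslog restart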
53,230,005
|
Finding AMI ids for Redhat images on AWS
|
Does there exist any way of finding AMI IDs for a specific version of Fedora? For example, 6.3, 6.4 and so on?
|
Finding AMI ids for Redhat images on AWS Does there exist any way of finding AMI IDs for a specific version of Fedora? For example, 6.3, 6.4 and so on?
|
amazon-web-services, cloud, redhat, amazon-ami
| 2
| 1,112
| 1
|
https://stackoverflow.com/questions/53230005/finding-ami-ids-for-redhat-images-on-aws
|
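The AWS CLI can list images filtered by name and owner. A sketch for official RHEL images; 309956199498 is the owner ID commonly cited for Red Hat (verify it for your account/region), and note that Fedora AMIs are community-published under different owners:

    aws ec2 describe-images \
        --owners 309956199498 \
        --filters "Name=name,Values=RHEL-6.4*" \
        --query 'Images[].[ImageId,Name,CreationDate]' \
        --output text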
52,603,713
|
loop through a directory and check if it exists in another directory
|
I have a folder with 20000 files in directory A and another folder with 15000 files in another directory B. I can loop through a directory using: DIR='/home/oracle/test/forms1/' for FILE in "$DIR"*.mp do filedate=$( ls -l --time-style=+"date %d-%m-%Y_%H-%M" *.fmx |awk '{print $8 $7}') echo "file New Name $FILE$filedate " # echo "file New Name $FILE is copied " done I need to loop through all the files in directory A and check if they exist in directory B. I tried the following but it doesn't seem to work: testdir='/home/oracle/ideatest/test/' livedir='/home/oracle/ideatest/live/' for FILET in "$testdir" # do testfile=$(ls $FILET) echo $testfile for FILEL in "$livedir" do livefile=$(ls $FILEL) if [ "$testfile" = "$livefile" ] then echo "$testfile" echo "yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy" else echo "nnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnn" fi done done I'm trying to fix the result of years of bad version control. We have one very old script that sends a form to the live environment, but every time it's compiled and sent the live version is named like (testform.fmx), while in the test dir there are about 10 files named like (testform.fmx01-12-2018) (testform.fmx12-12-2017) (testform.fmx04-05-2016). As a result we lost track of the last source sent to the live environment. That's why I created this filedate=$( ls -l --time-style=+"date %d-%m-%Y_%H-%M" *.fmx |awk '{print $8 $7}') echo "file New Name $FILE$filedate " to match the format and loop through each dir; using ls I can find the last version by matching the size and the year and month.
|
loop through a directory and check if it exists in another directory I have a folder with 20000 files in directory A and another folder with 15000 files in another directory B. I can loop through a directory using: DIR='/home/oracle/test/forms1/' for FILE in "$DIR"*.mp do filedate=$( ls -l --time-style=+"date %d-%m-%Y_%H-%M" *.fmx |awk '{print $8 $7}') echo "file New Name $FILE$filedate " # echo "file New Name $FILE is copied " done I need to loop through all the files in directory A and check if they exist in directory B. I tried the following but it doesn't seem to work: testdir='/home/oracle/ideatest/test/' livedir='/home/oracle/ideatest/live/' for FILET in "$testdir" # do testfile=$(ls $FILET) echo $testfile for FILEL in "$livedir" do livefile=$(ls $FILEL) if [ "$testfile" = "$livefile" ] then echo "$testfile" echo "yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy" else echo "nnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnn" fi done done I'm trying to fix the result of years of bad version control. We have one very old script that sends a form to the live environment, but every time it's compiled and sent the live version is named like (testform.fmx), while in the test dir there are about 10 files named like (testform.fmx01-12-2018) (testform.fmx12-12-2017) (testform.fmx04-05-2016). As a result we lost track of the last source sent to the live environment. That's why I created this filedate=$( ls -l --time-style=+"date %d-%m-%Y_%H-%M" *.fmx |awk '{print $8 $7}') echo "file New Name $FILE$filedate " to match the format and loop through each dir; using ls I can find the last version by matching the size and the year and month.
|
linux, bash, loops, if-statement, redhat
| 2
| 1,941
| 3
|
https://stackoverflow.com/questions/52603713/loop-through-a-directory-and-check-if-it-exist-in-another-directory
|
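For the existence check itself, iterating over directory A and testing the same basename in directory B avoids parsing ls output entirely. A minimal sketch with the question's paths:

    testdir=/home/oracle/ideatest/test
    livedir=/home/oracle/ideatest/live
    for f in "$testdir"/*; do
        name=${f##*/}                        # basename of the test file
        if [ -e "$livedir/$name" ]; then
            echo "$name exists in live"
        else
            echo "$name missing from live"
        fi
    done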
51,929,484
|
yum install failed i686 x86_64
|
My version is Oracle Linux 6.10. It is trying to install both x86_64 and i686 packages. I am trying to install the glibc package manually as it fails during my puppet run with the exact same error as below: yum install glibc-2.12-1.192.el6.i686 Loaded plugins: pulp-profile-update, security, ulninfo Setting up Install Process Resolving Dependencies --> Running transaction check ---> Package glibc.i686 0:2.12-1.192.el6 will be installed --> Processing Dependency: glibc-common = 2.12-1.192.el6 for package: glibc-2.12-1.192.el6.i686 --> Processing Dependency: libfreebl3.so for package: glibc-2.12-1.192.el6.i686 --> Processing Dependency: libfreebl3.so(NSSRAWHASH_3.12.3) for package: glibc-2.12-1.192.el6.i686 --> Running transaction check ---> Package glibc.i686 0:2.12-1.192.el6 will be installed --> Processing Dependency: glibc-common = 2.12-1.192.el6 for package: glibc-2.12-1.192.el6.i686 ---> Package nss-softokn-freebl.i686 0:3.14.3-23.3.el6_8 will be installed --> Finished Dependency Resolution Error: Package: glibc-2.12-1.192.el6.i686 (nap_latest) Requires: glibc-common = 2.12-1.192.el6 Installed: glibc-common-2.12-1.212.0.1.el6.x86_64 (@OL6Latest-x86_64/6.9) glibc-common = 2.12-1.212.0.1.el6 Available: glibc-common-2.12-1.80.el6.x86_64 (nap_ol_base) glibc-common = 2.12-1.80.el6 Available: glibc-common-2.12-1.107.el6_4.5.x86_64 (nap_latest) glibc-common = 2.12-1.107.el6_4.5 Available: glibc-common-2.12-1.132.el6.x86_64 (nap_latest) glibc-common = 2.12-1.132.el6 Available: glibc-common-2.12-1.132.el6_5.2.x86_64 (nap_latest) glibc-common = 2.12-1.132.el6_5.2 Available: glibc-common-2.12-1.132.el6_5.4.x86_64 (nap_latest) glibc-common = 2.12-1.132.el6_5.4 Available: glibc-common-2.12-1.149.el6.x86_64 (nap_latest) glibc-common = 2.12-1.149.el6 Available: glibc-common-2.12-1.149.el6_6.5.x86_64 (nap_latest) glibc-common = 2.12-1.149.el6_6.5 Available: glibc-common-2.12-1.149.el6_6.9.x86_64 (nap_latest) glibc-common = 2.12-1.149.el6_6.9 Available: glibc-common-2.12-1.166.el6_7.3.x86_64 (nap_latest) glibc-common = 2.12-1.166.el6_7.3 Available: glibc-common-2.12-1.166.el6_7.7.x86_64 (nap_latest) glibc-common = 2.12-1.166.el6_7.7 Available: glibc-common-2.12-1.192.el6.x86_64 (nap_latest) glibc-common = 2.12-1.192.el6 You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest Any ideas? Has anyone seen this previously?
|
yum install failed i686 x86_64 My version is Oracle Linux 6.10. It is trying to install both x86_64 and i686 packages. I am trying to install the glibc package manually as it fails during my puppet run with the exact same error as below: yum install glibc-2.12-1.192.el6.i686 Loaded plugins: pulp-profile-update, security, ulninfo Setting up Install Process Resolving Dependencies --> Running transaction check ---> Package glibc.i686 0:2.12-1.192.el6 will be installed --> Processing Dependency: glibc-common = 2.12-1.192.el6 for package: glibc-2.12-1.192.el6.i686 --> Processing Dependency: libfreebl3.so for package: glibc-2.12-1.192.el6.i686 --> Processing Dependency: libfreebl3.so(NSSRAWHASH_3.12.3) for package: glibc-2.12-1.192.el6.i686 --> Running transaction check ---> Package glibc.i686 0:2.12-1.192.el6 will be installed --> Processing Dependency: glibc-common = 2.12-1.192.el6 for package: glibc-2.12-1.192.el6.i686 ---> Package nss-softokn-freebl.i686 0:3.14.3-23.3.el6_8 will be installed --> Finished Dependency Resolution Error: Package: glibc-2.12-1.192.el6.i686 (nap_latest) Requires: glibc-common = 2.12-1.192.el6 Installed: glibc-common-2.12-1.212.0.1.el6.x86_64 (@OL6Latest-x86_64/6.9) glibc-common = 2.12-1.212.0.1.el6 Available: glibc-common-2.12-1.80.el6.x86_64 (nap_ol_base) glibc-common = 2.12-1.80.el6 Available: glibc-common-2.12-1.107.el6_4.5.x86_64 (nap_latest) glibc-common = 2.12-1.107.el6_4.5 Available: glibc-common-2.12-1.132.el6.x86_64 (nap_latest) glibc-common = 2.12-1.132.el6 Available: glibc-common-2.12-1.132.el6_5.2.x86_64 (nap_latest) glibc-common = 2.12-1.132.el6_5.2 Available: glibc-common-2.12-1.132.el6_5.4.x86_64 (nap_latest) glibc-common = 2.12-1.132.el6_5.4 Available: glibc-common-2.12-1.149.el6.x86_64 (nap_latest) glibc-common = 2.12-1.149.el6 Available: glibc-common-2.12-1.149.el6_6.5.x86_64 (nap_latest) glibc-common = 2.12-1.149.el6_6.5 Available: glibc-common-2.12-1.149.el6_6.9.x86_64 (nap_latest) glibc-common = 2.12-1.149.el6_6.9 Available: glibc-common-2.12-1.166.el6_7.3.x86_64 (nap_latest) glibc-common = 2.12-1.166.el6_7.3 Available: glibc-common-2.12-1.166.el6_7.7.x86_64 (nap_latest) glibc-common = 2.12-1.166.el6_7.7 Available: glibc-common-2.12-1.192.el6.x86_64 (nap_latest) glibc-common = 2.12-1.192.el6 You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest Any ideas? Has anyone seen this previously?
|
redhat, glibc, yum
| 2
| 5,861
| 2
|
https://stackoverflow.com/questions/51929484/yum-install-failed-i686-x86-64
|
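glibc.i686 must match the installed glibc-common.x86_64 exactly, and the transcript shows glibc-common-2.12-1.212.0.1.el6 on the system while 2.12-1.192.el6 was requested. Asking yum for the matching build should resolve it, assuming that i686 build exists in one of the enabled repos:

    yum install glibc-2.12-1.212.0.1.el6.i686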
51,388,219
|
Sed acting differently in vi than command line on the same server
|
I am looking for a filesystem name (or / in the first column); if there is a slash or that filesystem (doesn't matter which) I want to join the two lines by removing the CR. It works fine in vi. :%s:\n\/fs_name:\/fs_name:g using : as the delimiter for clarity I need to be able to replicate this in a script every time I run the job. So doing it in vi every time is not a solution. In other words I want: first line /fs second line to become first line /fs second line using bash 4.2.46 on redhat 7. sed 4.2.46(s) and vim 7.4
|
Sed acting differently in vi than command line on the same server I am looking for a filesystem name (or / in the first column); if there is a slash or that filesystem (doesn't matter which) I want to join the two lines by removing the CR. It works fine in vi. :%s:\n\/fs_name:\/fs_name:g using : as the delimiter for clarity I need to be able to replicate this in a script every time I run the job. So doing it in vi every time is not a solution. In other words I want: first line /fs second line to become first line /fs second line using bash 4.2.46 on redhat 7. sed 4.2.46(s) and vim 7.4
|
string, sed, replace, redhat
| 2
| 619
| 1
|
https://stackoverflow.com/questions/51388219/sed-acting-differently-in-vi-than-command-line-on-the-same-server
|
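sed processes one line at a time, so \n in a search pattern never matches the way it does against vi's whole buffer. Slurping the file into the pattern space first reproduces the :%s behaviour; a GNU sed sketch (fine for normal-sized files, since the whole file is held in memory):

    sed -i ':a;N;$!ba; s:\n/fs_name:/fs_name:g' /path/to/file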