Schema: each record below lists, in order: question_id (int64); title_clean (string); body_clean (string); tags (string); score (int64); view_count (int64); answer_count (int64); link (string).
7,556,794
Connecting Redhat to SQL Server 2008 for Ruby on Rails
I'm trying to connect Red Hat Linux to a Microsoft SQL Server 2008. I already had trouble setting it up on Windows (my test machine), but now I need to deploy it on the Linux machine where it will be in production. So I've installed unixODBC and FreeTDS (with a lot of effort, not even sure if it was installed correctly :S), and the outcome is that I have 3 files in /usr/local/etc: odbc.ini, odbcinst.ini and freetds.conf.

I then edited the freetds.conf file, and this is what I added:

    [sqlServer]
    host = servername
    port = 4113
    instance = sqlServer
    tds version = 8.0
    client charset = UTF-8

I had to find out the port number from my DBA, as it is set to dynamic in SQL Server 2008. My odbcinst.ini file looks like this:

    [FreeTDS]
    Description = TDS driver (Sybase/MS SQL)
    Driver = /usr/local/lib/libtdsodbc.so
    Setup = /usr/local/lib/libtdsS.so
    CPTimeout =
    CPReuse =
    FileUsage = 1

and my odbc.ini file looks like this:

    [sqlServer]
    Driver = FreeTDS
    Description = ODBC connection via FreeTDS
    Trace = 1
    Servername = sqlServer
    Database = RubyApp

So now I tried connecting to see if there is any connection by using tsql -S sqlServer -U test -P test, however that only gives me the following error:

    locale is "en_US.UTF-8"
    locale charset is "UTF-8"
    using default charset "UTF-8"
    Error 20013 (severity 2): Unknown host machine name.
    There was a problem connecting to the server

When I tried using isql, doing isql -v sqlServer test test, that spat out the following error:

    [S1000][unixODBC][FreeTDS][SQL Server]Unable to connect to data source
    [01000][unixODBC][FreeTDS][SQL Server]Unknown host machine name.
    [ISQL]ERROR: Could not SQLConnect

Any ideas what I could be doing wrong?
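A hedged check, not from the original post: FreeTDS reports "Unknown host machine name" when the host value in freetds.conf cannot be resolved, so it is worth testing name resolution and a direct host/port connection that bypasses the config file (servername is the placeholder from the config above):

    # Does the box resolve the name at all?
    ping -c 1 servername
    # Bypass freetds.conf entirely: connect by host and port directly
    tsql -H servername -p 4113 -U test -P test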
ruby-on-rails-3, sql-server-2008, redhat, freetds, unixodbc
5
4,848
2
https://stackoverflow.com/questions/7556794/connecting-redhat-to-sql-server-2008-for-ruby-on-rails
65,330,484
How to use Keycloak REST API for login using SMS OTP passwordless authentication?
I have modified an SMS OTP Authentication SPI from GitHub and successfully used it for Keycloak authentication. Then I made a custom flow for the browser so that:

- a username-only form gets the username (may be the mobile number)
- a code is sent to the user's mobile
- the code is collected and the user is authenticated

The above works great! Now I need the same in a REST API. The documents say we have to set the grant type, but the grant type is password in all the examples:

    curl -L -X POST '[URL]' \
    -H 'Content-Type: application/x-www-form-urlencoded' \
    --data-urlencode 'client_id=account' \
    --data-urlencode 'grant_type=password' \
    --data-urlencode 'client_secret=xxxxxxxxxxxxxxxxxxx' \
    --data-urlencode 'scope=openid' \
    --data-urlencode 'username=otp'

BTW: I have added direct grant bindings the same as the form bindings (which work great), with no luck. How can I use the REST API for a login flow the same as form authentication?
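For context, a sketch of how a custom direct-grant flow is typically driven (not from the original post): the standard token endpoint is still used with grant_type=password, and the custom authenticator reads extra form parameters. The parameter name sms_code below is hypothetical and must match whatever the modified SPI expects:

    curl -X POST 'https://<keycloak-host>/auth/realms/<realm>/protocol/openid-connect/token' \
      -H 'Content-Type: application/x-www-form-urlencoded' \
      --data-urlencode 'client_id=<client-id>' \
      --data-urlencode 'grant_type=password' \
      --data-urlencode 'username=<mobile-number>' \
      --data-urlencode 'sms_code=<code-from-sms>'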
java, keycloak, redhat, one-time-password
5
2,782
0
https://stackoverflow.com/questions/65330484/how-to-use-keycloak-rest-api-for-login-using-sms-otp-passwordless-authentication
51,925,698
Unable to login with local user in machine (RHEL7) after LDAP integration
I am new to Red Hat IdM. Below is my requirement; please help. Suppose that we have two RHEL7 machines:

- a Red Hat IdM server
- a Red Hat IdM client machine

We have created two users on the IdM client machine as follows. The first user was created with a simple Linux command:

    useradd ravendra

The second user was created using this IPA command:

    ipa user-add jsmith --first=John --last=Smith --manager=bjensen --email=johnls@example.com --homedir=/home/work/johns --password

Now we have this requirement:

- If the IdM server is running, we want to restrict ssh for the user "ravendra" created through the normal Linux command: only "jsmith" can ssh to the IdM client machine.
- If the IdM server is stopped, then both users can ssh to the IdM client machine.

Can you recommend a plug-in and/or config I can use to achieve this (see the sketch after this question)? Thanks in advance.
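One possible approach, offered only as a sketch and not a built-in IdM feature: a pam_exec account hook on the client that denies the local user only while the IdM server answers on the LDAP port. The hostname and script path are placeholders:

    # /etc/pam.d/sshd (added line; keep a root console open while testing)
    account    required    pam_exec.so /usr/local/sbin/deny-local-when-idm-up.sh

    # /usr/local/sbin/deny-local-when-idm-up.sh
    #!/bin/bash
    # Only gate the local account; everyone else passes through.
    [ "$PAM_USER" != "ravendra" ] && exit 0
    # If the IdM server answers on port 389, deny the local user.
    timeout 2 bash -c 'exec 3<>/dev/tcp/idm-server.example.com/389' 2>/dev/null && exit 1
    exit 0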
linux, ldap, redhat
5
552
1
https://stackoverflow.com/questions/51925698/unable-to-login-with-local-user-in-machinerhel7-after-ldap-integartion
51,269,510
timezone difference between environments
I have an app based on Spring, and it is hosted on two environments: S1 and S2, Tomcat 8 instances on Red Hat servers. The problem is that the time it perceives as "now", for logging and persistence purposes, varies:

- it reports the correct time on S1
- it is one hour off on S2

Environment information:

- Red Hat Enterprise Linux Server 6.9
- JVM version: 1.8
- -Duser.timezone=Europe/Berlin
- /etc/localtime is the same for both environments

I've written simple debug code, and the results differ between environments:

    System.out.println("[TimeZone]TimeZone ID: " + TimeZone.getDefault().getID());
    System.out.println("[TimeZone]TimeZone name: " + TimeZone.getDefault().getDisplayName());
    Date date = new Date();
    LocalDateTime localDate = LocalDateTime.now();
    DateFormat df = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
    DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
    System.out.println("[Date]Date and time: " + df.format(date));
    System.out.println("[LocalDateTime]Date and time: " + formatter.format(localDate));
    df.setTimeZone(TimeZone.getTimeZone("Europe/Berlin"));
    ZonedDateTime zdt = localDate.atZone(ZoneId.of("Europe/Berlin"));
    System.out.println("[Date]Date and time in Berlin: " + df.format(date));
    System.out.println("[ZonedDateTime]Date and time in Berlin: " + formatter.format(zdt));

Results on S1:

    [TimeZone]TimeZone ID: Europe/Berlin
    [TimeZone]TimeZone name: Central European Time
    [Date]Date and time: 2018-07-10 14:24:52
    [LocalDateTime]Date and time: 2018-07-10 14:24:52
    [Date]Date and time in Berlin: 2018-07-10 14:24:52
    [ZonedDateTime]Date and time in Berlin: 2018-07-10 14:24:52

Results on S2:

    [TimeZone]TimeZone ID: GMT+01:00
    [TimeZone]TimeZone name: GMT+01:00
    [Date]Date and time: 2018-07-10 13:29:18
    [LocalDateTime]Date and time: 2018-07-10 13:29:18
    [Date]Date and time in Berlin: 2018-07-10 14:29:18
    [ZonedDateTime]Date and time in Berlin: 2018-07-10 13:29:18

Some more diagnostic info:

    xxx@serv1:~ > ls -l /etc/localtime
    -rw-r--r--. 1 root root 2309 Apr  8  2016 /etc/localtime
    xxx@serv1:~ > zdump /etc/sysconfig/clock
    /etc/sysconfig/clock  Fri Jul 20 09:21:01 2018
    xxx@serv2:~ > ls -l /etc/localtime
    -rw-r--r--. 1 root root 2309 Feb 13  2014 /etc/localtime
    xxx@serv2:~ > zdump /etc/sysconfig/clock
    /etc/sysconfig/clock  Fri Jul 20 09:20:47 2018

As you might have noticed, the TimeZone is not loaded properly for the app on S2, despite setting the JVM property. So the question is: how can I correct the time for apps on S2? Or at least, how can I investigate this further?
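A few hedged checks that fit this symptom: on RHEL 6 the JDK falls back to a raw GMT offset like GMT+01:00 when it cannot map /etc/localtime to a named zone (it also consults /etc/sysconfig/clock), and it is worth confirming the -D flag actually reaches the running S2 JVM:

    # RHEL 6: the JDK reads ZONE= from here when /etc/localtime is ambiguous
    grep ZONE /etc/sysconfig/clock            # expect ZONE="Europe/Berlin"
    # Is /etc/localtime byte-identical to the named zone file?
    cmp /etc/localtime /usr/share/zoneinfo/Europe/Berlin && echo same
    # Is -Duser.timezone really on the running Tomcat's command line?
    ps -ef | grep [t]omcat | tr ' ' '\n' | grep -i timezone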
java, spring, datetime, redhat
5
320
1
https://stackoverflow.com/questions/51269510/timezone-difference-between-environments
34,330,895
Setting php ini settings in domain vhost.conf
For a certain domain I'm trying to specify php ini settings for include_path and open_basedir, but I can't get the settings to take effect. I'm using Red Hat Enterprise Linux Server 5.11 (Tikanga) and Plesk 11.0.9. I created the file /var/www/vhosts/[my domain]/conf/vhost.conf and added the following directives:

    <Directory /var/www/vhosts/[my domain]/web>
    <IfModule sapi_apache2.c>
    php_admin_flag engine on
    php_admin_flag safe_mode off
    php_admin_value open_basedir "/var/www/vhosts/"
    php_admin_value include_path "."
    </IfModule>
    <IfModule mod_php5.c>
    php_admin_flag engine on
    php_admin_flag safe_mode off
    php_admin_value open_basedir "/var/www/vhosts/"
    php_admin_value include_path "."
    </IfModule>
    Options -Includes -ExecCGI

Then I reloaded the configuration for the domain and issued a graceful restart:

    /usr/local/psa/admin/bin/httpdmng --reconfigure-domains [my domain]
    /usr/sbin/apachectl graceful

According to the phpinfo() issued from the document root, the settings have not changed from those in the normal php.ini. Any idea where I'm going wrong?
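A hedged first check: php_admin_value/php_admin_flag only take effect when PHP runs as an Apache module; if Plesk serves this domain via CGI/FastCGI, the directives are silently ignored. A quick SAPI probe (the path and domain are the placeholders from above):

    # Drop a probe into the docroot and see which SAPI answers
    echo '<?php echo php_sapi_name(); ?>' > /var/www/vhosts/[my domain]/web/sapi.php
    curl http://[my domain]/sapi.php   # 'apache2handler' = mod_php; 'cgi-fcgi' = directives ignored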
php, apache, redhat, plesk
5
683
1
https://stackoverflow.com/questions/34330895/setting-php-ini-settings-in-domain-vhost-conf
32,780,121
Attempting to run R with Atlas/OpenBLAS on redhat
For two days I've been trying to install OpenBLAS/ATLAS with LAPACK and use it in R. It's driving me crazy; I'm at a point where I can't even think anymore. My server uses Red Hat Enterprise Linux Server release 6.6 (Santiago). Here is what I've installed so far:

    [root@tpdb05 atlas]# yum install atlas.x86_64 blas.x86_64 lapack.x86_64
    Loaded plugins: product-id, refresh-packagekit, rhnplugin, security, subscription-manager
    Setting up Install Process
    Package atlas-3.8.4-2.el6.x86_64 already installed and latest version
    Package blas-3.2.1-4.el6.x86_64 already installed and latest version
    Package lapack-3.2.1-4.el6.x86_64 already installed and latest version

    [root@tpdb05 ruser]# yum install lapack.i686
    Installed: lapack.i686 0:3.2.1-4.el6
    Dependency Installed: blas.i686 0:3.2.1-4.el6  glibc.i686 0:2.12-1.166.el6_7.3  libgfortran.i686 0:4.4.7-16.el6  nss-softokn-freebl.i686 0:3.14.3-23.el6_7
    Dependency Updated: glibc.x86_64 0:2.12-1.166.el6_7.3  glibc-common.x86_64 0:2.12-1.166.el6_7.3  glibc-devel.x86_64 0:2.12-1.166.el6_7.3  glibc-headers.x86_64 0:2.12-1.166.el6_7.3  nss-softokn-freebl.x86_64 0:3.14.3-23.el6_7

    yum install atlas.i686
    Installed: atlas.i686 0:3.8.4-2.el6

    [root@tpdb05 SRPMS]# yum install rpm-build
    Installed: rpm-build.x86_64 0:4.8.0-47.el6
    Dependency Installed: redhat-rpm-config.noarch 0:9.0.3-44.el6
    Dependency Updated: rpm.x86_64 0:4.8.0-47.el6  rpm-libs.x86_64 0:4.8.0-47.el6  rpm-python.x86_64 0:4.8.0-47.el6

    [root@tpdb05 SRPMS]# yum install atlas-c++-devel.x86_64
    Installed: atlas-c++-devel.x86_64 0:0.6.1-1.el5.rf
    Dependency Installed: atlas-c++.x86_64 0:0.6.1-1.el5.rf

I've tried several sources without success. The R manual mentions the following:

    The usual way to specify ATLAS will be via --with-blas="-lf77blas -latlas"

However, I have no clue where to use this command. While installing R? I'm pretty sure it should be possible to simply swap between libraries. How do I get R to use the ATLAS/OpenBLAS/LAPACK libraries?
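For reference, --with-blas is an option to R's configure script, so it applies when building R from source rather than being a standalone command. A sketch, with the flags taken from the R Installation and Administration manual:

    # From the unpacked R source tree: build R against ATLAS
    ./configure --with-blas="-lf77blas -latlas" --with-lapack --enable-BLAS-shlib
    make && sudo make install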
r, installation, redhat, atlas, openblas
5
1,035
0
https://stackoverflow.com/questions/32780121/attempting-to-run-r-with-atlas-openblas-on-redhat
25,603,686
virtualenv error for another python version
I'm getting this error when I try to set up virtualenv for a version of Python other than my system default:

    -sh-4.1$ virtualenv -p /usr/local/bin/python2.7 test
    Running virtualenv with interpreter /usr/local/bin/python2.7
    Could not find platform dependent libraries <exec_prefix>
    Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>]
    Traceback (most recent call last):
      File "/usr/lib/python2.6/site-packages/virtualenv.py", line 8, in <module>
        import base64
      File "/usr/local/lib/python2.7/base64.py", line 9, in <module>
        import struct
      File "/usr/local/lib/python2.7/struct.py", line 1, in <module>
        from _struct import *
    ImportError: No module named _struct

The system is RedHat and the default system Python version is 2.6.6. Any help would be much appreciated.
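A hedged way to narrow this down: if the 2.7 interpreter cannot import its own C extension modules, the problem is the Python 2.7 build itself rather than virtualenv:

    # If this fails with the same ImportError, the 2.7 install is incomplete
    /usr/local/bin/python2.7 -c "import struct; print 'ok'"
    # A common cause is a partial source install; re-running the install from
    # the Python 2.7 source tree may help:  make && sudo make altinstall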
python, virtualenv, redhat
5
389
0
https://stackoverflow.com/questions/25603686/virtualenv-error-for-another-python-version
17,783,941
How to recursively download RPM dependencies?
I want to write a mini script that downloads all the recursive dependencies of an RPM package on Red Hat Linux. When I use:

    repoquery -a --requires --recursive --resolve PACKAGE_NAME

I'm not getting all the recursive dependencies, but when I use:

    repoquery -a --tree-requires PACKAGE_NAME

I'm getting all the dependencies, but not as a usable list that I can pipe into yumdownloader. What should I do?
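A sketch of one way to get a pipeable list, assuming the resolved output only needs flattening to package names (--qf is repoquery's query-format option):

    repoquery --requires --recursive --resolve --qf '%{name}' PACKAGE_NAME \
      | sort -u \
      | xargs -r yumdownloader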
linux, bash, redhat, rpm, yum
5
12,587
2
https://stackoverflow.com/questions/17783941/how-to-recursivly-download-rpm-dependencies
48,070,042
How to detect upgrade when an RPM that obsoletes another RPM is being installed
RPM scriptlets are passed $1 (the number of packages of this name which will be left on the system when the action completes), so they can determine whether a package upgrade or removal is occurring. For reasons outside my control, I believe the next version of the package may have a different package name than the first version. I tried to create a new package that "obsoletes" the old one and upgraded using it. However, the old package's postun scriptlet still got $1 == 0, and my postun cleanup script ran.

This is a bit of an edge case, because technically there are 0 packages with that name remaining, but I thought the obsoletes case might pretend that there's still a package with that name during the upgrade. Is there a way to test for the situation when a package is being obsoleted, so that the scriptlet can determine an upgrade is occurring instead of a package removal?
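One technique worth knowing here, hedged because it shifts the work to the new package rather than fixing the old scriptlet: an RPM trigger in the new package fires around the old package's removal, so the obsoleting package can guard or perform the cleanup itself. A spec-file sketch, with oldpkgname as a placeholder:

    # In the NEW package's .spec file
    %triggerpostun -- oldpkgname
    # Runs after oldpkgname's postun during the obsoleting install;
    # migration or cleanup decisions can be made here instead.
    :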
centos, redhat, fedora, rpm, rpm-spec
5
1,532
1
https://stackoverflow.com/questions/48070042/how-to-detect-upgrade-when-an-rpm-that-obsoletes-another-rpm-is-being-installed
70,519,032
Why am I unable to update using YUM - using CentOS Stream 8
I'm unable to update any packages within CentOS Stream 8. I did create two subscriptions at [URL]:

- Subscription 1: 60 Day Product Trial of Red Hat Enterprise Linux Server with Smart Management, Monitoring, and all Add-Ons, Self-Supported (Physical or Virtual Nodes)
- Subscription 2: Red Hat Beta Access

I have assigned both of these subscriptions to my system and rebooted. When I attempt to check for updates, I receive the following:

    [lloyd@localhost ~]$ sudo yum check-update
    Updating Subscription Management repositories.
    Invalid configuration value: failovermethod=priority in /etc/yum.repos.d/epel.repo; Configuration: OptionBinding with id "failovermethod" does not exist
    (the same warning repeats three times in total for epel.repo, and ten times for /etc/yum.repos.d/pgdg-redhat-all.repo)
    Last metadata expiration check: 0:00:24 ago on Wed 29 Dec 2021 11:45:08 AM GMT.

Any advice as to where I'm going wrong would be greatly appreciated. Thanks.
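These warnings come from repo files carrying the yum-v3-era failovermethod option, which dnf (the engine behind yum on CentOS 8) no longer accepts. Deleting the offending lines is the usual cleanup, sketched here against the two files named in the output:

    sudo sed -i '/^failovermethod=/d' \
        /etc/yum.repos.d/epel.repo /etc/yum.repos.d/pgdg-redhat-all.repo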
linux, centos, redhat, fedora, yum
4
10,938
1
https://stackoverflow.com/questions/70519032/why-am-i-unable-to-update-using-yum-using-centos-stream-8
9,107,646
install make in redhat - make command not found
    -bash: make: command not found
    [root@Ritely r2]#

I am using RedHat and I need to install make. Any help, please. I am setting up a VPS server and need to install Python:

    $ cd reddit/r2
    $ make pyx
    $ python setup.py build
    $ sudo python setup.py develop
    $ make

It says that make is not found.
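On RHEL the fix is normally just installing the package (or the whole toolchain group), assuming the system can reach a yum repository:

    sudo yum install make
    # or, for the full build toolchain:
    sudo yum groupinstall "Development Tools"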
linux, redhat
4
34,800
2
https://stackoverflow.com/questions/9107646/install-make-in-redhat-make-command-not-found
2,144,037
Tomcat 6 Heap Size - Is this correct?
I am running multiple Tomcats on a Red Hat box and I would like to configure a separate heap size for each of them (some instances use more memory). Can I set the heap size min/max by entering the following into the catalina.sh file:

    CATALINA_OPTS="-Xms64m -Xmx256m"

Do I need to add 'export'? i.e.

    export CATALINA_OPTS="-Xms64m -Xmx256m"
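For reference, catalina.sh sources bin/setenv.sh if that file exists, which is the usual place to keep per-instance settings out of catalina.sh itself; inside catalina.sh a plain assignment generally suffices since that script launches the JVM. A sketch:

    # $CATALINA_BASE/bin/setenv.sh (one per Tomcat instance)
    export CATALINA_OPTS="-Xms64m -Xmx256m"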
java, tomcat6, redhat, heap-memory, catalina.out
4
14,478
2
https://stackoverflow.com/questions/2144037/tomcat-6-heap-size-is-this-correct
8,542,593
PECL and PHP Build Directory
I'm trying to install a PECL package, and I received this error. I'm unsure what to do about it, so I was hoping someone may be able to offer some help:

    # pecl install -f ssh2
    WARNING: failed to download pecl.php.net/ssh2 within preferred state "stable", will instead download version 0.11.3, stability "beta"
    downloading ssh2-0.11.3.tgz ...
    Starting to download ssh2-0.11.3.tgz (23,062 bytes)
    ........done: 23,062 bytes
    5 source files, building
    running: phpize
    Cannot find build files at '/usr/lib64/php/build'. Please check your PHP installation.
    ERROR: `phpize' failed
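A missing /usr/lib64/php/build directory usually means the PHP development package is not installed; on RHEL-style systems the hedged fix is:

    sudo yum install php-devel
    # the ssh2 extension also builds against libssh2:
    sudo yum install gcc libssh2-devel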
php, linux, shell, redhat
4
7,462
1
https://stackoverflow.com/questions/8542593/pecl-and-php-build-directory
51,176,209
HTTP_PROXY not working
I have a situation where I can download a package from the internet using curl through a proxy server with the command:

    curl -x [URL] --proxy-user <proxy_user>:<proxy_user_password> -L [URL] -o <package>.rpm

But if I set the proxy with the command:

    setenv HTTP_PROXY [URL]

and then try to get the package with the command:

    curl -O [URL] -o package.rpm

it doesn't work; it just returns a timeout after a long time. My intention is to provide that server access to the internet outside the internal network, and I'm using curl to test this. The OS is RedHat 6.9 and the shell that I'm using is /bin/tcsh. What am I doing wrong in this case? Thanks for any help.
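Worth checking, hedged: for plain-HTTP URLs curl only honors the lower-case http_proxy variable, and proxy credentials have to be embedded in it. A tcsh sketch with placeholders:

    # tcsh syntax; user, password, host and port are placeholders
    setenv http_proxy http://<proxy_user>:<proxy_user_password>@<proxy-host>:<port>
    curl -L -O http://<mirror>/<package>.rpm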
curl, proxy, redhat
4
6,978
1
https://stackoverflow.com/questions/51176209/http-proxy-not-working
27,900,895
Common Lisp on CentOS 7
I'm looking for a way to get a working Common Lisp compiler on CentOS 7. It seems that neither the base nor the EPEL repos contain any of the widely available open-source Lisp compilers. There are bits of info regarding CLISP and SBCL on CentOS 6, but none about any compiler on CentOS 7. Am I missing something here, or has the switch from RHEL6 to RHEL7 completely forgotten about CL compilers?
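A common workaround on CentOS 7, offered as a sketch with the exact version and URL omitted, is installing the upstream SBCL binary tarball from sbcl.org:

    # after downloading the x86-64 Linux binary release from sbcl.org:
    tar xjf sbcl-*-x86-64-linux-binary.tar.bz2
    cd sbcl-*-x86-64-linux
    sudo sh install.sh        # installs under /usr/local by default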
lisp, common-lisp, redhat, sbcl, centos7
4
5,794
5
https://stackoverflow.com/questions/27900895/common-lisp-on-centos-7
615,282
Where are default LS_COLORS set in RHEL 5.x?
In a terminal in Red Hat Enterprise Linux 5.x, running:

    $ env

returns (among other things):

    LS_COLORS=no=00:fi=00:di=01;34:ln=01;36:pi=40;33 . . .

Most of the content of LS_COLORS I find in the file /etc/DIR_COLORS, BUT the values "no=00:fi=00:di=01;34:ln=01;36:pi=40;33 etc." I have no success finding, even after grepping through the system. In what file(s) are these values defined? Yes, I know I can set the content of LS_COLORS to the values I please, but what I wonder about is where the values above are defined.
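A hedged pointer: on RHEL, /etc/profile.d/colorls.sh builds LS_COLORS by running dircolors against /etc/DIR_COLORS, and the no=00:fi=00:... portion comes from dircolors' compiled-in default database rather than any file on disk:

    # show where the variable is assembled at login
    grep -l dircolors /etc/profile.d/*
    # print the defaults compiled into dircolors itself
    dircolors --print-database | head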
terminal, redhat, rhel, ls-colors
4
18,209
3
https://stackoverflow.com/questions/615282/where-are-default-ls-colors-set-in-rhel-5-x
55,036,608
Oracle database installer in linux [INS-10102] Installer initialization failed
I am trying to install Oracle Database on Red Hat Enterprise Linux, and once I run the installer using:

    [oracle@linux database]$ ./runInstaller

the OUI shows the message:

    [INS-10102] Installer initialization failed.
    Cause - An unexpected error occurred while initializing the Installer.
    Action - Contact Oracle Support Services or refer logs
    Summary
    - [INS-10012] Setup driver initialization failed.
    - no oraInstaller in java.library.path

The log file shows this:

    ID: oracle.install.commons.util.exception.AbstractErrorAdvisor:8
    oracle.install.commons.base.driver.common.InstallerException: [INS-10102] Installer initialization failed.
        at oracle.install.commons.base.driver.common.Installer.run(Installer.java:534)
        at oracle.install.ivw.common.util.OracleInstaller.run(OracleInstaller.java:133)
        at oracle.install.ivw.db.driver.DBInstaller.run(DBInstaller.java:139)
        at oracle.install.commons.util.Application.startup(Application.java:1072)
        at oracle.install.commons.flow.FlowApplication.startup(FlowApplication.java:181)
        at oracle.install.commons.flow.FlowApplication.startup(FlowApplication.java:198)
        at oracle.install.commons.base.driver.common.Installer.startup(Installer.java:566)
        at oracle.install.ivw.db.driver.DBInstaller.startup(DBInstaller.java:127)
        at oracle.install.ivw.db.driver.DBInstaller.main(DBInstaller.java:165)
    Caused by: oracle.install.commons.base.driver.common.SetupDriverException: [INS-10012] Setup driver initialization failed.
        at oracle.install.driver.oui.OUIInstallDriver.load(OUIInstallDriver.java:431)
        at oracle.install.ivw.db.driver.DBSetupDriver.load(DBSetupDriver.java:289)
        at oracle.install.commons.base.driver.common.Installer.run(Installer.java:516)
        ... 8 more
    Caused by: java.lang.UnsatisfiedLinkError: no oraInstaller in java.library.path
        at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867)
        at java.lang.Runtime.loadLibrary0(Runtime.java:870)
        at java.lang.System.loadLibrary(System.java:1122)
        at oracle.sysman.oii.oiip.osd.unix.OiipuUnixOps.loadNativeLib(OiipuUnixOps.java:380)
        at oracle.sysman.oii.oiip.osd.unix.OiipuUnixOps.<clinit>(OiipuUnixOps.java:128)
        at oracle.sysman.oii.oiic.OiicPullSession.createDuplicateStreamsForLog(OiicPullSession.java:5382)
        at oracle.sysman.oii.oiic.OiicPullSession.createDuplicateStreams(OiicPullSession.java:5482)
        at oracle.sysman.oii.oiic.OiicAPIInstaller.initInstallEnvironment(OiicAPIInstaller.java:506)
        at oracle.install.driver.oui.OUIInstallDriver.load(OUIInstallDriver.java:422)
        ... 10 more

Here is a screenshot of the error.
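A hedged diagnostic: the UnsatisfiedLinkError means Java never loaded the oraInstaller native library, so locating it in the unpacked installer and checking its unresolved dependencies (often missing 32-bit libraries) narrows things down. The search path is illustrative:

    # find the native library the installer wants and test its dependencies
    lib=$(find /path/to/database -name 'liboraInstaller.so' | head -1)
    ldd "$lib" | grep 'not found'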
linux, database, oracle-database, redhat, database-installation
4
16,227
3
https://stackoverflow.com/questions/55036608/oracle-database-installer-in-linux-ins-10102-installer-initialization-failed
5,589,396
How to install java jdk in RHEL?
I am following example commands from other sites but it isn't helping! What am I doing wrong?

    chmod +x jdk-6u24-linux-i586-rpm.bin
    ./jdk-6u24-linux-i586-rpm.bin

Results give me:

    bash: ./jdk-6u24-linux-i586-rpm.bin: /bin/sh: bad interpreter: Permission denied

OK, after doing sh jdk-6u24-linux-i586-rpm.bin as suggested below, I get this (screenshot in the original post). Did the install fail? Is the file corrupted??? Thanks!!
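Hedged note: "/bin/sh: bad interpreter: Permission denied" usually means the .bin sits on a filesystem mounted noexec (a /tmp or home mount, say) rather than the file being corrupt. A quick check and workaround:

    mount | grep noexec             # is the current directory on one of these?
    sh jdk-6u24-linux-i586-rpm.bin  # invoking the shell directly sidesteps noexec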
linux, java, redhat
4
27,804
2
https://stackoverflow.com/questions/5589396/how-to-install-java-jdk-in-rhel
76,356,076
Cannot load from short array because "sun.awt.FontConfiguration.head" is null thrown with Java 17 and Jasper 6.20.0
We are upgrading our application to Java 17 (from Java 8) and Jasper to 6.20.0 (from 6.0.3). During this upgrade, Jasper reports fail with two exceptions. The fonts are already exported and used as an extension jar, which was working fine with Java 8 and Jasper 6.0.3; but once the upgrade is done, the following exceptions occur.

- OS: Red Hat Linux 7.9
- Tomcat: JWS 5.4 (-Djava.awt.headless=true)
- JDK: Oracle Java 17

    Caused by: java.lang.NullPointerException: Cannot load from short array because "sun.awt.FontConfiguration.head" is null
    Could not initialize class net.sf.jasperreports.engine.util.JRStyledTextParser.

I tried the following resolutions, but they failed:

- first tried to enable headless mode, but it did not resolve it
- most of the dependent optional jars for Jasper 6.20.0 were also added, but that did not resolve it
- the jasper file for the report was regenerated based on Java 17, but that did not help
- extracted the font from the extension jar and added it to the resources folder, but that did not resolve it
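This particular NPE is commonly the JVM finding no system fonts at all on a minimal server install; on RHEL the usual hedged fix is providing fontconfig plus at least one font package and refreshing the cache:

    sudo yum install fontconfig dejavu-sans-fonts
    sudo fc-cache -f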
java, oracle-database, nullpointerexception, jasper-reports, redhat
4
12,585
2
https://stackoverflow.com/questions/76356076/cannot-load-from-short-array-because-sun-awt-fontconfiguration-head-is-null-th
16,725,804
Hadoop pseudo distributed mode - Datanode and tasktracker not starting
I am running a Red Hat Enterprise Linux Server release 6.4 (Santiago) distribution with Hadoop 1.1.2 installed on it. I have made the required configurations to enable pseudo-distributed mode. But on trying to run Hadoop, the datanode and tasktracker don't start, and I am not able to copy any files to HDFS:

    [hduser@is-joshbloom-hadoop hadoop]$ hadoop dfs -put README.txt /input
    Warning: $HADOOP_HOME is deprecated.
    13/05/23 16:42:00 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /input could only be replicated to 0 nodes, instead of 1

Also, after trying hadoop-daemon.sh start datanode I get the message:

    starting datanode, logging to /usr/local/hadoop/libexec/../logs/hadoop-hduser-datanode-is-joshbloom-hadoop.out

The same goes for the tasktracker. But when I try the same command for the namenode, secondarynamenode and jobtracker, they seem to be running:

    namenode running as process 32933. Stop it first.

I tried the following solutions:

- reformatting the namenode
- reinstalling Hadoop
- installing a different version of Hadoop (1.0.4)

None seem to work. I have followed the same installation steps on my Mac and on an Amazon Ubuntu VM, and it works perfectly. How can I get Hadoop working? Thanks!

UPDATE: Here is the log entry of the datanode:

    2013-05-23 16:27:44,087 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
    /************************************************************
    STARTUP_MSG: Starting DataNode
    STARTUP_MSG:   host = java.net.UnknownHostException: is-joshbloom-hadoop: is-joshbloom-hadoop
    STARTUP_MSG:   args = []
    STARTUP_MSG:   version = 1.1.2
    STARTUP_MSG:   build = [URL] -r 1440782; compiled by 'hortonfo' on Thu Jan 31 02:03:24 UTC 2013
    ************************************************************/
    2013-05-23 16:27:44,382 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
    2013-05-23 16:27:44,432 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
    2013-05-23 16:27:44,446 ERROR org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Error getting localhost name. Using 'localhost'...
    java.net.UnknownHostException: is-joshbloom-hadoop: is-joshbloom-hadoop
        at java.net.InetAddress.getLocalHost(InetAddress.java:1438)
        at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.getHostname(MetricsSystemImpl.java:463)
        at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configureSystem(MetricsSystemImpl.java:394)
        at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configure(MetricsSystemImpl.java:390)
        at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.start(MetricsSystemImpl.java:152)
        at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.init(MetricsSystemImpl.java:133)
        at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.init(DefaultMetricsSystem.java:40)
        at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.initialize(DefaultMetricsSystem.java:50)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1589)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1608)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1734)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1751)
    Caused by: java.net.UnknownHostException: is-joshbloom-hadoop
        at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
        at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:866)
        at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1258)
        at java.net.InetAddress.getLocalHost(InetAddress.java:1434)
        ... 11 more
    2013-05-23 16:27:44,453 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
    2013-05-23 16:27:44,453 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
    2013-05-23 16:27:44,768 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
    2013-05-23 16:27:44,914 INFO org.apache.hadoop.util.NativeCodeLoader: Loaded the native-hadoop library
    2013-05-23 16:27:45,212 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.net.UnknownHostException: is-joshbloom-hadoop: is-joshbloom-hadoop
        at java.net.InetAddress.getLocalHost(InetAddress.java:1438)
        at org.apache.hadoop.security.SecurityUtil.getLocalHostName(SecurityUtil.java:271)
        at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:289)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:301)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1651)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1590)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1608)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1734)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1751)
    Caused by: java.net.UnknownHostException: is-joshbloom-hadoop
        at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
        at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:866)
        at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1258)
        at java.net.InetAddress.getLocalHost(InetAddress.java:1434)
        ... 8 more
    2013-05-23 16:27:45,228 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
    /************************************************************
    SHUTDOWN_MSG: Shutting down DataNode at java.net.UnknownHostException: is-joshbloom-hadoop: is-joshbloom-hadoop
    ************************************************************/

UPDATE: content of /etc/hosts:

    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
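The repeated java.net.UnknownHostException for is-joshbloom-hadoop points at name resolution rather than Hadoop itself; for a pseudo-distributed setup, the usual hedged fix is mapping the machine's own hostname in /etc/hosts:

    # append the machine's own hostname to the loopback address
    echo '127.0.0.1   is-joshbloom-hadoop' | sudo tee -a /etc/hosts
    hadoop-daemon.sh start datanode   # retry after the change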
Hadoop pseudo distributed mode - Datanode and tasktracker not starting I am running a Red Hat Enterprise Linux Server release 6.4 (Santiago) distribution with Hadoop 1.1.2 installed on it. I have made the required configurations to enable the pseudo distributed mode. But on trying to run hadoop, the datanode and tasktracker don't start. I am not able to copy any files to hdfs. [hduser@is-joshbloom-hadoop hadoop]$ hadoop dfs -put README.txt /input Warning: $HADOOP_HOME is deprecated. 13/05/23 16:42:00 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /input could only be replicated to 0 nodes, instead of 1 Also after trying hadoop-daemon.sh start datanode I get the message: starting datanode, logging to /usr/local/hadoop/libexec/../logs/hadoop-hduser-datanode-is-joshbloom-hadoop.out same goes for tasktracker. But when I try the same command for namenode, secondarynamenode, jobtracker they are seem to be running. namenode running as process 32933. Stop it first. I tried the following solutions: Reformatting namenode Reinstalling hadoop Installing different version of hadoop (1.0.4) None seem to work. I have followed the same installation steps on my Mac and on amazon ubuntu VM and it works perfectly. How can I get hadoop working? Thanks! *UPDATE** Here is the log entry of namenode 2013-05-23 16:27:44,087 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG: /************************************************************ STARTUP_MSG: Starting DataNode STARTUP_MSG: host = java.net.UnknownHostException: is-joshbloom-hadoop: is-joshbloom-hadoop STARTUP_MSG: args = [] STARTUP_MSG: version = 1.1.2 STARTUP_MSG: build = [URL] -r 1440782; compiled by 'hortonfo' on Thu Jan 31 02:03:24 UTC 2013 ************************************************************/ 2013-05-23 16:27:44,382 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties 2013-05-23 16:27:44,432 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered. 2013-05-23 16:27:44,446 ERROR org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Error getting localhost name. Using 'localhost'... 
java.net.UnknownHostException: is-joshbloom-hadoop: is-joshbloom-hadoop at java.net.InetAddress.getLocalHost(InetAddress.java:1438) at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.getHostname(MetricsSystemImpl.java:463) at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configureSystem(MetricsSystemImpl.java:394) at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configure(MetricsSystemImpl.java:390) at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.start(MetricsSystemImpl.java:152) at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.init(MetricsSystemImpl.java:133) at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.init(DefaultMetricsSystem.java:40) at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.initialize(DefaultMetricsSystem.java:50) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1589) at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1608) at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1734) at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1751) Caused by: java.net.UnknownHostException: is-joshbloom-hadoop at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method) at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:866) at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1258) at java.net.InetAddress.getLocalHost(InetAddress.java:1434) ... 11 more 2013-05-23 16:27:44,453 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s). 2013-05-23 16:27:44,453 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started 2013-05-23 16:27:44,768 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered. 2013-05-23 16:27:44,914 INFO org.apache.hadoop.util.NativeCodeLoader: Loaded the native-hadoop library 2013-05-23 16:27:45,212 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.net.UnknownHostException: is-joshbloom-hadoop: is-joshbloom-hadoop at java.net.InetAddress.getLocalHost(InetAddress.java:1438) at org.apache.hadoop.security.SecurityUtil.getLocalHostName(SecurityUtil.java:271) at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:289) at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:301) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1651) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1590) at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1608) at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1734) at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1751) Caused by: java.net.UnknownHostException: is-joshbloom-hadoop at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method) at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:866) at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1258) at java.net.InetAddress.getLocalHost(InetAddress.java:1434) ... 
8 more 2013-05-23 16:27:45,228 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG: /************************************************************ SHUTDOWN_MSG: Shutting down DataNode at java.net.UnknownHostException: is-joshbloom-hadoop: is-joshbloom-hadoop ************************************************************/ UPDATE 2: content of /etc/hosts 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
hadoop, hdfs, redhat
4
9,777
3
https://stackoverflow.com/questions/16725804/hadoop-pseudo-distributed-mode-datanode-and-tasktracker-not-starting
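Every trace in the question above reduces to java.net.UnknownHostException for the machine's own hostname, which the quoted /etc/hosts cannot resolve. A minimal sketch of the usual fix, assuming the hostname from the logs (is-joshbloom-hadoop):
# Map the machine's hostname to loopback so InetAddress.getLocalHost() resolves:
echo "127.0.0.1   is-joshbloom-hadoop" | sudo tee -a /etc/hosts
# Verify resolution, then restart the daemons from the Hadoop bin directory:
getent hosts is-joshbloom-hadoop
stop-all.sh; start-all.sh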
1,774,920
Can I use a shared library compiled on Ubuntu on a Redhat Linux machine?
I have compiled a shared library on my Ubuntu 9.10 desktop. I want to send the shared lib to a co-developer who has a Red Hat Enterprise 5 box. Can he use my shared lib on his machine?
Can I use a shared library compiled on Ubuntu on a Redhat Linux machine? I have compiled a shared library on my Ubuntu 9.10 desktop. I want to send the shared lib to a co-developer who has a Red Hat Enterprise 5 box. Can he use my shared lib on his machine?
c++, ubuntu, shared-libraries, redhat
4
5,214
5
https://stackoverflow.com/questions/1774920/can-i-use-a-shared-library-compiled-on-ubuntu-on-a-redhat-linux-machine
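For the shared-library question above, compatibility mostly hinges on the glibc/libstdc++ versions the .so was linked against, which are newer on Ubuntu 9.10 than on RHEL 5. A quick check, with libmylib.so as a placeholder name:
# List the versioned glibc symbols the library requires:
objdump -T libmylib.so | grep -o 'GLIBC_[0-9.]*' | sort -u
# Compare against the glibc available on the RHEL 5 box:
ldd --version | head -n 1
If the library needs a GLIBC/GLIBCXX version newer than RHEL 5 ships, it has to be rebuilt on (or targeting) the older system.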
63,043,312
Openshift - How to get current memory usage of POD List
I want to see the current memory usage of pods. I tried "oc get pods | grep elastic-*" to get pod details: elastic-index-5-kwz79 1/1 Running 0 1h elastic-index-5-lcfzp 1/1 Running 0 1h elastic-master-0 1/1 Running 0 1h elastic-master-1 1/1 Running 0 1h elastic-master-2 1/1 Running 0 1h elastic-query-2-wspl5 1/1 Running 0 1h The table shows status and uptime, but I am looking for current memory usage and total memory details. For example: Name Total Memory Available Memory elastic-index-5-kwz79 1024MB 723MB
Openshift - How to get current memory usage of POD List I want to see the current memory usage of pods. I tried "oc get pods | grep elastic-*" to get pod details: elastic-index-5-kwz79 1/1 Running 0 1h elastic-index-5-lcfzp 1/1 Running 0 1h elastic-master-0 1/1 Running 0 1h elastic-master-1 1/1 Running 0 1h elastic-master-2 1/1 Running 0 1h elastic-query-2-wspl5 1/1 Running 0 1h The table shows status and uptime, but I am looking for current memory usage and total memory details. For example: Name Total Memory Available Memory elastic-index-5-kwz79 1024MB 723MB
openshift, redhat, openshift-origin, openshift-enterprise
4
25,880
2
https://stackoverflow.com/questions/63043312/openshift-how-to-get-current-memory-usage-of-pod-list
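Per-pod usage comes from the cluster metrics API rather than oc get. A sketch, assuming metrics are enabled in the cluster and the pods live in the current project:
# Current memory/CPU usage per pod (requires cluster metrics):
oc adm top pods | grep '^elastic-'
# Or read one pod's cgroup counter directly (cgroup v1 path):
oc exec elastic-index-5-kwz79 -- cat /sys/fs/cgroup/memory/memory.usage_in_bytes
The "total memory" column would be the pod's memory limit, visible via oc describe pod elastic-index-5-kwz79.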
59,641,984
Red Hat: using <atomic> compiles fine but linker can't find __atomic_store_16; what library?
I'm using atomic<> for the first time, and just as using <thread> requires you to link a thread library, it seems like using <atomic> wants you to do... something. What? > uname -a Linux sdclxd00239 3.10.0-693.11.6.el7.x86_64 #1 SMP Thu Dec 28 14:23:39 EST 2017 x86_64 x86_64 x86_64 GNU/Linu > g++ Foo.cxx -g -o MsgQueueNoLock -pthread /tmp/ccnGOKUG.o: In function std::atomic<Ptr_T>::store(Ptr_T, std::memory_order)': /opt/rh/devtoolset-7/root/usr/include/c++/7/atomic:239: undefined reference to __atomic_store_16' /tmp/ccnGOKUG.o: In function std::atomic<Ptr_T>::load(std::memory_order) const': /opt/rh/devtoolset-7/root/usr/include/c++/7/atomic:250: undefined reference to __atomic_load_16' collect2: error: ld returned 1 exit status > g++ Foo.cxx -g -o Foo -pthread /tmp/ccnGOKUG.o: In function std::atomic<Ptr_T>::store(Ptr_T, std::memory_order)': /opt/rh/devtoolset-7/root/usr/include/c++/7/atomic:239: undefined reference to __atomic_store_16' /tmp/ccnGOKUG.o: In function std::atomic<Ptr_T>::load(std::memory_order) const': /opt/rh/devtoolset-7/root/usr/include/c++/7/atomic:250: undefined reference to __atomic_load_16' collect2: error: ld returned 1 exit status UPDATE: I need to use -latomic. Fair enough! However I can't find one I can actually use. First I look under /usr/lib, and I see I have a symlink under gcc/.../4.8.2 pointing to gcc/.../4.8.5 ?!!? I've never in my life seen an old version depend on a new version, though the timestamp causes me to suspect either manual intervention by someone in the past, or a complicated history. > l find /usr/lib -name '*atomic*' -rw-r--r--. 2 root root 1379 Jul 13 2017 /usr/lib/python2.7/site-packages/sos/plugins/atomichost.pyo -rw-r--r--. 2 root root 1379 Jul 13 2017 /usr/lib/python2.7/site-packages/sos/plugins/atomichost.pyc -rw-r--r--. 1 root root 1672 Jul 13 2017 /usr/lib/python2.7/site-packages/sos/plugins/atomichost.py -rw-r--r-- 1 root root 40 Sep 22 2017 /usr/lib/gcc/x86_64-redhat-linux/4.8.2/libatomic.so -rw-r--r-- 1 root root 38 Sep 22 2017 /usr/lib/gcc/x86_64-redhat-linux/4.8.2/32/libatomic.so lrwxrwxrwx 1 root root 44 Jul 3 2018 /usr/lib/gcc/x86_64-redhat-linux/4.8.2/32/libatomic.a -> ../../../i686-redhat-linux/4.8.5/libatomic.a Something on the 'net suggested I might find joy under /usr/local/lib but in fact joy is not to be found: > find /usr/local/lib -name '*atomic*' > The gcc actually installed is old (4.8.5) and I'm running 7.2.1 via the scl utility, which puts /opt/rh/devtoolset-7/root/usr/bin/gcc into the path. Anticipating that the requisite lib was probably delivered with the gcc, I look at /opt/rh/devtoolset-7 ... and like a bad dream the libatomic.a again is a symlink to a non-existent file.
> l find /opt/rh/devtoolset-7/ -name '*atomic*' : (headers elided) : -rw-r--r-- 1 root root 40975 Aug 31 2017 /opt/rh/devtoolset-7/root/usr/include/c++/7/atomic -rw-r--r-- 1 root root 80 Aug 31 2017 /opt/rh/devtoolset-7/root/usr/lib/gcc/x86_64-redhat-linux/7/32/libatomic.so -rw-r--r-- 1 root root 1553 Oct 6 2017 /opt/rh/devtoolset-7/root/usr/share/systemtap/tapset/linux/atomic.stp lrwxrwxrwx 1 root root 40 Jul 3 2018 /opt/rh/devtoolset-7/root/usr/lib/gcc/x86_64-redhat-linux/7/32/libatomic.a -> ../../../i686-redhat-linux/7/libatomic.a So using -L options with every path I can think of based on what find found, here's all the errors: > g++ MsgQueueNoLock.cxx -g -o MsgQueueNoLock -pthread -latomic /opt/rh/devtoolset-7/root/usr/libexec/gcc/x86_64-redhat-linux/7/ld: cannot find -latomic collect2: error: ld returned 1 exit status > g++ MsgQueueNoLock.cxx -g -o MsgQueueNoLock -pthread -L/usr/lib/gcc/x86_64-redhat-linux/4.8.2 -latomic /opt/rh/devtoolset-7/root/usr/libexec/gcc/x86_64-redhat-linux/7/ld: cannot find /usr/lib64/libatomic.so.1.0.0 collect2: error: ld returned 1 exit status > g++ MsgQueueNoLock.cxx -g -o MsgQueueNoLock -pthread -L/usr/lib/gcc/x86_64-redhat-linux/4.8.2/32 -latomic /opt/rh/devtoolset-7/root/usr/libexec/gcc/x86_64-redhat-linux/7/ld: cannot find /usr/lib/libatomic.so.1.0.0 /opt/rh/devtoolset-7/root/usr/libexec/gcc/x86_64-redhat-linux/7/ld: skipping incompatible /usr/lib/gcc/x86_64-redhat-linux/4.8.2/32/libstdc++.so when searching for -lstdc++ /opt/rh/devtoolset-7/root/usr/libexec/gcc/x86_64-redhat-linux/7/ld: skipping incompatible /usr/lib/gcc/x86_64-redhat-linux/4.8.2/32/libgcc_s.so when searching for -lgcc_s /opt/rh/devtoolset-7/root/usr/libexec/gcc/x86_64-redhat-linux/7/ld: skipping incompatible /usr/lib/gcc/x86_64-redhat-linux/4.8.2/32/libgcc.a when searching for libgcc.a collect2: error: ld returned 1 exit status > g++ MsgQueueNoLock.cxx -g -o MsgQueueNoLock -pthread -L/opt/rh/devtoolset-7/root/usr/lib/gcc/x86_64-redhat-linux/7/32 -latomic /opt/rh/devtoolset-7/root/usr/libexec/gcc/x86_64-redhat-linux/7/ld: skipping incompatible /opt/rh/devtoolset-7/root/usr/lib/gcc/x86_64-redhat-linux/7/32/libatomic.so when searching for -latomic /opt/rh/devtoolset-7/root/usr/libexec/gcc/x86_64-redhat-linux/7/ld: cannot find -latomic /opt/rh/devtoolset-7/root/usr/libexec/gcc/x86_64-redhat-linux/7/ld: skipping incompatible /opt/rh/devtoolset-7/root/usr/lib/gcc/x86_64-redhat-linux/7/32/libstdc++.so when searching for -lstdc++ /opt/rh/devtoolset-7/root/usr/libexec/gcc/x86_64-redhat-linux/7/ld: skipping incompatible /opt/rh/devtoolset-7/root/usr/lib/gcc/x86_64-redhat-linux/7/32/libgcc_s.so when searching for -lgcc_s /opt/rh/devtoolset-7/root/usr/libexec/gcc/x86_64-redhat-linux/7/ld: skipping incompatible /opt/rh/devtoolset-7/root/usr/lib/gcc/x86_64-redhat-linux/7/32/libgcc.a when searching for libgcc.a collect2: error: ld returned 1 exit status
Red Hat: using <atomic> compiles fine but linker can't find __atomic_store_16; what library? I'm using atomic<> for the first time, and just as using <thread> requires you to link a thread library, it seems like using <atomic> wants you to do... something. What? > uname -a Linux sdclxd00239 3.10.0-693.11.6.el7.x86_64 #1 SMP Thu Dec 28 14:23:39 EST 2017 x86_64 x86_64 x86_64 GNU/Linu > g++ Foo.cxx -g -o MsgQueueNoLock -pthread /tmp/ccnGOKUG.o: In function std::atomic<Ptr_T>::store(Ptr_T, std::memory_order)': /opt/rh/devtoolset-7/root/usr/include/c++/7/atomic:239: undefined reference to __atomic_store_16' /tmp/ccnGOKUG.o: In function std::atomic<Ptr_T>::load(std::memory_order) const': /opt/rh/devtoolset-7/root/usr/include/c++/7/atomic:250: undefined reference to __atomic_load_16' collect2: error: ld returned 1 exit status > g++ Foo.cxx -g -o Foo -pthread /tmp/ccnGOKUG.o: In function std::atomic<Ptr_T>::store(Ptr_T, std::memory_order)': /opt/rh/devtoolset-7/root/usr/include/c++/7/atomic:239: undefined reference to __atomic_store_16' /tmp/ccnGOKUG.o: In function std::atomic<Ptr_T>::load(std::memory_order) const': /opt/rh/devtoolset-7/root/usr/include/c++/7/atomic:250: undefined reference to __atomic_load_16' collect2: error: ld returned 1 exit status UPDATE: I need to use -latomic. Fair enough! However I can't find one I can actually use. First I look under /usr/lib, and I see I have a symlink under gcc/.../4.8.2 pointing to gcc/.../4.8.5 ?!!? I've never in my life seen an old version depend on a new version, though the timestamp causes me to suspect either manual intervention by someone in the past, or a complicated history. > l find /usr/lib -name '*atomic*' -rw-r--r--. 2 root root 1379 Jul 13 2017 /usr/lib/python2.7/site-packages/sos/plugins/atomichost.pyo -rw-r--r--. 2 root root 1379 Jul 13 2017 /usr/lib/python2.7/site-packages/sos/plugins/atomichost.pyc -rw-r--r--. 1 root root 1672 Jul 13 2017 /usr/lib/python2.7/site-packages/sos/plugins/atomichost.py -rw-r--r-- 1 root root 40 Sep 22 2017 /usr/lib/gcc/x86_64-redhat-linux/4.8.2/libatomic.so -rw-r--r-- 1 root root 38 Sep 22 2017 /usr/lib/gcc/x86_64-redhat-linux/4.8.2/32/libatomic.so lrwxrwxrwx 1 root root 44 Jul 3 2018 /usr/lib/gcc/x86_64-redhat-linux/4.8.2/32/libatomic.a -> ../../../i686-redhat-linux/4.8.5/libatomic.a Something on the 'net suggested I might find joy under /usr/local/lib but in fact joy is not to be found: > find /usr/local/lib -name '*atomic*' > The gcc actually installed is old (4.8.5) and I'm running 7.2.1 via the scl utility, which puts /opt/rh/devtoolset-7/root/usr/bin/gcc into the path. Anticipating that the requisite lib was probably delivered with the gcc, I look at /opt/rh/devtoolset-7 ... and like a bad dream the libatomic.a again is a symlink to a non-existent file.
> l find /opt/rh/devtoolset-7/ -name '*atomic*' : (headers elided) : -rw-r--r-- 1 root root 40975 Aug 31 2017 /opt/rh/devtoolset-7/root/usr/include/c++/7/atomic -rw-r--r-- 1 root root 80 Aug 31 2017 /opt/rh/devtoolset-7/root/usr/lib/gcc/x86_64-redhat-linux/7/32/libatomic.so -rw-r--r-- 1 root root 1553 Oct 6 2017 /opt/rh/devtoolset-7/root/usr/share/systemtap/tapset/linux/atomic.stp lrwxrwxrwx 1 root root 40 Jul 3 2018 /opt/rh/devtoolset-7/root/usr/lib/gcc/x86_64-redhat-linux/7/32/libatomic.a -> ../../../i686-redhat-linux/7/libatomic.a So using -L options with every path I can think of based on what find found, here's all the errors: > g++ MsgQueueNoLock.cxx -g -o MsgQueueNoLock -pthread -latomic /opt/rh/devtoolset-7/root/usr/libexec/gcc/x86_64-redhat-linux/7/ld: cannot find -latomic collect2: error: ld returned 1 exit status > g++ MsgQueueNoLock.cxx -g -o MsgQueueNoLock -pthread -L/usr/lib/gcc/x86_64-redhat-linux/4.8.2 -latomic /opt/rh/devtoolset-7/root/usr/libexec/gcc/x86_64-redhat-linux/7/ld: cannot find /usr/lib64/libatomic.so.1.0.0 collect2: error: ld returned 1 exit status > g++ MsgQueueNoLock.cxx -g -o MsgQueueNoLock -pthread -L/usr/lib/gcc/x86_64-redhat-linux/4.8.2/32 -latomic /opt/rh/devtoolset-7/root/usr/libexec/gcc/x86_64-redhat-linux/7/ld: cannot find /usr/lib/libatomic.so.1.0.0 /opt/rh/devtoolset-7/root/usr/libexec/gcc/x86_64-redhat-linux/7/ld: skipping incompatible /usr/lib/gcc/x86_64-redhat-linux/4.8.2/32/libstdc++.so when searching for -lstdc++ /opt/rh/devtoolset-7/root/usr/libexec/gcc/x86_64-redhat-linux/7/ld: skipping incompatible /usr/lib/gcc/x86_64-redhat-linux/4.8.2/32/libgcc_s.so when searching for -lgcc_s /opt/rh/devtoolset-7/root/usr/libexec/gcc/x86_64-redhat-linux/7/ld: skipping incompatible /usr/lib/gcc/x86_64-redhat-linux/4.8.2/32/libgcc.a when searching for libgcc.a collect2: error: ld returned 1 exit status > g++ MsgQueueNoLock.cxx -g -o MsgQueueNoLock -pthread -L/opt/rh/devtoolset-7/root/usr/lib/gcc/x86_64-redhat-linux/7/32 -latomic /opt/rh/devtoolset-7/root/usr/libexec/gcc/x86_64-redhat-linux/7/ld: skipping incompatible /opt/rh/devtoolset-7/root/usr/lib/gcc/x86_64-redhat-linux/7/32/libatomic.so when searching for -latomic /opt/rh/devtoolset-7/root/usr/libexec/gcc/x86_64-redhat-linux/7/ld: cannot find -latomic /opt/rh/devtoolset-7/root/usr/libexec/gcc/x86_64-redhat-linux/7/ld: skipping incompatible /opt/rh/devtoolset-7/root/usr/lib/gcc/x86_64-redhat-linux/7/32/libstdc++.so when searching for -lstdc++ /opt/rh/devtoolset-7/root/usr/libexec/gcc/x86_64-redhat-linux/7/ld: skipping incompatible /opt/rh/devtoolset-7/root/usr/lib/gcc/x86_64-redhat-linux/7/32/libgcc_s.so when searching for -lgcc_s /opt/rh/devtoolset-7/root/usr/libexec/gcc/x86_64-redhat-linux/7/ld: skipping incompatible /opt/rh/devtoolset-7/root/usr/lib/gcc/x86_64-redhat-linux/7/32/libgcc.a when searching for libgcc.a collect2: error: ld returned 1 exit status
c++, gcc, linker-errors, redhat, stdatomic
4
4,598
3
https://stackoverflow.com/questions/59641984/red-hat-using-atomic-compiles-fine-but-linker-cant-find-atomic-store-16-w
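The 16-byte __atomic_* builtins live in libatomic, and under devtoolset the usable x86_64 copies ship in a separate package rather than the gcc directories probed above (the 32-bit libatomic.so the linker kept skipping is "incompatible" only because the build targets x86_64). A sketch of the usual route; the package name is my assumption for RHEL 7 software collections:
sudo yum install devtoolset-7-libatomic-devel
scl enable devtoolset-7 'g++ MsgQueueNoLock.cxx -g -o MsgQueueNoLock -pthread -latomic'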
57,832,879
How to uninstall bazel 0.29.0 in order to install 0.26.1 because of tensorflow
I am using Red Hat 7.3. I need to install TensorFlow; for that I already installed Bazel 0.29.0, but when I wanted to configure TensorFlow it required Bazel 0.26.1. That's why I tried to uninstall Bazel 0.29.0, but I was not able to do it. I am new to the Red Hat community; could you please show me a way to solve this problem? Thanks in advance.
How to uninstall bazel 0.29.0 in order to install 0.26.1 because of tensorflow I am using Red Hat 7.3. I need to install TensorFlow; for that I already installed Bazel 0.29.0, but when I wanted to configure TensorFlow it required Bazel 0.26.1. That's why I tried to uninstall Bazel 0.29.0, but I was not able to do it. I am new to the Red Hat community; could you please show me a way to solve this problem? Thanks in advance.
tensorflow, redhat, bazel
4
11,003
4
https://stackoverflow.com/questions/57832879/how-to-uninstall-bazel-0-29-0-in-order-to-install-0-26-1-because-of-tensorflow
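Bazel on RHEL is usually installed with the upstream installer script rather than an RPM, so there is nothing for yum/rpm to remove; deleting the files the installer dropped is the uninstall. A sketch, assuming a --user install (paths are the installer's defaults):
# Remove the binaries and caches the installer created:
rm -rf ~/bin/bazel ~/.bazel ~/.cache/bazel
sudo rm -f /usr/local/bin/bazel; sudo rm -rf /usr/local/lib/bazel
# Install the version TensorFlow's configure expects:
wget https://github.com/bazelbuild/bazel/releases/download/0.26.1/bazel-0.26.1-installer-linux-x86_64.sh
chmod +x bazel-0.26.1-installer-linux-x86_64.sh
./bazel-0.26.1-installer-linux-x86_64.sh --user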
37,129,114
Cannot run solr (cannot open log file)
Been trying to get solr running for a while now... finally seemed like I got it. Following this tutorial. Ran this command bin/solr start And saw this text Waiting up to 30 seconds to see Solr running on port 8983 But then... Still not seeing Solr listening on 8983 after 30 seconds! tail: cannot open `/root/downloads/solr-6.0.0/server/logs/solr.log' for reading: No such file or directory This is getting highly frustrating. I tried to run the bin command with sudo, still no luck. What am I doing wrong? EDIT : I ran it in the foreground with bin/solr start -f and got this Exception in thread "main" java.lang.UnsupportedClassVersionError: org/eclipse/jetty/start/Main : Unsupported major.minor version 52.0 at java.lang.ClassLoader.defineClass1(Native Method) at java.lang.ClassLoader.defineClass(ClassLoader.java:643) at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) at java.net.URLClassLoader.defineClass(URLClassLoader.java:277) at java.net.URLClassLoader.access$000(URLClassLoader.java:73) at java.net.URLClassLoader$1.run(URLClassLoader.java:212) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:205) at java.lang.ClassLoader.loadClass(ClassLoader.java:323) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:296) at java.lang.ClassLoader.loadClass(ClassLoader.java:268) at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:406) My java-foo is not up to par, so I have no idea what to make of this.
Cannot run solr (cannot open log file) Been trying to get solr running for a while now... finally seemed like I got it. Following this tutorial. Ran this command bin/solr start And saw this text Waiting up to 30 seconds to see Solr running on port 8983 But then... Still not seeing Solr listening on 8983 after 30 seconds! tail: cannot open `/root/downloads/solr-6.0.0/server/logs/solr.log' for reading: No such file or directory This is getting highly frustrating. I tried to run the bin command with sudo, still no luck. What am I doing wrong? EDIT : I ran it in the foreground with bin/solr start -f and got this Exception in thread "main" java.lang.UnsupportedClassVersionError: org/eclipse/jetty/start/Main : Unsupported major.minor version 52.0 at java.lang.ClassLoader.defineClass1(Native Method) at java.lang.ClassLoader.defineClass(ClassLoader.java:643) at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) at java.net.URLClassLoader.defineClass(URLClassLoader.java:277) at java.net.URLClassLoader.access$000(URLClassLoader.java:73) at java.net.URLClassLoader$1.run(URLClassLoader.java:212) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:205) at java.lang.ClassLoader.loadClass(ClassLoader.java:323) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:296) at java.lang.ClassLoader.loadClass(ClassLoader.java:268) at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:406) My java-foo is not up to par, so I have no idea what to make of this.
java, solr, redhat, sudo, bin
4
12,790
6
https://stackoverflow.com/questions/37129114/cannot-run-solr-cannot-open-log-file
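"Unsupported major.minor version 52.0" means the Solr 6 class files need Java 8, while the JRE first on the PATH is older; the missing solr.log is just a symptom of Jetty never starting. A sketch, assuming an OpenJDK install via yum:
sudo yum install java-1.8.0-openjdk-devel
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk   # exact path varies per build
export PATH="$JAVA_HOME/bin:$PATH"
java -version    # should report 1.8.x before retrying bin/solr start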
62,275,565
Unable to register with subscription-manager on redhat 7.4 - 'NoneType' object has no attribute '__getitem__'
I am new to Red Hat Linux and installed version 7.4 on VirtualBox. According to the Red Hat installation steps, I first need to register the system with Red Hat before downloading packages. The command used is subscription-manager register --username xxxxxxx --password xxxxxxx --auto-attach and the output is 'NoneType' object has no attribute '__getitem__' The username and password are correct on the Red Hat website. I have gone through Red Hat Bugzilla tickets and the solutions provided by customer support, but nothing worked for me. Please help me to resolve this issue.
Unable to register with subscription-manager on redhat 7.4 - 'NoneType' object has no attribute '__getitem__' I am new to Red Hat Linux and installed version 7.4 on VirtualBox. According to the Red Hat installation steps, I first need to register the system with Red Hat before downloading packages. The command used is subscription-manager register --username xxxxxxx --password xxxxxxx --auto-attach and the output is 'NoneType' object has no attribute '__getitem__' The username and password are correct on the Red Hat website. I have gone through Red Hat Bugzilla tickets and the solutions provided by customer support, but nothing worked for me. Please help me to resolve this issue.
redhat
4
6,685
2
https://stackoverflow.com/questions/62275565/unable-to-register-with-subscription-manager-on-redhat-7-4-nonetype-object-h
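A Python traceback like this usually points at stale local registration state or a buggy subscription-manager build rather than bad credentials. A hedged first step (clean is harmless if the system was never registered):
sudo subscription-manager clean
sudo subscription-manager register --username xxxxxxx --password xxxxxxx --auto-attach
# If it still fails, the full traceback is typically logged here:
sudo tail -n 50 /var/log/rhsm/rhsm.log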
37,310,833
Installing xdebug on php 5.6 Amazon Linux AMI
I created an Elastic Beanstalk Environment ID_LIKE="rhel fedora" VERSION_ID="2016.03" PRETTY_NAME="Amazon Linux AMI 2016.03" ANSI_COLOR="0;33" CPE_NAME="cpe:/o:amazon:linux:2016.03:ga" HOME_URL="[URL] I'm trying to install xdebug using sudo yum install php-pecl-xdebug But I keep getting the following error: Loaded plugins: priorities, update-motd, upgrade-helper Resolving Dependencies --> Running transaction check ---> Package php-pecl-xdebug.x86_64 0:2.2.3-1.5.amzn1 will be installed --> Processing Dependency: php(api) = 20090626-x86-64 for package: php-pecl-xdebug-2.2.3-1.5.amzn1.x86_64 --> Processing Dependency: php(zend-abi) = 20090626-x86-64 for package: php-pecl-xdebug-2.2.3-1.5.amzn1.x86_64 --> Running transaction check ---> Package php-common.x86_64 0:5.3.29-1.8.amzn1 will be installed --> Processing Conflict: php56-common-5.6.21-1.124.amzn1.x86_64 conflicts php-common < 5.5.22-1.98 --> Finished Dependency Resolution Error: php56-common conflicts with php-common-5.3.29-1.8.amzn1.x86_64 You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest What should I be using instead? And for reference, how do I figure out which packages are available? Thanks a lot.
Installing xdebug on php 5.6 Amazon Linux AMI I created an Elastic Beanstalk Environment ID_LIKE="rhel fedora" VERSION_ID="2016.03" PRETTY_NAME="Amazon Linux AMI 2016.03" ANSI_COLOR="0;33" CPE_NAME="cpe:/o:amazon:linux:2016.03:ga" HOME_URL="[URL] I'm trying to install xdebug using sudo yum install php-pecl-xdebug But I keep getting the following error: Loaded plugins: priorities, update-motd, upgrade-helper Resolving Dependencies --> Running transaction check ---> Package php-pecl-xdebug.x86_64 0:2.2.3-1.5.amzn1 will be installed --> Processing Dependency: php(api) = 20090626-x86-64 for package: php-pecl-xdebug-2.2.3-1.5.amzn1.x86_64 --> Processing Dependency: php(zend-abi) = 20090626-x86-64 for package: php-pecl-xdebug-2.2.3-1.5.amzn1.x86_64 --> Running transaction check ---> Package php-common.x86_64 0:5.3.29-1.8.amzn1 will be installed --> Processing Conflict: php56-common-5.6.21-1.124.amzn1.x86_64 conflicts php-common < 5.5.22-1.98 --> Finished Dependency Resolution Error: php56-common conflicts with php-common-5.3.29-1.8.amzn1.x86_64 You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest What should I be using instead? And for reference, how do I figure out which packages are available? Thanks a lot.
linux, amazon-web-services, redhat, xdebug, fedora
4
3,452
1
https://stackoverflow.com/questions/37310833/installing-xdebug-on-php-5-6-amazon-linux-ami
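The conflict arises because php-pecl-xdebug is built for the base php 5.3 stack while the php56-* packages are installed. A sketch of the two usual routes; the php56 package name is my assumption for that AMI:
# See which xdebug builds the repos actually carry:
yum search xdebug
# Install the php 5.6-matched build if one exists:
sudo yum install php56-pecl-xdebug
# Otherwise build it against the installed php56 via PECL:
sudo yum install php56-devel gcc
sudo pecl install xdebug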
16,575,680
rpmbuild failing error: Installed (but unpackaged) file(s) found:
I looked around but none of the answers to this same error message worked in my simple package... I am building the rpm using rpmbuild on Redhat ES 6 and no matter what I have done in my spec file I get the same results. Thank you in advance for your help. Here is my spec file: Name: package Version: 3.2.5 Release: redhat Summary: Company package gateway pos server Group: Engineering License: Company LLC - owned URL: [URL] Source: %{name}.tar.gz %description The Company package gateway server provides a key component in the Company system architecture which passes information between the clients and the API. %prep %setup -n %{name} %build %define debug_package %{nil} %install mkdir -p $RPM_BUILD_ROOT/srv/package/gateways/config mkdir -p $RPM_BUILD_ROOT/srv/package/gateways/logs install -m 700 gateway $RPM_BUILD_ROOT/srv/package/ install -m 700 gatewayclient.conf $RPM_BUILD_ROOT/srv/package/ install -m 700 gateway.conf $RPM_BUILD_ROOT/srv/package/ install -m 700 rules.conf $RPM_BUILD_ROOT/srv/package/ install -m 700 gatewaytest.conf $RPM_BUILD_ROOT/srv/package/ install -m 700 gateways/bci.exe $RPM_BUILD_ROOT/srv/package/gateways/ install -m 700 gateways/config/bci_iso8583.conf $RPM_BUILD_ROOT/srv/package/gateways/config/ %post %clean rm -rf %{buildroot} rm -rf $RPM_BUILD_ROOT rm -rf %{_tmppath/%{name} rm -rf %{_topdir}/BUILD%{name} %files -f %{name}.lang %defattr(-,root,root,-) /srv/ /srv/package/ /srv/package/gateways/ /srv/package/gateways/logs/ /srv/package/gateways/config/ /srv/package/gateway /srv/package/gatewayclient.conf /srv/package/gateway.conf /srv/package/gatewaytest.conf /srv/package/rules.conf /srv/package/gateways/bci.exe /srv/package/gateways/config/bci_iso8583.conf %changelog * Thurs May 09 2013 Owner - 1.0 r1 First release The error message is here: Checking for unpackaged file(s): /usr/lib/rpm/check-files /home/rpmbuild/rpmbuild/BUILDROOT/package-3.2.5-redhat.x86_64 error: Installed (but unpackaged) file(s) found: /srv/package/gateways/bci.exe /srv/package/gateways/config/bci_iso8583.conf /srv/package/gateway /srv/package/gateway.conf /srv/package/gatewayclient.conf /srv/package/gatewaytest.conf /srv/package/rules.conf RPM build errors: Installed (but unpackaged) file(s) found: /srv/package/gateways/bci.exe /srv/package/gateways/config/bci_iso8583.conf /srv/package/gateway /srv/package/gateway.conf /srv/package/gatewayclient.conf /srv/package/gatewaytest.conf /srv/package/rules.conf Edit: Reran with suggestions below and got these results: Checking for unpackaged file(s): /usr/lib/rpm/check-files /home/rpmbuild/rpmbuild/BUILDROOT/package-3.2.5-redhat.x86_64 error: Installed (but unpackaged) file(s) found: /srv/package/gateways/bci.exe /srv/package/gateways/config/bci_iso8583.conf /srv/package/gateway /srv/package/gateway.conf /srv/package/gatewayclient.conf /srv/package/gatewaytest.conf /srv/package/rules.conf RPM build errors: Installed (but unpackaged) file(s) found: /srv/package/gateways/bci.exe /srv/package/gateways/config/bci_iso8583.conf /srv/package/gateway /srv/package/gateway.conf /srv/package/gatewayclient.conf /srv/package/gatewaytest.conf /srv/package/rules.conf
rpmbuild failing error: Installed (but unpackaged) file(s) found: I looked around but none of the answers to this same error message worked in my simple package... I am building the rpm using rpmbuild on Redhat ES 6 and no matter what I have done in my spec file I get the same results. Thank you in advance for your help. Here is my spec file: Name: package Version: 3.2.5 Release: redhat Summary: Company package gateway pos server Group: Engineering License: Company LLC - owned URL: [URL] Source: %{name}.tar.gz %description The Company package gateway server provides a key component in the Company system architecture which passes information between the clients and the API. %prep %setup -n %{name} %build %define debug_package %{nil} %install mkdir -p $RPM_BUILD_ROOT/srv/package/gateways/config mkdir -p $RPM_BUILD_ROOT/srv/package/gateways/logs install -m 700 gateway $RPM_BUILD_ROOT/srv/package/ install -m 700 gatewayclient.conf $RPM_BUILD_ROOT/srv/package/ install -m 700 gateway.conf $RPM_BUILD_ROOT/srv/package/ install -m 700 rules.conf $RPM_BUILD_ROOT/srv/package/ install -m 700 gatewaytest.conf $RPM_BUILD_ROOT/srv/package/ install -m 700 gateways/bci.exe $RPM_BUILD_ROOT/srv/package/gateways/ install -m 700 gateways/config/bci_iso8583.conf $RPM_BUILD_ROOT/srv/package/gateways/config/ %post %clean rm -rf %{buildroot} rm -rf $RPM_BUILD_ROOT rm -rf %{_tmppath/%{name} rm -rf %{_topdir}/BUILD%{name} %files -f %{name}.lang %defattr(-,root,root,-) /srv/ /srv/package/ /srv/package/gateways/ /srv/package/gateways/logs/ /srv/package/gateways/config/ /srv/package/gateway /srv/package/gatewayclient.conf /srv/package/gateway.conf /srv/package/gatewaytest.conf /srv/package/rules.conf /srv/package/gateways/bci.exe /srv/package/gateways/config/bci_iso8583.conf %changelog * Thurs May 09 2013 Owner - 1.0 r1 First release The error message is here: Checking for unpackaged file(s): /usr/lib/rpm/check-files /home/rpmbuild/rpmbuild/BUILDROOT/package-3.2.5-redhat.x86_64 error: Installed (but unpackaged) file(s) found: /srv/package/gateways/bci.exe /srv/package/gateways/config/bci_iso8583.conf /srv/package/gateway /srv/package/gateway.conf /srv/package/gatewayclient.conf /srv/package/gatewaytest.conf /srv/package/rules.conf RPM build errors: Installed (but unpackaged) file(s) found: /srv/package/gateways/bci.exe /srv/package/gateways/config/bci_iso8583.conf /srv/package/gateway /srv/package/gateway.conf /srv/package/gatewayclient.conf /srv/package/gatewaytest.conf /srv/package/rules.conf Edit: Reran with suggestions below and got these results: Checking for unpackaged file(s): /usr/lib/rpm/check-files /home/rpmbuild/rpmbuild/BUILDROOT/package-3.2.5-redhat.x86_64 error: Installed (but unpackaged) file(s) found: /srv/package/gateways/bci.exe /srv/package/gateways/config/bci_iso8583.conf /srv/package/gateway /srv/package/gateway.conf /srv/package/gatewayclient.conf /srv/package/gatewaytest.conf /srv/package/rules.conf RPM build errors: Installed (but unpackaged) file(s) found: /srv/package/gateways/bci.exe /srv/package/gateways/config/bci_iso8583.conf /srv/package/gateway /srv/package/gateway.conf /srv/package/gatewayclient.conf /srv/package/gatewaytest.conf /srv/package/rules.conf
linux, package, redhat, rpm, rpmbuild
4
25,616
3
https://stackoverflow.com/questions/16575680/rpmbuild-failing-error-installed-but-unpackaged-files-found
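One thing worth checking in the spec above: %files -f %{name}.lang reads an external file manifest, but nothing shown in %install generates package.lang (that file normally comes from the %find_lang macro), so the manifest may be empty or stale and the %files list may not be applied as expected. A hedged debugging sketch:
# Try the build once with the -f manifest removed (a plain %files section).
# Compare what actually landed in the buildroot against the %files list:
find /home/rpmbuild/rpmbuild/BUILDROOT/package-3.2.5-redhat.x86_64 -type f | sort
# Blunt workaround while debugging (real rpm macro, not a fix):
echo '%_unpackaged_files_terminate_build 0' >> ~/.rpmmacros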
5,820,778
can't run program in RedHat
I wrote a simple program: int main(){ printf("hello word!"); return 0; } I compiled it using gcc -o hello hello.c (no errors), but when I run it in the terminal using ./hello I see nothing. Why? Thanks in advance.
can't run program in RedHat I wrote a simple program: int main(){ printf("hello word!"); return 0; } I compiled it using gcc -o hello hello.c (no errors), but when I run it in the terminal using ./hello I see nothing. Why? Thanks in advance.
c, linux, redhat
4
531
6
https://stackoverflow.com/questions/5820778/cant-run-program-in-redhat
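A likely explanation: the program does print, but the printf format has no trailing newline, so the text is fused into (or overwritten by) the shell prompt. Quick checks from the shell:
./hello | cat; echo           # make the output visible, then add a newline
./hello | od -c | head -n 2   # show the raw bytes actually written
Adding \n to the printf format string (or calling fflush(stdout) before returning) makes the output appear normally.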
29,310,423
R redhat uninstall
I am trying to uninstall R on Red Hat 6. I was able to install it successfully, but in the course of trying to install some non-R packages I ended up deleting some directories that apparently contained R source files, and now I can't remove R or reinstall it. When I try to run R I get this message: /usr/bin/R: line 236: /usr/lib64/R/etc/ldpaths: No such file or directory yum remove R gives this: Downloading Packages: Running rpm_check_debug Running Transaction Test Transaction Test Succeeded Running Transaction Erasing : R-3.1.2-1.el6.x86_64 1/1 Verifying : R-3.1.2-1.el6.x86_64 1/1 Removed: R.x86_64 0:3.1.2-1.el6 But when I try to install R with yum install R I get: Downloading Packages: R-3.1.2-1.el6.x86_64.rpm | 23 kB 00:00 Running rpm_check_debug Running Transaction Test Transaction Test Succeeded Running Transaction Installing : R-3.1.2-1.el6.x86_64 1/1 Verifying : R-3.1.2-1.el6.x86_64 1/1 Installed: R.x86_64 0:3.1.2-1.el6 But the same error is thrown when I try to open an R shell. yum reinstall R also doesn't work. I'm guessing yum remove R isn't really removing it entirely, and the issue seems to be the missing ldpaths file. Any help on how to resolve this and clear R from my machine entirely would be great. Thanks.
R redhat uninstall I am trying to uninstall R on Red Hat 6. I was able to install it successfully, but in the course of trying to install some non-R packages I ended up deleting some directories that apparently contained R source files, and now I can't remove R or reinstall it. When I try to run R I get this message: /usr/bin/R: line 236: /usr/lib64/R/etc/ldpaths: No such file or directory yum remove R gives this: Downloading Packages: Running rpm_check_debug Running Transaction Test Transaction Test Succeeded Running Transaction Erasing : R-3.1.2-1.el6.x86_64 1/1 Verifying : R-3.1.2-1.el6.x86_64 1/1 Removed: R.x86_64 0:3.1.2-1.el6 But when I try to install R with yum install R I get: Downloading Packages: R-3.1.2-1.el6.x86_64.rpm | 23 kB 00:00 Running rpm_check_debug Running Transaction Test Transaction Test Succeeded Running Transaction Installing : R-3.1.2-1.el6.x86_64 1/1 Verifying : R-3.1.2-1.el6.x86_64 1/1 Installed: R.x86_64 0:3.1.2-1.el6 But the same error is thrown when I try to open an R shell. yum reinstall R also doesn't work. I'm guessing yum remove R isn't really removing it entirely, and the issue seems to be the missing ldpaths file. Any help on how to resolve this and clear R from my machine entirely would be great. Thanks.
linux, r, redhat, uninstallation, yum
4
10,772
4
https://stackoverflow.com/questions/29310423/r-redhat-uninstall
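Note that the R package here is a meta-package; the files live in sub-packages like R-core, and rpm only removes files it originally installed, so a manually damaged tree survives the erase/install cycle. A sketch of a fuller cleanup (the directory paths are the usual ones for the EPEL build):
sudo yum remove R R-core R-core-devel R-devel
sudo rm -rf /usr/lib64/R /usr/share/R   # wipe leftovers rpm no longer tracks
sudo yum install R
/usr/bin/R --version                    # ldpaths should exist again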
8,998,743
RHEL + PHP : writing files outside /var/www/html?
I'm trying to open a file for read/write. I've been developing on Ubuntu, and have had no problems whatsoever. Now it's time to deploy to the RHEL server, and I discover there seems to be some kind of restriction on the location of a file to be written. On RHEL, I can't open the file unless it's under /var/www/html. I can't figure out how to allow other locations. I need to manipulate files on a different volume, for disk space management reasons. The following is the bit of code that works fine on Ubuntu no matter what, but breaks on RHEL if the file is outside the web root: $repometa = fopen( "/path/to/file/it/does/exist/and/has/good/perms", "r+b"); The actual error is as follows, which is weird, because the permissions are just fine (owned by the "apache" user, with 0644 perms on file, 755 on dirs). fopen(<thefile>): failed to open stream: Permission denied Can someone point me to the documents that describe how to un-break RHEL's Apache/PHP config to allow writing to alternate locations on the file system? Thanks, ~ Paul
RHEL + PHP : writing files outside /var/www/html? I'm trying to open a file for read/write. I've been developing on Ubuntu, and have had no problems whatsoever. Now it's time to deploy to the RHEL server, and I discover there seems to be some kind of restriction on the location of a file to be written. On RHEL, I can't open the file unless it's under /var/www/html. I can't figure out how to allow other locations. I need to manipulate files on a different volume, for disk space management reasons. The following is the bit of code that works fine on Ubuntu no matter what, but breaks on RHEL if the file is outside the web root: $repometa = fopen( "/path/to/file/it/does/exist/and/has/good/perms", "r+b"); The actual error is as follows, which is weird, because the permissions are just fine (owned by the "apache" user, with 0644 perms on file, 755 on dirs). fopen(<thefile>): failed to open stream: Permission denied Can someone point me to the documents that describe how to un-break RHEL's Apache/PHP config to allow writing to alternate locations on the file system? Thanks, ~ Paul
php, apache, redhat, rhel
4
5,620
3
https://stackoverflow.com/questions/8998743/rhel-php-writing-files-outside-var-www-html
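On RHEL the usual culprit here is SELinux rather than Unix permissions: httpd runs in a confined domain that by default may only write to httpd-labeled paths under the web root, which Ubuntu of that era did not enforce. A sketch, using /data/repo as a stand-in for the real directory:
# Confirm it's an SELinux denial:
sudo ausearch -m avc -ts recent | grep httpd
# Label the target tree writable for httpd and apply the context:
sudo semanage fcontext -a -t httpd_sys_rw_content_t '/data/repo(/.*)?'
sudo restorecon -Rv /data/repo
Temporarily running sudo setenforce 0 and retrying the fopen() is a quick way to verify the diagnosis before changing labels.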
7,899,214
Shell script continues to run even after exit command
My shell script is as shown below: #!/bin/bash # Make sure only root can run our script [ $EUID -ne 0 ] && (echo "This script must be run as root" 1>&2) || (exit 1) # other script continues here... When I run the above script as a non-root user, it prints the message "This script..." but it does not exit there; it continues with the remaining script. What am I doing wrong? Note: I don't want to use an if condition.
Shell script continues to run even after exit command My shell script is as shown below: #!/bin/bash # Make sure only root can run our script [ $EUID -ne 0 ] && (echo "This script must be run as root" 1>&2) || (exit 1) # other script continues here... When I run the above script as a non-root user, it prints the message "This script..." but it does not exit there; it continues with the remaining script. What am I doing wrong? Note: I don't want to use an if condition.
linux, shell, redhat
4
6,326
3
https://stackoverflow.com/questions/7899214/shell-script-continues-to-run-even-after-exit-command
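Two things go wrong in the line above: (exit 1) runs in a subshell, so it can never terminate the script, and the && ... || ... chain routes the exit to the wrong case (the echo succeeds, so || never fires for non-root users). Grouping with braces keeps the exit in the current shell, still without an if:
[ "$EUID" -ne 0 ] && { echo "This script must be run as root" 1>&2; exit 1; }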
59,374,179
Password is expired just after user is added to FreeIPA?
I have set up a FreeIPA server. I am facing an issue where the password is expired as soon as a user is first created, so a new user must always set a new password when he logs in for the first time, as defined here. But I don't want this feature. I am using this library to create or add users in FreeIPA. So, I connect to FreeIPA like this: private function getIPA() { $host = env('FREEIPA_HOST', 'cloud-host-ipa.com'); $certificate = database_path(env('FREEIPA_CERTIFICATE', 'ca.crt')); try { return new \FreeIPA\APIAccess\Main($host, $certificate); } catch (Exception $e) { throw new \ErrorException("Error {$e->getCode()}: {$e->getMessage()}"); return false; } } private function getIPAConnection() //Get authenticated admin IPA connection { $ipa = $this->getIPA(); try { $auth = $ipa->connection()->authenticate(env('FREEIPA_ADMIN_NAME', 'oc-ipa-connector'), env('FREEIPA_ADMIN_PASS', 'ADMIN_PASS')); if ($auth) { return $ipa; } else { $auth_info = $ipa->connection()->getAuthenticationInfo(); $auth_info = implode(' ', $auth_info); throw new \ErrorException("\nLogin Failed : {$auth_info}"); //return false; } } catch (Exception $e) { throw new \ErrorException("\nError {$e->getCode()}: {$e->getMessage()}"); //return false; } } Then I add a user like this: $ipa = $this->getIPAConnection(); try { $new_user_data = array( 'givenname' => $givenname, 'sn' => $sn, 'uid' => $uid, //'userpassword' => $_POST["userpassword"], 'mail' => $mail, 'mobile' => $phone ); $add_user = $ipa->user()->add($new_user_data); if ($add_user) { return true; } } catch (Exception $e) { throw new \ErrorException("Error {$e->getCode()}: {$e->getMessage()}"); return false; } This code works fine and the user is added. Then I set the password with this code: $ipa = $this->getIPAConnection(); try { $user_info = $ipa->user()->get($uid); if($user_info != false) { try { $new_user_data = array( 'userpassword' => $password, ); $mod_user = $ipa->user()->modify($uid, $new_user_data); if ($mod_user) { return true; } else { return false; } } catch (Exception $e) { throw new \ErrorException("Error {$e->getCode()}: {$e->getMessage()}"); } } } catch (Exception $e) { throw new \ErrorException("Error {$e->getCode()}: {$e->getMessage()}"); } The password is also set perfectly, but it is expired automatically just after it is set. I want my users to keep this password for at least 1 week, so I want to disable this feature. Is there any practical way? Edit: I have created this issue in FreeIPA asking for a workaround, but the issue was closed and marked as Closed: wontfix. So, I wonder if a workaround exists?
Password is expired just after user is added to FreeIPA? I have set up a FreeIPA server. I am facing an issue where the password is expired as soon as a user is first created, so a new user must always set a new password when he logs in for the first time, as defined here. But I don't want this feature. I am using this library to create or add users in FreeIPA. So, I connect to FreeIPA like this: private function getIPA() { $host = env('FREEIPA_HOST', 'cloud-host-ipa.com'); $certificate = database_path(env('FREEIPA_CERTIFICATE', 'ca.crt')); try { return new \FreeIPA\APIAccess\Main($host, $certificate); } catch (Exception $e) { throw new \ErrorException("Error {$e->getCode()}: {$e->getMessage()}"); return false; } } private function getIPAConnection() //Get authenticated admin IPA connection { $ipa = $this->getIPA(); try { $auth = $ipa->connection()->authenticate(env('FREEIPA_ADMIN_NAME', 'oc-ipa-connector'), env('FREEIPA_ADMIN_PASS', 'ADMIN_PASS')); if ($auth) { return $ipa; } else { $auth_info = $ipa->connection()->getAuthenticationInfo(); $auth_info = implode(' ', $auth_info); throw new \ErrorException("\nLogin Failed : {$auth_info}"); //return false; } } catch (Exception $e) { throw new \ErrorException("\nError {$e->getCode()}: {$e->getMessage()}"); //return false; } } Then I add a user like this: $ipa = $this->getIPAConnection(); try { $new_user_data = array( 'givenname' => $givenname, 'sn' => $sn, 'uid' => $uid, //'userpassword' => $_POST["userpassword"], 'mail' => $mail, 'mobile' => $phone ); $add_user = $ipa->user()->add($new_user_data); if ($add_user) { return true; } } catch (Exception $e) { throw new \ErrorException("Error {$e->getCode()}: {$e->getMessage()}"); return false; } This code works fine and the user is added. Then I set the password with this code: $ipa = $this->getIPAConnection(); try { $user_info = $ipa->user()->get($uid); if($user_info != false) { try { $new_user_data = array( 'userpassword' => $password, ); $mod_user = $ipa->user()->modify($uid, $new_user_data); if ($mod_user) { return true; } else { return false; } } catch (Exception $e) { throw new \ErrorException("Error {$e->getCode()}: {$e->getMessage()}"); } } } catch (Exception $e) { throw new \ErrorException("Error {$e->getCode()}: {$e->getMessage()}"); } The password is also set perfectly, but it is expired automatically just after it is set. I want my users to keep this password for at least 1 week, so I want to disable this feature. Is there any practical way? Edit: I have created this issue in FreeIPA asking for a workaround, but the issue was closed and marked as Closed: wontfix. So, I wonder if a workaround exists?
linux, laravel, authentication, redhat, freeipa
4
6,874
2
https://stackoverflow.com/questions/59374179/password-is-expired-just-after-user-is-added-to-freeipa
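Expiring an admin-set password is deliberate in FreeIPA (a password set on a user's behalf is marked pre-expired), which is why the ticket was closed wontfix. The workaround commonly cited is rewriting the krbPasswordExpiration attribute over LDAP as Directory Manager after setting the password; a sketch, with placeholder DN and date:
ldapmodify -x -D 'cn=Directory Manager' -W <<'EOF'
dn: uid=jdoe,cn=users,cn=accounts,dc=example,dc=com
changetype: modify
replace: krbPasswordExpiration
krbPasswordExpiration: 20301231000000Z
EOF
The same modify could be issued from the PHP side with ldap_modify() right after the user()->modify() call.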
39,827,753
/usr/bin/ld: attempted static link of dynamic object `/usr/lib64/libm.so'
I'm not experienced at building with gcc at all and now require some help. I have code that is built with the following options: gcc \ -g myCode.C \ -O \ -o myCode \ -I. \ -L. \ -L/usr/lib64 \ -lstdc++ \ -Wreturn-type \ -Wswitch \ -Wcomment \ -Wformat \ -Wchar-subscripts \ -Wparentheses \ -Wpointer-arith \ -Wcast-qual \ -Woverloaded-virtual \ -Wno-write-strings /usr/lib64/libm.so \ -Wno-deprecated When I compile myCode.C on a Red Hat 6 machine, the result does not work on older versions of the OS, throwing these errors: /usr/lib64/libstdc++.so.6: version GLIBCXX_3.4.9' not found /usr/lib64/libstdc++.so.6: version GLIBCXX_3.4.11' not found To fix this issue, I tried adding the -static build option to link all the dynamic libraries statically, but I get a build error which I don't understand: /usr/bin/ld: attempted static link of dynamic object `/usr/lib64/libm.so' collect2: ld returned 1 exit status How do I make my code work on older versions of Red Hat rather than only on 6 and newer? What build options should I add/remove?
/usr/bin/ld: attempted static link of dynamic object `/usr/lib64/libm.so' I'm not experienced at building with gcc at all and now require some help. I have code that is built with the following options: gcc \ -g myCode.C \ -O \ -o myCode \ -I. \ -L. \ -L/usr/lib64 \ -lstdc++ \ -Wreturn-type \ -Wswitch \ -Wcomment \ -Wformat \ -Wchar-subscripts \ -Wparentheses \ -Wpointer-arith \ -Wcast-qual \ -Woverloaded-virtual \ -Wno-write-strings /usr/lib64/libm.so \ -Wno-deprecated When I compile myCode.C on a Red Hat 6 machine, the result does not work on older versions of the OS, throwing these errors: /usr/lib64/libstdc++.so.6: version GLIBCXX_3.4.9' not found /usr/lib64/libstdc++.so.6: version GLIBCXX_3.4.11' not found To fix this issue, I tried adding the -static build option to link all the dynamic libraries statically, but I get a build error which I don't understand: /usr/bin/ld: attempted static link of dynamic object `/usr/lib64/libm.so' collect2: ld returned 1 exit status How do I make my code work on older versions of Red Hat rather than only on 6 and newer? What build options should I add/remove?
linux, gcc, build, redhat
4
25,403
1
https://stackoverflow.com/questions/39827753/usr-bin-ld-attempted-static-link-of-dynamic-object-usr-lib64-libm-so
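Two separate issues above: passing /usr/lib64/libm.so literally on the command line forces the dynamic libm even under -static (hence the linker error), and the GLIBCXX_3.4.x messages mean the binary depends on a libstdc++ newer than the target systems have. A common middle ground, sketched here using g++ and -lm instead of gcc with a literal libm path, is to keep glibc dynamic but link the C++ runtime statically:
g++ -g myCode.C -O -o myCode -I. -static-libstdc++ -static-libgcc -lm
Building on (or with a toolchain targeting) the oldest Red Hat release you need to support remains the more robust fix, since glibc symbols are only forward compatible.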
28,352,758
Configure logrotate status file
I am trying to use logrotate to rotate my log files. However, we don't want to do that as root, and if I execute it with some other job account it fails, as it's not able to edit or create the file /var/lib/logrotate.status. Is there a way to configure logrotate to use a different status file?
Configure logrotate status file I am trying to use logrotate to rotate my log files. However, we don't want to do that as root, and if I execute it with some other job account it fails, as it's not able to edit or create the file /var/lib/logrotate.status. Is there a way to configure logrotate to use a different status file?
linux, redhat, logrotate
4
4,514
1
https://stackoverflow.com/questions/28352758/configure-logrotate-status-file
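logrotate accepts an alternate state file via -s/--state, so an unprivileged account can keep its own status file somewhere writable. A sketch, with placeholder paths:
/usr/sbin/logrotate -s "$HOME/.logrotate.status" /home/jobuser/myapp-logrotate.conf
# e.g. from the job account's crontab:
0 2 * * * /usr/sbin/logrotate -s $HOME/.logrotate.status /home/jobuser/myapp-logrotate.conf
The account still needs write access to the log directory itself for rotation to succeed.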
19,277,953
Install R 3+ on Redhat 6.3
I want to install R on my Red Hat cluster, which has the version below: $ cat /etc/redhat-release Red Hat Enterprise Linux Server release 6.3 (Santiago) When I went to R's homepage, this is what is in their repository: I am wondering why there are only Red Hat versions 4 and 5 there, and I don't know which version will best fit my operating system. Texinfo Problem Goes Here Since I have asked more than 6 questions today, Stack Overflow doesn't let me ask more questions, so I will put the following questions into this question; sorry about that. I was trying to use Expect to automatically log into a remote server and install R. When I install R, all kinds of prompts come up asking 'The package will take xx MB. Is that OK with you?' The command to install: su -c 'yum install R R-core R-core-devel R-devel' You need to type in Yes a few times to finish the installation. My question is: is there a flag for yum install that tells the machine to install everything without asking, so I can install those four packages without any prompt? If that is hard to do in 'quiet mode', how do I write a while loop in Expect that will send the Y automatically? Pseudo Code Not Working! send -- "sudo su -c yum install ...." while ("Expect '*Is it OK [Y/N]*'"){ send 'Y\r' # if (expect 'user$') {break} } Thanks a lot in advance.
Install R 3+ on Redhat 6.3 I want to install R on my Red Hat cluster, which has the version below: $ cat /etc/redhat-release Red Hat Enterprise Linux Server release 6.3 (Santiago) When I went to R's homepage, this is what is in their repository: I am wondering why there are only Red Hat versions 4 and 5 there, and I don't know which version will best fit my operating system. Texinfo Problem Goes Here Since I have asked more than 6 questions today, Stack Overflow doesn't let me ask more questions, so I will put the following questions into this question; sorry about that. I was trying to use Expect to automatically log into a remote server and install R. When I install R, all kinds of prompts come up asking 'The package will take xx MB. Is that OK with you?' The command to install: su -c 'yum install R R-core R-core-devel R-devel' You need to type in Yes a few times to finish the installation. My question is: is there a flag for yum install that tells the machine to install everything without asking, so I can install those four packages without any prompt? If that is hard to do in 'quiet mode', how do I write a while loop in Expect that will send the Y automatically? Pseudo Code Not Working! send -- "sudo su -c yum install ...." while ("Expect '*Is it OK [Y/N]*'"){ send 'Y\r' # if (expect 'user$') {break} } Thanks a lot in advance.
r, expect, redhat
4
5,698
1
https://stackoverflow.com/questions/19277953/install-r-3-on-redhat-6-3
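On the yum side there is a dedicated flag, so no Expect loop is needed: -y/--assumeyes answers every prompt. R for RHEL 6 comes from EPEL rather than the base repositories, so enabling EPEL first is the usual route; a sketch (the EPEL 6 URL may have moved to the archives since that release went EOL):
su -c 'rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm'
su -c 'yum install -y R R-core R-core-devel R-devel'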
13,179,717
Processes exceeding thread stack size limit on RedHat Enterprise Linux 6?
I have a couple of processes running on RHEL 6.3, but for some reason they are exceeding the thread stack sizes. For example, the Java process is given the stack size of -Xss256k at runtime on startup, and the C++ process is given a thread stack size of 1MB using pthread_attr_setstacksize() in the actual code. For some reason, however, these processes are not sticking to these limits, and I'm not sure why. For example, when I run pmap -x <pid> for the C++ and Java process, I can see hundreds of 'anon' threads for each (which I have confirmed are the internal worker threads created by each of these processes), but these have an allocated value of 64MB each, not the limits set above: 00007fa4fc000000 168 40 40 rw--- [ anon ] 00007fa4fc02a000 65368 0 0 ----- [ anon ] 00007fa500000000 168 40 40 rw--- [ anon ] 00007fa50002a000 65368 0 0 ----- [ anon ] 00007fa504000000 168 40 40 rw--- [ anon ] 00007fa50402a000 65368 0 0 ----- [ anon ] 00007fa508000000 168 40 40 rw--- [ anon ] 00007fa50802a000 65368 0 0 ----- [ anon ] 00007fa50c000000 168 40 40 rw--- [ anon ] 00007fa50c02a000 65368 0 0 ----- [ anon ] 00007fa510000000 168 40 40 rw--- [ anon ] 00007fa51002a000 65368 0 0 ----- [ anon ] 00007fa514000000 168 40 40 rw--- [ anon ] 00007fa51402a000 65368 0 0 ----- [ anon ] 00007fa518000000 168 40 40 rw--- [ anon ] ... But when I run the following on the above process with all the 64MB 'anon' threads cat /proc/<pid>/limits | grep stack Max stack size 1048576 1048576 bytes it shows a max thread stack size of 1MB, so I am a bit confused as to what is going on here. Also, the script that calls these programs sets 'ulimit -s 1024' as well. It should be noted that this only seems to occur on very high-end machines (e.g. 48GB RAM, 24 CPU cores). The issue does not appear on less powerful machines (e.g. 4GB RAM, 2 CPU cores). Any help understanding what is happening here would be much appreciated.
Processes exceeding thread stack size limit on RedHat Enterprise Linux 6? I have a couple of processes running on RHEL 6.3, but for some reason they are exceeding the thread stack sizes. For example, the Java process is given the stack size of -Xss256k at runtime on startup, and the C++ process is given a thread stack size of 1MB using pthread_attr_setstacksize() in the actual code. For some reason, however, these processes are not sticking to these limits, and I'm not sure why. For example, when I run pmap -x <pid> for the C++ and Java process, I can see hundreds of 'anon' threads for each (which I have confirmed are the internal worker threads created by each of these processes), but these have an allocated value of 64MB each, not the limits set above: 00007fa4fc000000 168 40 40 rw--- [ anon ] 00007fa4fc02a000 65368 0 0 ----- [ anon ] 00007fa500000000 168 40 40 rw--- [ anon ] 00007fa50002a000 65368 0 0 ----- [ anon ] 00007fa504000000 168 40 40 rw--- [ anon ] 00007fa50402a000 65368 0 0 ----- [ anon ] 00007fa508000000 168 40 40 rw--- [ anon ] 00007fa50802a000 65368 0 0 ----- [ anon ] 00007fa50c000000 168 40 40 rw--- [ anon ] 00007fa50c02a000 65368 0 0 ----- [ anon ] 00007fa510000000 168 40 40 rw--- [ anon ] 00007fa51002a000 65368 0 0 ----- [ anon ] 00007fa514000000 168 40 40 rw--- [ anon ] 00007fa51402a000 65368 0 0 ----- [ anon ] 00007fa518000000 168 40 40 rw--- [ anon ] ... But when I run the following on the above process with all the 64MB 'anon' threads cat /proc/<pid>/limits | grep stack Max stack size 1048576 1048576 bytes it shows a max thread stack size of 1MB, so I am a bit confused as to what is going on here. Also, the script that calls these programs sets 'ulimit -s 1024' as well. It should be noted that this only seems to occur on very high-end machines (e.g. 48GB RAM, 24 CPU cores). The issue does not appear on less powerful machines (e.g. 4GB RAM, 2 CPU cores). Any help understanding what is happening here would be much appreciated.
linux, linux-kernel, pthreads, redhat, ulimit
4
10,029
4
https://stackoverflow.com/questions/13179717/processes-exceeding-thread-stack-size-limit-on-redhat-enterprise-linux-6
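Those 64MB anon pairs are almost certainly not thread stacks at all but glibc malloc arenas: since glibc 2.10 (RHEL 6 ships 2.12) each arena reserves 64MB of address space, and the default arena count scales with core count, which is why only the 24-core box shows them. Note the RSS column is ~0, so the reservation is mostly untouched address space. Capping the arena count makes them go away; a sketch (app.jar is a placeholder):
export MALLOC_ARENA_MAX=4       # glibc tunable, honored on RHEL 6
java -Xss256k -jar app.jar &
pmap -x $! | grep -c 65368      # the 64MB segments should largely disappear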
64,050,339
wget: unable to resolve host address 'github.com'
I am building my Dockerfile using the Red Hat UBI image, and when I build the image I get wget: unable to resolve host address 'github.com'. I have tried adding a different URL that does not start with github.com, and that works. Not sure what the problem is. Below are the error logs I get when I build the Dockerfile: wget: unable to resolve host address 'github.com' Step 11/25 : RUN set -ex; apk update; apk add -f acl dirmngr gpg lsof procps wget netcat gosu tini; rm -rf /var/lib/apt/lists/*; cd /usr/local/bin; wget -nv [URL] chmod 755 jattach; echo >jattach.sha512 "d8eedbb3e192a8596c08efedff99b9acf1075331e1747107c07cdb1718db2abe259ef168109e46bd4cf80d47d43028ff469f95e6ddcbdda4d7ffa73a20e852f9 jattach"; sha512sum -c jattach.sha512; rm jattach.sha512 ---> Running in 3ad58c40b25a + apk update fetch [URL] fetch [URL] v20200917-1125-g7274a98dfc [[URL] v20200917-1124-g01e8cb93ff [[URL] OK: 13174 distinct packages available + apk add -f acl dirmngr gpg lsof procps wget netcat gosu tini (1/12) Installing libacl (2.2.53-r0) (2/12) Installing acl (2.2.53-r0) (3/12) Installing lsof (4.93.2-r0) (4/12) Installing libintl (0.20.2-r0) (5/12) Installing ncurses-terminfo-base (6.2_p20200918-r1) (6/12) Installing ncurses-libs (6.2_p20200918-r1) (7/12) Installing libproc (3.3.16-r0) (8/12) Installing procps (3.3.16-r0) (9/12) Installing tini (0.19.0-r0) (10/12) Installing libunistring (0.9.10-r0) (11/12) Installing libidn2 (2.3.0-r0) (12/12) Installing wget (1.20.3-r1) Executing busybox-1.32.0-r3.trigger OK: 9 MiB in 26 packages + rm -rf '/var/lib/apt/lists/*' + cd /usr/local/bin + wget -nv [URL] wget: unable to resolve host address 'github.com' The command '/bin/sh -c set -ex; apk update; apk add -f acl dirmngr gpg lsof procps wget netcat gosu tini; rm -rf /var/lib/apt/lists/*; cd /usr/local/bin; wget -nv [URL] chmod 755 jattach; echo >jattach.sha512 "d8eedbb3e192a8596c08efedff99b9acf1075331e1747107c07cdb1718db2abe259ef168109e46bd4cf80d47d43028ff469f95e6ddcbdda4d7ffa73a20e852f9 jattach"; sha512sum -c jattach.sha512; rm jattach.sha512' returned a non-zero code: 4 Here is the Dockerfile that I build to create the image: FROM alpine: edge as BUILD LABEL repository="[URL] ARG SOLR_VERSION="8.6.2" ARG SOLR_SHA512="0a43401ecf7946b2724da2d43896cd505386a8f9b07ddc60256cb586873e7e58610d2c34b1cf797323bf06c7613b109527a15105dc2a11be6f866531a1f2cef6" ARG SOLR_KEYS="E58A6F4D5B2B48AC66D5E53BD4F181881A42F9E6" # If specified, this will override SOLR_DOWNLOAD_SERVER and all ASF mirrors. Typically used downstream for custom builds ARG SOLR_DOWNLOAD_URL # Override the solr download location with e.g.: # docker build -t mine --build-arg SOLR_DOWNLOAD_SERVER=[URL] . ARG SOLR_DOWNLOAD_SERVER RUN set -ex; \ apk add --update; \ apk add -f install acl dirmngr gpg lsof procps wget netcat gosu tini; \ rm -rf /var/lib/apt/lists/*; \ cd /usr/local/bin; wget -nv [URL] chmod 755 jattach; \ echo >jattach.sha512 "d8eedbb3e192a8596c08efedff99b9acf1075331e1747107c07cdb1718db2abe259ef168109e46bd4cf80d47d43028ff469f95e6ddcbdda4d7ffa73a20e852f9 jattach"; \ sha512sum -c jattach.sha512; rm jattach.sha512
wget: unable to resolve host address 'github.com' I am building my Dockerfile using the Red Hat UBI image, and when I build the image I get wget: unable to resolve host address 'github.com'. I have tried adding a different URL that does not start with github.com, and that works. Not sure what the problem is. Below are the error logs I get when I build the Dockerfile: wget: unable to resolve host address 'github.com' Step 11/25 : RUN set -ex; apk update; apk add -f acl dirmngr gpg lsof procps wget netcat gosu tini; rm -rf /var/lib/apt/lists/*; cd /usr/local/bin; wget -nv [URL] chmod 755 jattach; echo >jattach.sha512 "d8eedbb3e192a8596c08efedff99b9acf1075331e1747107c07cdb1718db2abe259ef168109e46bd4cf80d47d43028ff469f95e6ddcbdda4d7ffa73a20e852f9 jattach"; sha512sum -c jattach.sha512; rm jattach.sha512 ---> Running in 3ad58c40b25a + apk update fetch [URL] fetch [URL] v20200917-1125-g7274a98dfc [[URL] v20200917-1124-g01e8cb93ff [[URL] OK: 13174 distinct packages available + apk add -f acl dirmngr gpg lsof procps wget netcat gosu tini (1/12) Installing libacl (2.2.53-r0) (2/12) Installing acl (2.2.53-r0) (3/12) Installing lsof (4.93.2-r0) (4/12) Installing libintl (0.20.2-r0) (5/12) Installing ncurses-terminfo-base (6.2_p20200918-r1) (6/12) Installing ncurses-libs (6.2_p20200918-r1) (7/12) Installing libproc (3.3.16-r0) (8/12) Installing procps (3.3.16-r0) (9/12) Installing tini (0.19.0-r0) (10/12) Installing libunistring (0.9.10-r0) (11/12) Installing libidn2 (2.3.0-r0) (12/12) Installing wget (1.20.3-r1) Executing busybox-1.32.0-r3.trigger OK: 9 MiB in 26 packages + rm -rf '/var/lib/apt/lists/*' + cd /usr/local/bin + wget -nv [URL] wget: unable to resolve host address 'github.com' The command '/bin/sh -c set -ex; apk update; apk add -f acl dirmngr gpg lsof procps wget netcat gosu tini; rm -rf /var/lib/apt/lists/*; cd /usr/local/bin; wget -nv [URL] chmod 755 jattach; echo >jattach.sha512 "d8eedbb3e192a8596c08efedff99b9acf1075331e1747107c07cdb1718db2abe259ef168109e46bd4cf80d47d43028ff469f95e6ddcbdda4d7ffa73a20e852f9 jattach"; sha512sum -c jattach.sha512; rm jattach.sha512' returned a non-zero code: 4 Here is the Dockerfile that I build to create the image: FROM alpine: edge as BUILD LABEL repository="[URL] ARG SOLR_VERSION="8.6.2" ARG SOLR_SHA512="0a43401ecf7946b2724da2d43896cd505386a8f9b07ddc60256cb586873e7e58610d2c34b1cf797323bf06c7613b109527a15105dc2a11be6f866531a1f2cef6" ARG SOLR_KEYS="E58A6F4D5B2B48AC66D5E53BD4F181881A42F9E6" # If specified, this will override SOLR_DOWNLOAD_SERVER and all ASF mirrors. Typically used downstream for custom builds ARG SOLR_DOWNLOAD_URL # Override the solr download location with e.g.: # docker build -t mine --build-arg SOLR_DOWNLOAD_SERVER=[URL] . ARG SOLR_DOWNLOAD_SERVER RUN set -ex; \ apk add --update; \ apk add -f install acl dirmngr gpg lsof procps wget netcat gosu tini; \ rm -rf /var/lib/apt/lists/*; \ cd /usr/local/bin; wget -nv [URL] chmod 755 jattach; \ echo >jattach.sha512 "d8eedbb3e192a8596c08efedff99b9acf1075331e1747107c07cdb1718db2abe259ef168109e46bd4cf80d47d43028ff469f95e6ddcbdda4d7ffa73a20e852f9 jattach"; \ sha512sum -c jattach.sha512; rm jattach.sha512
dockerfile, redhat, alpine-linux
4
23,554
2
https://stackoverflow.com/questions/64050339/wget-unable-to-resolve-host-address-github-com
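A hedged first check for the question above: "unable to resolve host address" inside docker build is almost always the Docker daemon's build network failing DNS rather than anything about GitHub. A minimal sketch, assuming a default bridge network; the daemon.json path and resolver addresses are assumptions to adapt:

```bash
# Reproduce the failure outside the build to confirm it is daemon-level DNS
docker run --rm alpine:3.12 nslookup github.com

# Option 1: pin DNS resolvers for the daemon (values are examples, not requirements)
echo '{ "dns": ["8.8.8.8", "1.1.1.1"] }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker

# Option 2: let this one build use the host's network stack instead
docker build --network=host -t solr-build .
```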
57,811,879
Deploy keycloak custom spi deployment
I am trying to create a custom SPI in my Keycloak project, following the basic Keycloak structure. I add a custom provider interface which extends Provider, a custom provider factory, and a custom Spi implementation for them, as the Keycloak documentation says and as they do in their own source code. After that I create a custom implementation for my provider and provider factory, I create the file in META-INF/services as the documentation says, and I deploy with the EAR approach, like in the beercloak example. But when I try to use my provider in code, a NullPointerException is thrown. This only happens when I add a custom SPI; if I implement a provider for an existing Keycloak SPI it works. It also works if I use the modules approach, where I create a new module with jboss-cli, but that approach seems hard to maintain. Does anyone have any idea why this happens and how I can solve it, or what the best approach is? Thanks. 08:43:48,264 WARN [org.keycloak.services] (default task-1) KC-SERVICES0013: Failed authentication: java.lang.NullPointerException at sso.authentication.forms.RegistrationProfile.validate(RegistrationProfile.java:55) at org.keycloak.authentication.FormAuthenticationFlow.processAction(FormAuthenticationFlow.java:214) at org.keycloak.authentication.DefaultAuthenticationFlow.processAction(DefaultAuthenticationFlow.java:99) at org.keycloak.authentication.AuthenticationProcessor.authenticationAction(AuthenticationProcessor.java:873) at org.keycloak.services.resources.LoginActionsService.processFlow(LoginActionsService.java:296) at org.keycloak.services.resources.LoginActionsService.processRegistration(LoginActionsService.java:631) at org.keycloak.services.resources.LoginActionsService.registerRequest(LoginActionsService.java:685) at org.keycloak.services.resources.LoginActionsService.processRegister(LoginActionsService.java:665) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:138) at org.jboss.resteasy.core.ResourceMethodInvoker.internalInvokeOnTarget(ResourceMethodInvoker.java:517) at org.jboss.resteasy.core.ResourceMethodInvoker.invokeOnTargetAfterFilter(ResourceMethodInvoker.java:406) at org.jboss.resteasy.core.ResourceMethodInvoker.lambda$invokeOnTarget$0(ResourceMethodInvoker.java:370) at org.jboss.resteasy.core.interception.PreMatchContainerRequestContext.filter(PreMatchContainerRequestContext.java:355) at org.jboss.resteasy.core.ResourceMethodInvoker.invokeOnTarget(ResourceMethodInvoker.java:372) at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:344) at org.jboss.resteasy.core.ResourceLocatorInvoker.invokeOnTargetObject(ResourceLocatorInvoker.java:137) at org.jboss.resteasy.core.ResourceLocatorInvoker.invoke(ResourceLocatorInvoker.java:100) at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:440) at org.jboss.resteasy.core.SynchronousDispatcher.lambda$invoke$4(SynchronousDispatcher.java:229) at org.jboss.resteasy.core.SynchronousDispatcher.lambda$preprocess$0(SynchronousDispatcher.java:135) at org.jboss.resteasy.core.interception.PreMatchContainerRequestContext.filter(PreMatchContainerRequestContext.java:355) at org.jboss.resteasy.core.SynchronousDispatcher.preprocess(SynchronousDispatcher.java:138) at
org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:215) at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.service(ServletContainerDispatcher.java:227) at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:56) at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:51) at javax.servlet.http.HttpServlet.service(HttpServlet.java:791) at io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:74) at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:129) at org.keycloak.services.filters.KeycloakSessionServletFilter.doFilter(KeycloakSessionServletFilter.java:90) at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61) at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131) at io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:84) at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62) at io.undertow.servlet.handlers.ServletChain$1.handleRequest(ServletChain.java:68) at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36) at org.wildfly.extension.undertow.security.SecurityContextAssociationHandler.handleRequest(SecurityContextAssociationHandler.java:78) at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) at io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:132) at io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57) at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) at io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46) at io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64) at io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:60) at io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:77) at io.undertow.security.handlers.NotificationReceiverHandler.handleRequest(NotificationReceiverHandler.java:50) at io.undertow.security.handlers.AbstractSecurityContextAssociationHandler.handleRequest(AbstractSecurityContextAssociationHandler.java:43) at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) at org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61) at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) at org.wildfly.extension.undertow.deployment.GlobalRequestControllerHandler.handleRequest(GlobalRequestControllerHandler.java:68) at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:292) at io.undertow.servlet.handlers.ServletInitialHandler.access$100(ServletInitialHandler.java:81) at io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:138) at 
io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:135) at io.undertow.servlet.core.ServletRequestContextThreadSetupAction$1.call(ServletRequestContextThreadSetupAction.java:48) at io.undertow.servlet.core.ContextClassLoaderSetupAction$1.call(ContextClassLoaderSetupAction.java:43) at org.wildfly.extension.undertow.security.SecurityContextThreadSetupAction.lambda$create$0(SecurityContextThreadSetupAction.java:105) at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1502) at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1502) at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1502) at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1502) at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:272) at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:81) at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:104) at io.undertow.server.Connectors.executeRootHandler(Connectors.java:364) at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:830) at org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35) at org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:1982) at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1486) at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1377) at java.lang.Thread.run(Thread.java:748)
jboss, wildfly, redhat, keycloak, ear
4
4,191
1
https://stackoverflow.com/questions/57811879/deploy-keycloak-custom-spi-deployment
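For the NullPointerException above, a useful first step is confirming WildFly actually registered the EAR and the provider inside it; the deployment scanner's marker files and the startup log make that visible. A minimal sketch, assuming a standalone Keycloak install at $KEYCLOAK_HOME and an EAR named my-spi.ear (both hypothetical names):

```bash
# Hot-deploy the EAR and watch the deployment scanner's marker files
cp target/my-spi.ear "$KEYCLOAK_HOME/standalone/deployments/"
ls "$KEYCLOAK_HOME"/standalone/deployments/   # expect my-spi.ear.deployed, not .failed

# Confirm the custom SPI and provider were picked up at boot
grep -i "my-spi" "$KEYCLOAK_HOME/standalone/log/server.log"
```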
53,722,033
How to enable policy enforcing in keycloak for node.js application?
I have to integrate node.js application with keycloak.The application is in express.But the policies are not enforcing.It grants permission for all the users to access all the api. For /test api: Only users with 'chief' role has the access.I have given those policies in keycloak admin console.But those are not reflecting.Why? User without 'chief' role is also accessing /test app.js: 'use strict'; const Keycloak = require('keycloak-connect'); const express = require('express'); const session = require('express-session'); const expressHbs = require('express-handlebars'); const app = express(); app.engine('hbs', expressHbs({extname:'hbs', defaultLayout:'layout.hbs', relativeTo: __dirname})); app.set('view engine', 'hbs'); var memoryStore = new session.MemoryStore(); var keycloak = new Keycloak({ store: memoryStore }); app.use(session({ secret:'thisShouldBeLongAndSecret', resave: false, saveUninitialized: true, store: memoryStore })); app.use(keycloak.middleware()); app.get('/*', keycloak.protect('user'), function(req, res){ res.send("User has base permission"); }); app.get('/test', keycloak.protect(), function(req, res){ res.send("access granted"); }); app.get('/',function(req,res){ res.send("hello world"); }); app.use( keycloak.middleware( { logout: '/'} )); app.listen(3000, function () { console.log('Listening at [URL] }); keycloak.json: { "realm": "nodejs-example", "auth-server-url": "[URL] "ssl-required": "external", "resource": "nodejs-connect", "credentials": { "secret": "451317a2-09a1-48b8-b036-e578051687dd" }, "use-resource-role-mappings": true, "confidential-port": 0, "policy-enforcer": { "enforcement-mode":"PERMISSIVE", } }
node.js, redhat, keycloak
4
1,303
2
https://stackoverflow.com/questions/53722033/how-to-enable-policy-enforcing-in-keycloak-for-node-js-application
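One way to separate "the Keycloak policies are wrong" from "the Node adapter never consults them" is to ask the server for a UMA permission token directly; if this call succeeds for a user without the role, the server-side policies themselves are not restricting access. A sketch, assuming the realm and client from the question and an $ACCESS_TOKEN already obtained for the test user:

```bash
curl -s -X POST \
  "http://localhost:8080/auth/realms/nodejs-example/protocol/openid-connect/token" \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -d "grant_type=urn:ietf:params:oauth:grant-type:uma-ticket" \
  -d "audience=nodejs-connect"
# a permissions response here means the server-side policies allow this user
```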
38,520,295
Prevent stop auditd service in Redhat 7
Currently, I want the auditd service to run forever so that users cannot stop it with any command. My current auditd service: ~]# systemctl cat auditd # /usr/lib/systemd/system/auditd.service [Unit] Description=Security Auditing Service DefaultDependencies=no After=local-fs.target systemd-tmpfiles-setup.service Conflicts=shutdown.target Before=sysinit.target shutdown.target RefuseManualStop=yes ConditionKernelCommandLine=!audit=0 [Service] ExecStart=/sbin/auditd -n ## To not use augenrules, copy this file to /etc/systemd/system/auditd.service ## and comment/delete the next line and uncomment the auditctl line. ## NOTE: augenrules expect any rules to be added to /etc/audit/rules.d/ ExecStartPost=-/sbin/augenrules --load #ExecStartPost=-/sbin/auditctl -R /etc/audit/audit.rules ExecReload=/bin/kill -HUP $MAINPID [Install] WantedBy=multi-user.target # /etc/systemd/system/auditd.service.d/override.conf [Service] ExecReload= ExecReload=/bin/kill -HUP $MAINPID ; /sbin/augenrules --load I can't stop the service with this command: # systemctl stop auditd.service Failed to stop auditd.service: Operation refused, unit auditd.service may be requested by dependency only. But when I use the service auditd stop command, I can stop the service normally: # service auditd stop Stopping logging: [ OK ] How can I prevent this? Thanks.
linux, service, redhat, systemd
4
17,174
2
https://stackoverflow.com/questions/38520295/prevent-stop-auditd-service-in-redhat-7
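The reason `service auditd stop` succeeds while systemctl refuses is that the service(8) wrapper on RHEL 7 checks for legacy action scripts before delegating to systemd, and the audit package ships one. A sketch of closing that path — the paths below are as shipped by the audit RPM, so verify them on your system:

```bash
# service(8) runs this directly, bypassing RefuseManualStop=yes
ls -l /usr/libexec/initscripts/legacy-actions/auditd/

# Removing execute permission (or the file itself) forces everything through systemd
chmod 000 /usr/libexec/initscripts/legacy-actions/auditd/stop
```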
36,742,508
How to use sed command to delete lines without backup file?
I have a large file of about 130GB: # ls -lrth -rw-------. 1 root root 129G Apr 20 04:25 syslog.log I need to reduce the file size by deleting every line which starts with "Nov 2", so I ran the following command: sed -i '/Nov 2/d' syslog.log I can't edit the file with the vim editor either. When I run the sed command, it creates a temporary copy of the file as well, but I don't have that much free space in root. Please suggest an alternate way to delete particular lines from this file without needing extra space on the server.
linux, file, vim, sed, redhat
4
1,701
1
https://stackoverflow.com/questions/36742508/how-to-use-sed-command-to-delete-lines-without-backup-file
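sed -i always writes a complete temporary copy next to the original, so with 129G on a nearly full filesystem it can never work in place. A sketch of the usual workaround, assuming some other mount (the hypothetical /mnt/spare below) has room for the filtered or compressed result:

```bash
# Filter to another filesystem (compressing shrinks the intermediate copy)
grep -v '^Nov 2' /var/log/syslog.log | gzip > /mnt/spare/syslog.keep.gz

# Rewrite the original: '>' truncates it first, freeing its 129G
# (only safe if no process, e.g. rsyslog, still holds the file open)
gunzip -c /mnt/spare/syslog.keep.gz > /var/log/syslog.log
```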
36,572,859
How to create two Aerospike Clusters on same L2 network
I am using two Aerospike clusters (each with only one node/machine). Since both machines are on the same LAN, they try to connect to each other and form a single cluster. Because of this I was getting an error while inserting a record: Error: (11) AEROSPIKE_ERR_CLUSTER So on my Ubuntu setup (one of the two machines) I blocked port 9918 with: ufw deny 9918 After blocking the port, the Aerospike clusters started working (I was able to insert records). What is a better way to keep two Aerospike machines on the same LAN from communicating with each other?
redhat, iptables, subnet, aerospike, aerospike-loader
4
221
1
https://stackoverflow.com/questions/36572859/how-to-create-two-aerospike-clusters-on-same-l2-network
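The firewall workaround above fights the symptom; the usual fix is to give each cluster its own heartbeat multicast group (or port) so the nodes never discover each other. A sketch of the relevant aerospike.conf stanza — directive names vary slightly between Aerospike versions, so check the docs for your release:

```
network {
    heartbeat {
        mode multicast
        address 239.1.99.10   # use a different multicast group per cluster
        port 9918             # or keep the group and vary the port instead
        interval 150
        timeout 10
    }
}
```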
32,321,927
Offline pip installation
How can I install pip on an offline server? I have SSH access and can send files via scp. The server runs Red Hat. I followed this [URL] but it tries to download something. Is there a way to package pip and all its dependencies so that I can send them to the server and install pip from there? I have already done this with the Python packages that I install using pip.
python, pip, redhat
4
16,741
2
https://stackoverflow.com/questions/32321927/offline-pip-installation
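A sketch of the usual offline workflow: fetch everything on an internet-connected machine with a matching Python version and architecture, scp it over, and point the installer at the local directory. get-pip.py passes options through to pip, so --no-index works there too (very old pip spells the first step `pip install --download DIR` instead):

```bash
# On the online machine
pip download pip setuptools wheel -d ./pip-offline
scp -r ./pip-offline get-pip.py user@server:/tmp/

# On the offline Red Hat server
python /tmp/get-pip.py --no-index --find-links=/tmp/pip-offline

# Later, for any package set downloaded the same way:
pip install --no-index --find-links=/tmp/pip-offline somepackage
```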
27,801,188
Using SSH to grep keywords from multiple servers
I am completely new to scripting and am having some trouble piecing this together from other online resources. What I want to do is run a bash script that greps for a keyword domain in the /etc/hosts file on multiple servers. In the output file, I am looking for a list of the servers that contain this keyword; I am not looking to make any changes, simply looking for which machines have this value. Since there are a bunch of machines in question, listing the servers to search by hand won't work, but the machine I am running this from does have SSH keys for all of the ones in question. I have a listing of the servers I want to query in three files on the machine (one for each environment) I am going to run this script from: Linux.prod.dat Linux.qa.dat Linux.dev.dat Each file is simply a list of server names in the environment. For example: server1 server2 server3 etc. I am totally lost here and would appreciate any help.
linux, bash, shell, redhat
4
6,634
2
https://stackoverflow.com/questions/27801188/using-ssh-to-grep-keywords-from-multiple-servers
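A minimal sketch for the question above: loop over one of the .dat files and print only the hosts whose /etc/hosts contains the keyword. The -n flag matters — without it, ssh swallows the rest of the host list from stdin:

```bash
#!/bin/bash
# Usage: ./find_domain.sh Linux.prod.dat example.com
list="$1"; keyword="$2"
while read -r host; do
    if ssh -n -o BatchMode=yes -o ConnectTimeout=5 "$host" \
           "grep -q '$keyword' /etc/hosts" 2>/dev/null; then
        echo "$host"    # print only servers that contain the keyword
    fi
done < "$list" > matching_servers.txt
```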
22,998,667
Errno 14 PYCURL ERROR 22 The requested URL returned error 403 Forbidden
yum update [URL] [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 403 Forbidden" Trying other mirror. Error: Cannot retrieve repository metadata (repomd.xml) for repository: rhel6_64. Please verify its path and try again
linux, redhat, yum
4
27,798
3
https://stackoverflow.com/questions/22998667/errno-14-pycurl-error-22-the-requested-url-returned-error-403-forbidden
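A 403 from a RHEL repository usually means the entitlement certificates yum presents are stale, or the system's subscription has lapsed, rather than a path problem. A common first pass, assuming the box is meant to be registered through RHSM:

```bash
yum clean all
subscription-manager refresh          # re-pull entitlement certificates
subscription-manager repos --list     # confirm rhel6_64 is actually entitled
# if the system was never registered:
# subscription-manager register --auto-attach
```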
64,262,624
Centos httpd php version is different from the php version command line
I'm on CentOS 7 and I have enabled multiple PHP versions with the remi repo like this: yum-config-manager --enable remi-php56 yum-config-manager --enable remi-php71 yum-config-manager --enable remi-php72 yum-config-manager --enable remi-php73 Then I installed the packages: yum install php{version} yum install php{version}-php-{extension} I have set up PHP 5.6 like this: sudo update-alternatives --set php /usr/bin/php56 When I run php -v I get: PHP 5.6.40 (cli) (built: Sep 29 2020 11:38:05) But when I go to my phpinfo() file I get PHP Version 7.3.23. On CentOS/RHEL we can't run: sudo a2enmod php56 so I'm confused and I don't know why httpd uses version 7.3.23. How can I set a specific PHP version for httpd?
php, centos, redhat
4
2,092
1
https://stackoverflow.com/questions/64262624/centos-httpd-php-version-is-different-from-the-php-version-command-line
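update-alternatives only changes the CLI symlink; Apache loads whichever mod_php (or FPM pool) is installed, independently of it. A sketch for checking what httpd actually loads and one way to route it to a chosen version — the socket path shown follows the remi SCL layout and is an assumption to verify locally:

```bash
# Which PHP module does Apache load right now?
httpd -M 2>/dev/null | grep -i php
rpm -qa | grep -i php | sort

# One option: drop mod_php and hand .php to the 5.6 FPM pool instead,
# e.g. in a vhost or /etc/httpd/conf.d/php.conf:
#   <FilesMatch \.php$>
#     SetHandler "proxy:unix:/var/opt/remi/php56/run/php-fpm/www.sock|fcgi://localhost"
#   </FilesMatch>
```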
58,060,882
How does Red Hat's subscription-manager work?
The Red Hat subscription-manager is a tool to register, attach and remove subscriptions from the command line. If I understand correctly, this tool connects to the customer portal to retrieve certificates. These certificates are then used, among other things, to download yum packages from the Red Hat repo. Sources: [URL] [URL] There are several things that I don't understand: Why can't a user copy a certificate from one Red Hat machine to another and use it there? I assume the certificate includes machine-specific values (according to the docs, they are called "facts"), but then... How are the certificates loaded and checked by the other processes? For instance, I guess that yum must be using these certificates. But then the yum CLI tool must have been patched, right? Is the source code of these changes available? Is the source code of the subscription-manager tool available? That would clarify many things.
redhat
4
5,549
2
https://stackoverflow.com/questions/58060882/how-does-red-hats-subscription-manager-work
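Part of this is directly inspectable: subscription-manager is open source, and yum needs no patching — it is pointed at the entitlement certificates through ordinary repo options that the subscription-manager yum plugin writes into redhat.repo. A quick look on a registered system:

```bash
ls /etc/pki/entitlement/                          # entitlement certs and keys
grep -E 'sslclientcert|sslclientkey' /etc/yum.repos.d/redhat.repo
rpm -qi subscription-manager | grep -i url        # points at the upstream project
```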
53,062,520
How to expose Drools rules via REST
I'm trying out Red Hat Drools and I was able to deploy Drools Workbench in a WildFly environment. Now I'm trying to find out how to expose rules as services, but I couldn't find an article on how to do it. Is this a restriction of the Drools Workbench, or is there another way to achieve it?
drools, redhat, redhat-brms
4
5,231
2
https://stackoverflow.com/questions/53062520/how-to-expose-drools-rules-via-rest
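Workbench itself only authors and manages rules; the component that serves them over REST is KIE Server (kie-server.war), which is deployed next to Workbench and registered with it. A smoke-test sketch — the credentials, container name and fact class below are hypothetical placeholders:

```bash
# List the containers the KIE Server knows about
curl -u kieserver:password \
  http://localhost:8080/kie-server/services/rest/server/containers

# Insert a fact and fire rules against a deployed container
curl -u kieserver:password -H "Content-Type: application/json" -X POST \
  http://localhost:8080/kie-server/services/rest/server/containers/instances/my-rules \
  -d '{"commands":[{"insert":{"object":{"com.example.Fact":{"value":1}}}},{"fire-all-rules":{}}]}'
```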
52,791,806
How to get the email-verification link inside my custom SPI in keycloak
I have my code below; it is inside my notification-spi project, which gets triggered when a new user is created. I am able to receive the email. However, I don't know how I can get the email-verification link when the verify-email required action is selected by the admin who created the account in the Keycloak admin UI. public void onEvent(AdminEvent adminEvent, boolean includeRepresentation) { EmailSenderProvider emailSender = session.getProvider(EmailSenderProvider.class); RealmModel realm = session.realms().getRealm(adminEvent.getRealmId()); UserModel user = session.userCache().getUserById(adminEvent.getAuthDetails().getUserId(), realm); if (OperationType.CREATE.equals(adminEvent.getOperationType())) { LOGGER.info("OPERATION CREATE USER"); LOGGER.info("Representation : " + adminEvent.getRepresentation()); try { LOGGER.info("Sending email..."); emailSender.send(realm.getSmtpConfig(), user, "Account Enrollment", "A new account has been created using your email.", "<h1>Account Enrollment</h1> <br/>" + "<p>A new account has been created using your email</p>"); LOGGER.info("Email has been sent."); } catch (EmailException e) { LOGGER.info(e.getMessage()); } } } } Any help is appreciated.
java, jboss, redhat, keycloak, keycloak-services
4
2,855
1
https://stackoverflow.com/questions/52791806/how-to-get-the-email-verification-link-inside-my-custom-spi-in-keycloak
51,269,194
Unable to install plugins
When I try to install any plugin on my Jenkins server, I get the error below in the browser. I can see the list of Available plugins. The error occurs when I select any plugin and click Install without restart or Download now and install after restart . Error in Chrome: This page isn’t working xxxxxxx didn’t send any data. ERR_EMPTY_RESPONSE xxxxxxx represents the server name. Error in Firefox: The connection was reset The connection to the server was reset while the page was loading. The site could be temporarily unavailable or too busy. Try again in a few moments. If you are unable to load any pages, check your computer’s network connection. If your computer or network is protected by a firewall or proxy, make sure that Firefox is permitted to access the Web. URL shown in the browser address bar: [URL] What am I missing here? My environment: Server OS: RHEL7 Jenkins version: Jenkins ver. 2.121.1 JDK version: OpenJDK Runtime Environment (build 1.8.0_171-b10) OpenJDK 64-Bit Server VM (build 25.171-b10, mixed mode)
jenkins, jenkins-plugins, redhat, rhel7
4
624
1
https://stackoverflow.com/questions/51269194/unable-to-install-plugins
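An empty response on exactly this one POST often points at something between the browser and Jenkins (reverse proxy, firewall) rather than Jenkins itself; hitting Jenkins locally while tailing its log separates the two. Paths and port below are the RHEL package defaults, adjust as needed:

```bash
sudo tail -f /var/log/jenkins/jenkins.log &
# Bypass any proxy/load balancer by talking to the port Jenkins listens on
curl -v http://localhost:8080/pluginManager/
```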
41,101,252
I lost my php-fpm.sock file from /var/run/php-fpm/
I installed PHP 7 on a Red Hat Linux server, but apparently, due to running a few commands on the server to configure PHP, I have lost the php-fpm.sock file. Could anyone please assist me with the contents of the file?
php, linux, redhat, php-7
4
11,100
1
https://stackoverflow.com/questions/41101252/i-lost-my-php-fpm-sock-file-from-var-run-php-fpm
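The .sock file has no contents to restore — it is a Unix socket that php-fpm creates at startup from the pool's listen directive, so the fix is to make sure the directive points where the web server expects and restart the service. A sketch, assuming the stock RHEL pool file:

```bash
grep -n '^listen' /etc/php-fpm.d/www.conf
# listen = /var/run/php-fpm/php-fpm.sock

systemctl restart php-fpm
ls -l /var/run/php-fpm/    # the socket reappears here
```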
34,668,144
How to install bazel and tensorflow on Red Hat 6.7
I would like to install bazel from source, and use bazel to compile tensorflow on a cluster running redhat 6.7. When I try to install bazel, the glibc version (2.12) is too old. I do not have root access to the cluster. Is it possible to install tensorflow in this case? My system information: -bash-4.1$ cat /etc/redhat-release Red Hat Enterprise Linux Server release 6.7 (Santiago) -bash-4.1$ which gcc /usr/bin/gcc -bash-4.1$ gcc -v Using built-in specs. Target: x86_64-redhat-linux Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=[URL] --enable-bootstrap --enable-shared --enable-threads=posix --enable-checking=release --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-languages=c,c++,objc,obj-c++,java,fortran,ada --enable-java-awt=gtk --disable-dssi --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-1.5.0.0/jre --enable-libgcj-multifile --enable-java-maintainer-mode --with-ecj-jar=/usr/share/java/eclipse-ecj.jar --disable-libjava-multilib --with-ppl --with-cloog --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux Thread model: posix gcc version 4.4.7 20120313 (Red Hat 4.4.7-16) (GCC) -bash-4.1$ ldd --version ldd (GNU libc) 2.12 The system has newer gcc installed as well. I tried using it, bazel still won't compile. -bash-4.1$ /usr/local/gcc/4.8.4/bin/gcc -v Using built-in specs. COLLECT_GCC=/usr/local/gcc/4.8.4/bin/gcc COLLECT_LTO_WRAPPER=/usr/local/gcc/4.8.4/libexec/gcc/x86_64-unknown-linux-gnu/4.8.4/lto-wrapper Target: x86_64-unknown-linux-gnu Configured with: ../configure --prefix=/usr/local/gcc/4.8.4 Thread model: posix gcc version 4.8.4 (GCC) When I was compiling bazel, I got the following error: bazel-0.1.1/_bin/build-runfiles: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.14' not found Some people also reported this issue: [URL] and [URL] How can I install the missing dependency locally, and have bazel pick up the right library?
compilation, redhat, glibc, tensorflow, bazel
4
6,905
2
https://stackoverflow.com/questions/34668144/how-to-install-bazel-and-tensorflow-on-red-hat-6-7
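GLIBCXX_3.4.14 is provided by the libstdc++ bundled with the local gcc 4.8.4, not by the system one in /usr/lib64; pointing the loader and the build at that toolchain needs no root. A sketch against bazel's bootstrap script of that era — note this clears the libstdc++ error only; the glibc 2.12 floor is a separate constraint that may still bite:

```bash
export LD_LIBRARY_PATH=/usr/local/gcc/4.8.4/lib64:$LD_LIBRARY_PATH
export CC=/usr/local/gcc/4.8.4/bin/gcc
export CXX=/usr/local/gcc/4.8.4/bin/g++
./compile.sh    # bazel's source bootstrap, run inside the unpacked source tree
```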
30,161,222
How do I uninstall all rpms installed today with yum?
I am very familiar with rpm -qa --last and have found it to be very handy on certain occasions. However on this occasion I accidentally got a bit overzealous and installed a large yum group. yum groupinstall "Development tools" Is there an easy way to uninstall everything I just installed? Seems to me there should be some way to combine rpm query and rpm erase. i.e. piping the output from a query command into the remove command. Update: based on user @rickhg12hs feedback It was pointed out that I can see the transaction id with yum history which I did not know about. Here is what that looks like: $ yum history Loaded plugins: fastestmirror, security ID | Login user | Date and time | Action(s) | Altered ---------------------------------------------------------------------------- 69 | <jds> | 2015-05-11 01:31 | Install | 1 68 | <jds> | 2015-05-11 01:31 | Install | 1 67 | <jds> | 2015-05-11 01:10 | I, U | 210 66 | <jds> | 2015-05-05 12:41 | Install | 1 65 | <jds> | 2015-04-30 17:57 | Install | 2 64 | <ansible> | 2015-04-30 10:11 | Install | 1 63 | <ansible> | 2015-04-30 10:11 | Install | 1 62 | <ansible> | 2015-04-30 10:11 | Install | 1 EE 61 | <ansible> | 2015-04-30 10:11 | Install | 1 60 | <ansible> | 2015-04-30 10:11 | Install | 1 59 | <ansible> | 2015-04-30 09:58 | Install | 19 P< 58 | <ansible> | 2015-04-29 18:28 | Install | 1 > 57 | <ansible> | 2015-04-29 18:28 | Install | 1 56 | <ansible> | 2015-04-29 18:28 | Install | 9 55 | <ansible> | 2015-04-29 18:28 | Install | 3 54 | <ansible> | 2015-04-29 18:28 | Install | 1 53 | <ansible> | 2015-04-29 18:27 | I, U | 5 52 | <ansible> | 2015-04-29 18:27 | I, U | 4 51 | <ansible> | 2015-04-29 18:27 | Install | 1 50 | <ansible> | 2015-04-29 18:27 | Install | 1 and tada: There it is, a transaction id. I want to uninstall from transaction id 67. So now that I am a bit wiser I have a new question. So how can I use the yum or rpm command to uninstall a transaction? Note: it was also pointed out to me that I can do a $ yum history info 67 |less Loaded plugins: fastestmirror, security Transaction ID : 67 Begin time : Mon May 11 01:10:09 2015 Begin rpmdb : 1012:bb05598315dcb21812b038a356fa06333d277cde End time : 01:13:25 2015 (196 seconds) End rpmdb : 1174:cb7855e82c7bff545319c38b01a72a48f3ada1ab User : <jds> Return-Code : Success Command Line : groupinstall Additional Development Transaction performed with: Installed rpm-4.8.0-38.el6_6.x86_64 @updates Installed yum-3.2.29-60.el6.centos.noarch @anaconda-CentOS-201410241409.x86_64/6.6 Installed yum-plugin-fastestmirror-1.1.30-30.el6.noarch @anaconda-CentOS-201410241409.x86_64/6.6 Packages Altered: Dep-Install GConf2-2.28.0-6.el6.x86_64 @base Install GConf2-devel-2.28.0-6.el6.x86_64 @base Dep-Install ORBit2-2.14.17-5.el6.x86_64 @base ... snip ... I think this could prove quite helpful under certain circumstances.
centos, fedora, redhat, rpm, yum
4
4,736
3
https://stackoverflow.com/questions/30161222/how-do-i-uninstall-all-rpms-installed-today-with-yum
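The transaction id answers the follow-up directly: yum can roll back a whole transaction by id. A minimal sketch against transaction 67 from the history output above:

```bash
yum history info 67 | less   # review exactly what would be removed
yum history undo 67          # uninstall/downgrade everything transaction 67 did
```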
29,709,020
add a new file in existing RPM
I am modifying the gnome-shell-3.8.xx.rpm package. I have created several patches for the rpm and they are working fine. Now I want to add a new source file to the rpm, but I am not able to find out how to do it. For the patches I followed the approach below: Download the source rpm. Install the rpm, which creates the BUILD, BUILDROOT, RPMS, SOURCES, SPECS and SRPMS directories. Copy my patches into the SOURCES directory. Modify the SPEC file to include my patches. Create the new package with the rpmbuild -bb SPEC/spec_file command.
linux, redhat, rpm, rpmbuild, rpm-spec
4
6,320
1
https://stackoverflow.com/questions/29709020/add-a-new-file-in-existing-rpm
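A new file follows the same path as the patches: declare it as an extra SourceN, copy it into place during %install, and list it under %files. A minimal spec sketch — the destination path and file name below are hypothetical:

```
Source1: my-extension.js

%install
# ...existing install steps, then:
install -D -m 0644 %{SOURCE1} \
    %{buildroot}%{_datadir}/gnome-shell/my-extension.js

%files
%{_datadir}/gnome-shell/my-extension.js
```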
26,734,591
Python script won't run on keyboard shortcut
So I have plenty of scripts which I run from keyboard shortcuts: things like uploading screenshots to imgur and putting links in the clipboard, stuff for digitising plots, etc. I have this current script, which only runs from the terminal and not when I try to run it as a keyboard shortcut. I'm trying to run it via System > Preferences > Keyboard Shortcuts on Scientific Linux 6.4. I've included the script below, in case there's anything specific about it which would stop it from working. #!/usr/bin/python import fileinput, os import subprocess from pygments import highlight from pygments.lexers import get_lexer_by_name, guess_lexer import pygments.formatters as formatters #stdin = "\n".join([line for line in fileinput.input()]) p = subprocess.Popen(["xclip", "-selection", "primary", "-o"], stdout=subprocess.PIPE) code, err = p.communicate() if not err: lexer = guess_lexer(code) print lexer.name imageformatter = formatters.ImageFormatter(linenos=True, cssclass="source", font_name="Liberation Mono") formatter = formatters.HtmlFormatter(linenos=True, cssclass="source") HTMLresult = highlight(code, lexer, formatter) Jpgresult = highlight(code, lexer, imageformatter, outfile=open("syntax.png", "w")) with open("syntax.html", "w") as f: f.write("<html><head><style media='screen' type='text/css'>") f.write(formatters.HtmlFormatter().get_style_defs('.source')) f.write("</style></head><body>") f.write(HTMLresult) f.write("</body></html>") # os.system("pdflatex syntax.tex") os.system("firefox syntax.html") os.system("uploadImage.sh syntax.png") else: print err The way it works is by extracting the clipboard selection using xclip, running pygments on the text, and then both creating an HTML document and opening it in firefox, and uploading an image to imgur (using another script I have, which I know 100% works), and then putting that image URL back into the clipboard. The bin folder it resides in is in my path. I've tried all of: script script.sh (where this file is just a shell script which calls the python script) /home/will/bin/script /home/will/bin/script.sh as the command in the keyboard preferences. If I change the contents of these scripts to just something like notify-send "hello", that does produce the notification message, so I'm fairly confident it's a problem with the script and not with the keyboard shortcuts menu.
python, linux, bash, keyboard-shortcuts, redhat
4
3,641
4
https://stackoverflow.com/questions/26734591/python-script-wont-run-on-keyboard-shortcut
9,985,030
How does the garbage collector work in PHP
I have a PHP script that has a large array of people; it grabs their details from an external resource via SOAP, modifies the data and sends it back. Due to the size of the details I upped PHP's memory to 128MB. After about 4 hours of running (it will probably take 4 days to run) it ran out of memory. Here's the basics of what it does: $people = getPeople(); foreach ($people as $person) { $data = get_personal_data(); if ($data == "blah") { importToPerson("blah", $person); } else { importToPerson("else", $person); } } After it ran out of memory and crashed I decided to initialise $data before the foreach loop, and according to top, memory usage for the process hasn't risen above 7.8% and it's been running for 12 hours. So my question is: does PHP not run a garbage collector on variables initialised inside the loop even if reused? Is the system reclaiming the memory and PHP hasn't marked it as usable yet, and will it eventually crash again? (I've upped it to 256MB now, so I've changed 2 things and I'm not sure which has fixed it; I could probably change my script back to answer this but don't want to wait another 12 hours for it to crash to figure out.) I'm not using the Zend framework, so I don't think the other question like this is relevant. EDIT: I don't actually have an issue with the script or what it's doing. At the moment, as far as all system reporting is concerned I don't have any issues. This question is about the garbage collector and how / when it reclaims resources in a foreach loop and / or how the system reports on memory usage of a php process.
How does the garbage collector work in PHP I have a PHP script that has a large array of people; it grabs their details from an external resource via SOAP, modifies the data and sends it back. Due to the size of the details I upped PHP's memory to 128MB. After about 4 hours of running (it will probably take 4 days to run) it ran out of memory. Here's the basics of what it does: $people = getPeople(); foreach ($people as $person) { $data = get_personal_data(); if ($data == "blah") { importToPerson("blah", $person); } else { importToPerson("else", $person); } } After it ran out of memory and crashed I decided to initialise $data before the foreach loop, and according to top, memory usage for the process hasn't risen above 7.8% and it's been running for 12 hours. So my question is: does PHP not run a garbage collector on variables initialised inside the loop even if reused? Is the system reclaiming the memory and PHP hasn't marked it as usable yet, and will it eventually crash again? (I've upped it to 256MB now, so I've changed 2 things and I'm not sure which has fixed it; I could probably change my script back to answer this but don't want to wait another 12 hours for it to crash to figure out.) I'm not using the Zend framework, so I don't think the other question like this is relevant. EDIT: I don't actually have an issue with the script or what it's doing. At the moment, as far as all system reporting is concerned I don't have any issues. This question is about the garbage collector and how / when it reclaims resources in a foreach loop and / or how the system reports on memory usage of a php process.
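For reference, PHP frees a value as soon as its reference count drops to zero, which is exactly what happens when $data is reassigned on the next iteration; the separate cycle collector (PHP 5.3+) only matters for objects that reference each other. A sketch for watching this per iteration, reusing the question's own placeholder functions:

<?php
foreach ($people as $i => $person) {
    $data = get_personal_data();   // the previous $data's refcount hits 0 here and it is freed
    importToPerson($data == "blah" ? "blah" : "else", $person);
    unset($data);                  // optional: release explicitly before the next iteration
    if ($i % 100 === 0) {
        gc_collect_cycles();       // run the cycle collector (only helps with circular refs)
        printf("iteration %d: %.1f MB\n", $i, memory_get_usage(true) / 1048576);
    }
}

memory_get_usage(true) reports what the engine has reserved from the OS, which is roughly what top shows; logging it per iteration distinguishes a real leak from normal allocator behaviour.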
php, garbage-collection, foreach, redhat
4
2,520
2
https://stackoverflow.com/questions/9985030/how-does-the-garbage-collector-work-in-php
5,009,450
Memcached not showing up in phpinfo()
I've installed libmemcached and the memcached PECL extension for PHP, and for some reason it's not installing correctly. I've got memcached.so in /usr/lib64/php/ with the right permissions and libmemcache.so in /usr/local/lib/ Everything seemed to build correctly without error, and I restarted Apache. I also have the daemon installed. I somehow easily got the Memcache class installed for PHP before, but I realized what I wanted was the Memcached (note the d) class. Let me know if more info is needed! EDIT: I previously had memcache (without the d) working in PHP so I know I was manipulating the correct php.ini! EDIT 2: there WAS indeed an Apache error! Unable to load dynamic library '/usr/lib64/php/modules/memcached.so' - /usr/lib64/php/modules/memcached.so: undefined symbol: php_json_encode in Unknown on line 0
Memcached not showing up in phpinfo() I've installed libmemcached and the memcached PECL extension for PHP, and for some reason it's not installing correctly. I've got memcached.so in /usr/lib64/php/ with the right permissions and libmemcache.so in /usr/local/lib/ Everything seemed to build correctly without error, and I restarted Apache. I also have the daemon installed. I somehow easily got the Memcache class installed for PHP before, but I realized what I wanted was the Memcached (note the d) class. Let me know if more info is needed! EDIT: I previously had memcache (without the d) working in PHP so I know I was manipulating the correct php.ini! EDIT 2: there WAS indeed an Apache error! Unable to load dynamic library '/usr/lib64/php/modules/memcached.so' - /usr/lib64/php/modules/memcached.so: undefined symbol: php_json_encode in Unknown on line 0
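The undefined symbol php_json_encode in EDIT 2 typically means the memcached extension was built with JSON serializer support but json.so is not loaded before it; extension load order follows the ini file names. A sketch of the usual fix, assuming a conf.d-style layout (the file names and paths vary by distribution):

; /etc/php.d/20-json.ini  (hypothetical file name; must sort before memcached)
extension=json.so

; /etc/php.d/30-memcached.ini
extension=memcached.so

Alternatively, the PECL extension can be rebuilt without its JSON serializer support so the symbol is never referenced.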
php, linux, memcached, redhat
4
15,308
5
https://stackoverflow.com/questions/5009450/memcached-not-showing-up-in-phpinfo
77,479,471
http_code 0 with Redhat php curl
I can use lynx to access a website from the server fine. When I try to access the same website with a php curl instruction I get http_code=>0 from curl_getinfo($ch); The same php code on another redhat server in the same data centre works fine. I have admin access to the server. I have tried php 7.4 and php 8.0. curl is enabled when I call phpinfo(); I am unsure how to debug this error! It is RHEL 8.8 This is the output from curl_getinfo($ch); Array ( [url] => [URL] [content_type] => [http_code] => 0 [header_size] => 0 [request_size] => 0 [filetime] => -1 [ssl_verify_result] => 0 [redirect_count] => 0 [total_time] => 0.055895 [namelookup_time] => 0.105978 [connect_time] => 0 [pretransfer_time] => 0 [size_upload] => 0 [size_download] => 0 [speed_download] => 0 [speed_upload] => 0 [download_content_length] => -1 [upload_content_length] => -1 [starttransfer_time] => 0 [redirect_time] => 0 [redirect_url] => [primary_ip] => [certinfo] => Array ( ) [primary_port] => 0 [local_ip] => [local_port] => 0 [http_version] => 0 [protocol] => 0 [ssl_verifyresult] => 0 [scheme] => [appconnect_time_us] => 0 [connect_time_us] => 0 [namelookup_time_us] => 105978 [pretransfer_time_us] => 0 [redirect_time_us] => 0 [starttransfer_time_us] => 0 [total_time_us] => 55895 )
http_code 0 with Redhat php curl I can use lynx to access a website from the server fine. When I try to access the same website with a php curl instruction I get http_code=>0 from curl_getinfo($ch); The same php code on another redhat server in the same data centre works fine. I have admin access to the server. I have tried php 7.4 and php 8.0. curl is enabled when I call phpinfo(); I am unsure how to debug this error! It is RHEL 8.8 This is the output from curl_getinfo($ch); Array ( [url] => [URL] [content_type] => [http_code] => 0 [header_size] => 0 [request_size] => 0 [filetime] => -1 [ssl_verify_result] => 0 [redirect_count] => 0 [total_time] => 0.055895 [namelookup_time] => 0.105978 [connect_time] => 0 [pretransfer_time] => 0 [size_upload] => 0 [size_download] => 0 [speed_download] => 0 [speed_upload] => 0 [download_content_length] => -1 [upload_content_length] => -1 [starttransfer_time] => 0 [redirect_time] => 0 [redirect_url] => [primary_ip] => [certinfo] => Array ( ) [primary_port] => 0 [local_ip] => [local_port] => 0 [http_version] => 0 [protocol] => 0 [ssl_verifyresult] => 0 [scheme] => [appconnect_time_us] => 0 [connect_time_us] => 0 [namelookup_time_us] => 105978 [pretransfer_time_us] => 0 [redirect_time_us] => 0 [starttransfer_time_us] => 0 [total_time_us] => 55895 )
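An http_code of 0 just means the transfer never completed; curl_error() and a verbose trace name the real failure. A debugging sketch (the URL is a placeholder):

<?php
$ch = curl_init('https://example.com/');               // placeholder URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_VERBOSE, true);               // full connect/TLS trace ...
curl_setopt($ch, CURLOPT_STDERR, fopen('/tmp/curl-trace.log', 'w')); // ... goes here
if (curl_exec($ch) === false) {
    printf("cURL errno %d: %s\n", curl_errno($ch), curl_error($ch));
}

If the same code succeeds with php -r on the command line but fails under httpd, check SELinux: on RHEL the httpd_can_network_connect boolean blocks outbound connections from Apache by default (inspect it with getsebool httpd_can_network_connect).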
php, redhat
4
198
2
https://stackoverflow.com/questions/77479471/http-code-0-with-redhat-php-curl
66,827,542
Strategy to keep Tomcat updated?
I maintain a number of RedHat Enterprise Linux (both 7 and 8) servers (>100) with different applications. To keep my sanity, I'm of course using tools such as Ansible and, more to the point of this question, locally mirrored copies of public RPM repositories (using Satellite Server for the purpose). Updates are applied regularly from these repositories to keep the servers secure. A few of these servers need Apache Tomcat installed. This is one of the few applications that is, to my knowledge, not available from any RPM-based repository; it must be installed manually from a tarball. Updates are also manual (aided by an Ansible role, but I still have to be aware of the new version and manually change it). Are there any strategies to keep Tomcat up-to-date with little or no constant attention? Update: I found half of a solution to my problem. By default, Tomcat keeps the installation and the instance configuration mixed together in a single directory tree identified by CATALINA_HOME. That makes updating Tomcat without clobbering your configuration complicated. To solve that, you can put the instance-specific files in a separate directory tree identified with the CATALINA_BASE variable. Upgrading Tomcat then becomes as easy as: Download the new tarball. Untar it to a new location. Review the readme and changelog for any breaking changes. Update the CATALINA_HOME variable to point to the new location, while keeping the CATALINA_BASE variable unchanged. Restart Tomcat, using the scripts in the new CATALINA_HOME bin directory. I am not providing code here because where and how you set CATALINA_HOME and CATALINA_BASE will vary. I set both variables in the service unit file that also starts Tomcat. Still open: finding a way to automatically find out when a new release of Tomcat is published.
Strategy to keep Tomcat updated? I maintain a number of RedHat Enterprise Linux (both 7 and 8) servers (>100) with different applications. To keep my sanity, I'm of course using tools such as Ansible and, more to the point of this question, locally mirrored copies of public RPM repositories (using Satellite Server for the purpose). Updates are applied regularly from these repositories to keep the servers secure. A few of these servers need Apache Tomcat installed. This is one of the few applications that is, to my knowledge, not available from any RPM-based repository; it must be installed manually from a tarball. Updates are also manual (aided by an Ansible role, but I still have to be aware of the new version and manually change it). Are there any strategies to keep Tomcat up-to-date with little or no constant attention? Update: I found half of a solution to my problem. By default, Tomcat keeps the installation and the instance configuration mixed together in a single directory tree identified by CATALINA_HOME. That makes updating Tomcat without clobbering your configuration complicated. To solve that, you can put the instance-specific files in a separate directory tree identified with the CATALINA_BASE variable. Upgrading Tomcat then becomes as easy as: Download the new tarball. Untar it to a new location. Review the readme and changelog for any breaking changes. Update the CATALINA_HOME variable to point to the new location, while keeping the CATALINA_BASE variable unchanged. Restart Tomcat, using the scripts in the new CATALINA_HOME bin directory. I am not providing code here because where and how you set CATALINA_HOME and CATALINA_BASE will vary. I set both variables in the service unit file that also starts Tomcat. Still open: finding a way to automatically find out when a new release of Tomcat is published.
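For the still-open piece, one low-tech approach is to scrape the version directories on the Apache download site from a cron job or Ansible task. A sketch, noting that the URL layout is an assumption that can change, and that /opt/tomcat/current is a hypothetical local path:

#!/bin/bash
# Report the newest published Tomcat 9 release versus the running one.
latest=$(curl -fsSL https://downloads.apache.org/tomcat/tomcat-9/ \
         | grep -oE 'v9\.[0-9]+\.[0-9]+' | sort -uV | tail -n1)
current=$(/opt/tomcat/current/bin/version.sh 2>/dev/null \
         | sed -n 's#^Server version: Apache Tomcat/#v#p')
if [ -n "$latest" ] && [ "$latest" != "$current" ]; then
    echo "Tomcat update available: ${current:-unknown} -> $latest"
fi

Run daily, this pairs naturally with the CATALINA_BASE split above: the alert tells you when to untar a new CATALINA_HOME.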
tomcat, redhat
4
3,436
3
https://stackoverflow.com/questions/66827542/strategy-to-keep-tomcat-updated
64,339,661
Access user info from SecurityIdentity using quarkus-oidc
I am using quarkus-oidc with keycloak and I have the following resource package graphql; import io.quarkus.oidc.IdToken; import io.quarkus.oidc.UserInfo; import io.quarkus.security.Authenticated; import io.quarkus.security.identity.SecurityIdentity; import org.eclipse.microprofile.graphql.Description; import org.eclipse.microprofile.graphql.GraphQLApi; import org.eclipse.microprofile.graphql.Query; import org.eclipse.microprofile.jwt.JsonWebToken; import javax.inject.Inject; import javax.ws.rs.core.Context; import javax.ws.rs.core.SecurityContext; @GraphQLApi public class MeResource { @Inject SecurityIdentity securityIdentity; @Query @Authenticated public User me() { return new User(securityIdentity); } } My quarkus configurations is the following quarkus.oidc.auth-server-url=[URL] quarkus.oidc.discovery-enabled=true quarkus.oidc.client-id=my-app quarkus.oidc.credentials.secret=********** quarkus.oidc.enabled=true quarkus.keycloak.policy-enforcer.enable=true quarkus.http.port=8081 I am calling the query as follow curl --request POST \ --url [URL] \ --header 'authorization: bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICItVFJsRHlFWnB3MGRXRzd4cUZPajl6U2V6aklMTURUaFFkNl9YU0JKYzRJIn0.eyJleHAiOjE2MDI2Mzk2NTEsImlhdCI6MTYwMjYwNDk5MywiYXV0aF90aW1lIjoxNjAyNjAzNzMwLCJqdGkiOiI2NTI4OWZiMC1kNTQ1LTQ3NWQtYmQxZi05Mzk0OTQ1ODk2MGUiLCJpc3MiOiJodHRwOi8vbG9jYWxob3N0OjgwODAvYXV0aC9yZWFsbXMvdm90ZXMiLCJzdWIiOiI1YzhkMGU5OS1jNGY3LTQ3NjctODFjYi0yYjU0ZDdiN2Q0NDUiLCJ0eXAiOiJCZWFyZXIiLCJhenAiOiJ2b3Rlcy1hcHAiLCJub25jZSI6IiIsInNlc3Npb25fc3RhdGUiOiJiM2Y2NjYyOS1hN2FiLTQ3OWItODFmZS0yOGU0MDI3MDVjMzEiLCJhY3IiOiIwIiwicmVzb3VyY2VfYWNjZXNzIjp7ImFjY291bnQiOnsicm9sZXMiOlsibWFuYWdlLWFjY291bnQiLCJtYW5hZ2UtYWNjb3VudC1saW5rcyIsInZpZXctcHJvZmlsZSJdfX0sInNjb3BlIjoib3BlbmlkIHByb2ZpbGUgZW1haWwiLCJlbWFpbF92ZXJpZmllZCI6dHJ1ZSwibmFtZSI6IkFsYmVydG8gUGVsbGl6em9uIiwicHJlZmVycmVkX3VzZXJuYW1lIjoiYXBlbGxpenoiLCJnaXZlbl9uYW1lIjoiQWxiZXJ0byIsImZhbWlseV9uYW1lIjoiUGVsbGl6em9uIiwiZW1haWwiOiJhbGJlcnRvQG1pa2FtYWkuY29tIn0.hadUk8HzFnn0njk6U5za2N568QTX5w_opR8Vs7ub-hoyAWVND3fjyQJI9mpwKEvqp_ayWHyAcHoGAM16FXnjXKZqNl-iTbpgKNgV9-eMqU7NbR9UokZgGVUUs15cMANlihPiBm7919oG9zkzetNoo7h3ouXwQJwx5nLTIvDJAT-sCgUR5uygY_fEd5W6Jl8Z6kmzrAXKPJP2XSu3pMG0QdWT9Zz0IlW_g91H8bfY0g5OO4cgSuC5pZSu4UiLSKiGK45z_Y-7J-rosItYhIYiJ8v__ZeTeribpKbt14RfuvgVYpqbb6uAgYQkF6ho6sQMhg69sY6RieMG9jM07xCufw' \ --header 'content-type: application/json' \ --data '{"query":"query {\n me {\n name\n }\n}"}' The content of the jwt is { "exp": 1602639651, "iat": 1602604993, "auth_time": 1602603730, "jti": "65289fb0-d545-475d-bd1f-93949458960e", "iss": "[URL] "sub": "5c8d0e99-c4f7-4767-81cb-2b54d7b7d445", "typ": "Bearer", "azp": "votes-app", "nonce": "", "session_state": "b3f66629-a7ab-479b-81fe-28e402705c31", "acr": "0", "resource_access": { "account": { "roles": [ "manage-account", "manage-account-links", "view-profile" ] } }, "scope": "openid profile email", "email_verified": true, "name": "Alberto Pellizzon", "preferred_username": "apellizz", "given_name": "Alberto", "family_name": "Pellizzon", "email": "alberto@mikamai.com" } How can I access the user info stored in the token using quarkus oidc? I' have seen that there is an option quarkus.oidc.authentication.user-info-required=true which will be calling the keycloak user-info endpoint to resolve the info from the token but it seems it is only working for opaque tokens which keycloak does not provide!
Access user info from SecurityIdentity using quarkus-oidc I am using quarkus-oidc with keycloak and I have the following resource package graphql; import io.quarkus.oidc.IdToken; import io.quarkus.oidc.UserInfo; import io.quarkus.security.Authenticated; import io.quarkus.security.identity.SecurityIdentity; import org.eclipse.microprofile.graphql.Description; import org.eclipse.microprofile.graphql.GraphQLApi; import org.eclipse.microprofile.graphql.Query; import org.eclipse.microprofile.jwt.JsonWebToken; import javax.inject.Inject; import javax.ws.rs.core.Context; import javax.ws.rs.core.SecurityContext; @GraphQLApi public class MeResource { @Inject SecurityIdentity securityIdentity; @Query @Authenticated public User me() { return new User(securityIdentity); } } My quarkus configurations is the following quarkus.oidc.auth-server-url=[URL] quarkus.oidc.discovery-enabled=true quarkus.oidc.client-id=my-app quarkus.oidc.credentials.secret=********** quarkus.oidc.enabled=true quarkus.keycloak.policy-enforcer.enable=true quarkus.http.port=8081 I am calling the query as follow curl --request POST \ --url [URL] \ --header 'authorization: bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICItVFJsRHlFWnB3MGRXRzd4cUZPajl6U2V6aklMTURUaFFkNl9YU0JKYzRJIn0.eyJleHAiOjE2MDI2Mzk2NTEsImlhdCI6MTYwMjYwNDk5MywiYXV0aF90aW1lIjoxNjAyNjAzNzMwLCJqdGkiOiI2NTI4OWZiMC1kNTQ1LTQ3NWQtYmQxZi05Mzk0OTQ1ODk2MGUiLCJpc3MiOiJodHRwOi8vbG9jYWxob3N0OjgwODAvYXV0aC9yZWFsbXMvdm90ZXMiLCJzdWIiOiI1YzhkMGU5OS1jNGY3LTQ3NjctODFjYi0yYjU0ZDdiN2Q0NDUiLCJ0eXAiOiJCZWFyZXIiLCJhenAiOiJ2b3Rlcy1hcHAiLCJub25jZSI6IiIsInNlc3Npb25fc3RhdGUiOiJiM2Y2NjYyOS1hN2FiLTQ3OWItODFmZS0yOGU0MDI3MDVjMzEiLCJhY3IiOiIwIiwicmVzb3VyY2VfYWNjZXNzIjp7ImFjY291bnQiOnsicm9sZXMiOlsibWFuYWdlLWFjY291bnQiLCJtYW5hZ2UtYWNjb3VudC1saW5rcyIsInZpZXctcHJvZmlsZSJdfX0sInNjb3BlIjoib3BlbmlkIHByb2ZpbGUgZW1haWwiLCJlbWFpbF92ZXJpZmllZCI6dHJ1ZSwibmFtZSI6IkFsYmVydG8gUGVsbGl6em9uIiwicHJlZmVycmVkX3VzZXJuYW1lIjoiYXBlbGxpenoiLCJnaXZlbl9uYW1lIjoiQWxiZXJ0byIsImZhbWlseV9uYW1lIjoiUGVsbGl6em9uIiwiZW1haWwiOiJhbGJlcnRvQG1pa2FtYWkuY29tIn0.hadUk8HzFnn0njk6U5za2N568QTX5w_opR8Vs7ub-hoyAWVND3fjyQJI9mpwKEvqp_ayWHyAcHoGAM16FXnjXKZqNl-iTbpgKNgV9-eMqU7NbR9UokZgGVUUs15cMANlihPiBm7919oG9zkzetNoo7h3ouXwQJwx5nLTIvDJAT-sCgUR5uygY_fEd5W6Jl8Z6kmzrAXKPJP2XSu3pMG0QdWT9Zz0IlW_g91H8bfY0g5OO4cgSuC5pZSu4UiLSKiGK45z_Y-7J-rosItYhIYiJ8v__ZeTeribpKbt14RfuvgVYpqbb6uAgYQkF6ho6sQMhg69sY6RieMG9jM07xCufw' \ --header 'content-type: application/json' \ --data '{"query":"query {\n me {\n name\n }\n}"}' The content of the jwt is { "exp": 1602639651, "iat": 1602604993, "auth_time": 1602603730, "jti": "65289fb0-d545-475d-bd1f-93949458960e", "iss": "[URL] "sub": "5c8d0e99-c4f7-4767-81cb-2b54d7b7d445", "typ": "Bearer", "azp": "votes-app", "nonce": "", "session_state": "b3f66629-a7ab-479b-81fe-28e402705c31", "acr": "0", "resource_access": { "account": { "roles": [ "manage-account", "manage-account-links", "view-profile" ] } }, "scope": "openid profile email", "email_verified": true, "name": "Alberto Pellizzon", "preferred_username": "apellizz", "given_name": "Alberto", "family_name": "Pellizzon", "email": "alberto@mikamai.com" } How can I access the user info stored in the token using quarkus oidc? I' have seen that there is an option quarkus.oidc.authentication.user-info-required=true which will be calling the keycloak user-info endpoint to resolve the info from the token but it seems it is only working for opaque tokens which keycloak does not provide!
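For a JWT bearer token there is no need for the UserInfo round-trip: quarkus-oidc verifies the token and exposes its claims through the MicroProfile JsonWebToken, which can be injected alongside (or instead of) SecurityIdentity. A minimal sketch using the claims present in the token above:

import javax.inject.Inject;
import org.eclipse.microprofile.jwt.JsonWebToken;
import org.eclipse.microprofile.graphql.GraphQLApi;
import org.eclipse.microprofile.graphql.Query;
import io.quarkus.security.Authenticated;

@GraphQLApi
public class MeResource {

    @Inject
    JsonWebToken jwt;   // the verified access token, claims included

    @Query
    @Authenticated
    public String myName() {
        // "name" and "preferred_username" are standard OIDC profile claims
        return jwt.getClaim("name");
    }
}

securityIdentity.getPrincipal() gives only the principal name, which is why injecting the token itself is the simplest route to the other claims.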
java, keycloak, openid-connect, redhat, quarkus
4
6,971
4
https://stackoverflow.com/questions/64339661/access-user-info-from-securityidentity-using-quarkus-oidc
59,728,223
Ansible Unable to set password for GRUB bootloader in Redhat
As Salam-o-Alikum, I have written an Ansible playbook for setting a GRUB bootloader password on RedHat and Ubuntu; there are no errors and I can see the changes in grub2.cfg in both locations. It's weird that when I reboot both machines, the Ubuntu machine asks for a username and password but the Redhat machine doesn't. I have seen 50+ tutorials; the procedure is the same and it's pretty easy, but I don't know why it's behaving like that. Any help would be greatly appreciated. Here is what I've tried. Hardening.yml --- - hosts: localhost become: yes gather_facts: true vars: grub_password_v1_passwd: puffersoft grub_password_v2_passwd: grub.pbkdf2.sha512.10000.A4DE89CBFB84A34253A71D5DD4939BED709AB2F24E909062A902D4751E91E3E82403D9D216BD506091CAA5E92AB958FBEF4B4B4B7CB0352F8191D47A9C93239F.0B07DD3D5AD46BF0F640136D448F2CFB84A6E05B76974C51B031C8B31D6F9B556802A28E95A5E65EC1F95983E24618EE2E9B21A0233AAA8D264781FE57DCE837 grub_user: cloud_user tasks: - stat: path=/sys/firmware/efi/efivars/ register: grub_efi - debug: vars=grub_efi when: ansible_distribution == 'Redhat' tags: grub-password - name: "Installing Template on Redhat Systems" template: src: grub-redhat.j2 dest: /etc/grub.d/01_users owner: root group: root mode: '0700' notify: - grub2-mkconfig EFI - grub2-mkconfig MBR when: ansible_distribution == 'Redhat' tags: grub-password - name: "Installing Template on Ubuntu Systems" template: src: grub-ubuntu.j2 dest: /etc/grub.d/40_custom owner: root group: root mode: '0700' notify: - grub2-mkconfig EFI - grub2-mkconfig MBR when: ansible_distribution == 'Ubuntu' tags: grub-password - name: "Grub EFI | Add Password" lineinfile: dest=/etc/grub2-efi.cfg regexp = "^password_pbkdf2 {{ grub_user }}" state=present insertafter = EOF line= "password_pbkdf2 {{ grub_user }} {{ grub_password_v2_passwd }}" when: grub_efi.stat.exists == True tags: grub-password - name: "Grub v2 MBR | Add Password" lineinfile: dest=/etc/grub2.cfg regexp = "^password_pbkdf2 {{ grub_user }}" state=present insertafter = EOF line= "password_pbkdf2 {{ grub_user }} {{ grub_password_v2_passwd }}" when: grub_efi.stat.exists == False - name: "grub2-mkconfig EFI" command: grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg when: grub_efi.stat.exists == True - name: "grub2-mkconfig MBR" command: grub2-mkconfig -o /boot/grub2/grub.cfg when: grub_efi.stat.exists == False grub-redhat.j2 #!/bin/sh -e cat << EOF if [ -f \${prefix}/user.cfg ]; then source \${prefix}/user.cfg if [ -n "\${GRUB2_PASSWORD}" ]; then set superusers="root" export superusers password_pbkdf2 root \${GRUB2_PASSWORD} fi fi set supperusers="{{ grub_user }}" password_pbkdf2 {{ grub_user }} {{ grub_password_v2_passwd }} EOF grub-ubuntu.j2 #!/bin/sh tail -n +3 $0 # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. set superusers="{{grub_user}}" password_pbkdf2 {{grub_user}} {{grub_password_v2_passwd}} Ansible Distribution Red Hat 7.7 Maipo and Ubuntu 18.04 bionic
Ansible Unable to set password for GRUB bootloader in Redhat As Salam-o-Alikum, I have written an Ansible playbook for setting a GRUB bootloader password on RedHat and Ubuntu; there are no errors and I can see the changes in grub2.cfg in both locations. It's weird that when I reboot both machines, the Ubuntu machine asks for a username and password but the Redhat machine doesn't. I have seen 50+ tutorials; the procedure is the same and it's pretty easy, but I don't know why it's behaving like that. Any help would be greatly appreciated. Here is what I've tried. Hardening.yml --- - hosts: localhost become: yes gather_facts: true vars: grub_password_v1_passwd: puffersoft grub_password_v2_passwd: grub.pbkdf2.sha512.10000.A4DE89CBFB84A34253A71D5DD4939BED709AB2F24E909062A902D4751E91E3E82403D9D216BD506091CAA5E92AB958FBEF4B4B4B7CB0352F8191D47A9C93239F.0B07DD3D5AD46BF0F640136D448F2CFB84A6E05B76974C51B031C8B31D6F9B556802A28E95A5E65EC1F95983E24618EE2E9B21A0233AAA8D264781FE57DCE837 grub_user: cloud_user tasks: - stat: path=/sys/firmware/efi/efivars/ register: grub_efi - debug: vars=grub_efi when: ansible_distribution == 'Redhat' tags: grub-password - name: "Installing Template on Redhat Systems" template: src: grub-redhat.j2 dest: /etc/grub.d/01_users owner: root group: root mode: '0700' notify: - grub2-mkconfig EFI - grub2-mkconfig MBR when: ansible_distribution == 'Redhat' tags: grub-password - name: "Installing Template on Ubuntu Systems" template: src: grub-ubuntu.j2 dest: /etc/grub.d/40_custom owner: root group: root mode: '0700' notify: - grub2-mkconfig EFI - grub2-mkconfig MBR when: ansible_distribution == 'Ubuntu' tags: grub-password - name: "Grub EFI | Add Password" lineinfile: dest=/etc/grub2-efi.cfg regexp = "^password_pbkdf2 {{ grub_user }}" state=present insertafter = EOF line= "password_pbkdf2 {{ grub_user }} {{ grub_password_v2_passwd }}" when: grub_efi.stat.exists == True tags: grub-password - name: "Grub v2 MBR | Add Password" lineinfile: dest=/etc/grub2.cfg regexp = "^password_pbkdf2 {{ grub_user }}" state=present insertafter = EOF line= "password_pbkdf2 {{ grub_user }} {{ grub_password_v2_passwd }}" when: grub_efi.stat.exists == False - name: "grub2-mkconfig EFI" command: grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg when: grub_efi.stat.exists == True - name: "grub2-mkconfig MBR" command: grub2-mkconfig -o /boot/grub2/grub.cfg when: grub_efi.stat.exists == False grub-redhat.j2 #!/bin/sh -e cat << EOF if [ -f \${prefix}/user.cfg ]; then source \${prefix}/user.cfg if [ -n "\${GRUB2_PASSWORD}" ]; then set superusers="root" export superusers password_pbkdf2 root \${GRUB2_PASSWORD} fi fi set supperusers="{{ grub_user }}" password_pbkdf2 {{ grub_user }} {{ grub_password_v2_passwd }} EOF grub-ubuntu.j2 #!/bin/sh tail -n +3 $0 # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. set superusers="{{grub_user}}" password_pbkdf2 {{grub_user}} {{grub_password_v2_passwd}} Ansible Distribution Red Hat 7.7 Maipo and Ubuntu 18.04 bionic
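Three things are worth checking on the RHEL side. First, Ansible reports the distribution fact as 'RedHat' (capital H), so every when: ansible_distribution == 'Redhat' clause above silently never matches on the RedHat box. Second, grub-redhat.j2 sets "supperusers" rather than "superusers", so even if the template were installed the custom superuser would not take effect. Third, the stock /etc/grub.d/01_users on RHEL 7 reads the hash from ${prefix}/user.cfg, which is exactly what grub2-setpassword manages, so writing that file is often more reliable than templating 01_users. A sketch reusing the question's variables:

# Sketch: drop user.cfg where the stock 01_users script looks for it (RHEL 7).
# Note the fact value is 'RedHat', not 'Redhat'. The superuser this mechanism
# configures is "root", as hard-coded in the stock 01_users.
- name: "Set GRUB2 password via user.cfg"
  copy:
    dest: "{{ '/boot/efi/EFI/redhat/user.cfg' if grub_efi.stat.exists else '/boot/grub2/user.cfg' }}"
    content: "GRUB2_PASSWORD={{ grub_password_v2_passwd }}\n"
    owner: root
    group: root
    mode: '0600'
  when: ansible_distribution == 'RedHat'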
ansible, yaml, redhat, bootloader, grub2
4
7,817
1
https://stackoverflow.com/questions/59728223/ansible-unable-to-set-password-for-grub-bootloader-in-redhat
46,183,700
How to solve: RPC: Port mapper failure - RPC: Unable to receive errno = Connection refused
I'm trying to set up an NFS server. I have two programs, server and client. I start the server, which starts without errors; then I create a file with the client, and the file is created correctly, but when I try to write something to that file I get the error: call failed: RPC: Unable to receive; errno = Connection refused And here is my rpcinfo -p output # rpcinfo -p program vers proto port service 100000 4 tcp 111 portmapper 100000 3 tcp 111 portmapper 100000 2 tcp 111 portmapper 100000 4 udp 111 portmapper 100000 3 udp 111 portmapper 100000 2 udp 111 portmapper 100024 1 udp 662 status 100024 1 tcp 662 status 100005 1 udp 892 mountd 100005 1 tcp 892 mountd 100005 2 udp 892 mountd 100005 2 tcp 892 mountd 100005 3 udp 892 mountd 100005 3 tcp 892 mountd 100003 3 tcp 2049 nfs 100003 4 tcp 2049 nfs 100227 3 tcp 2049 nfs_acl 100003 3 udp 2049 nfs 100227 3 udp 2049 nfs_acl 100021 1 udp 58383 nlockmgr 100021 3 udp 58383 nlockmgr 100021 4 udp 58383 nlockmgr 100021 1 tcp 39957 nlockmgr 100021 3 tcp 39957 nlockmgr 100021 4 tcp 39957 nlockmgr 536870913 1 udp 997 536870913 1 tcp 999 Does anyone know how I can solve this problem? NOTE: I am using my laptop as server and client at the same time.
How to solve: RPC: Port mapper failure - RPC: Unable to receive errno = Connection refused I'm trying to set up an NFS server. I have two programs, server and client. I start the server, which starts without errors; then I create a file with the client, and the file is created correctly, but when I try to write something to that file I get the error: call failed: RPC: Unable to receive; errno = Connection refused And here is my rpcinfo -p output # rpcinfo -p program vers proto port service 100000 4 tcp 111 portmapper 100000 3 tcp 111 portmapper 100000 2 tcp 111 portmapper 100000 4 udp 111 portmapper 100000 3 udp 111 portmapper 100000 2 udp 111 portmapper 100024 1 udp 662 status 100024 1 tcp 662 status 100005 1 udp 892 mountd 100005 1 tcp 892 mountd 100005 2 udp 892 mountd 100005 2 tcp 892 mountd 100005 3 udp 892 mountd 100005 3 tcp 892 mountd 100003 3 tcp 2049 nfs 100003 4 tcp 2049 nfs 100227 3 tcp 2049 nfs_acl 100003 3 udp 2049 nfs 100227 3 udp 2049 nfs_acl 100021 1 udp 58383 nlockmgr 100021 3 udp 58383 nlockmgr 100021 4 udp 58383 nlockmgr 100021 1 tcp 39957 nlockmgr 100021 3 tcp 39957 nlockmgr 100021 4 tcp 39957 nlockmgr 536870913 1 udp 997 536870913 1 tcp 999 Does anyone know how I can solve this problem? NOTE: I am using my laptop as server and client at the same time.
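"Connection refused" on an RPC call means nothing is listening on the port the portmapper handed back, or a firewall is rejecting it. Since rpcinfo -p looks healthy, the usual next step is to probe each registered service directly and check the firewall; a sketch (the service and firewall commands assume a systemd/firewalld-based RHEL):

# Probe the registered services instead of just listing them:
rpcinfo -t localhost nfs 3        # NFS v3 over TCP
rpcinfo -t localhost mountd 3     # mountd
rpcinfo -u localhost nlockmgr 4   # lock manager over UDP

# Confirm the daemons are actually up and the firewall lets them through:
systemctl status rpcbind nfs-server
firewall-cmd --permanent --add-service=nfs --add-service=rpc-bind --add-service=mountd
firewall-cmd --reload

Whichever probe returns "Connection refused" points at the service (or the dynamically assigned nlockmgr port) the firewall is blocking.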
redhat, rpc, mount, nfs
4
17,891
1
https://stackoverflow.com/questions/46183700/how-to-solve-rpc-port-mapper-failure-rpc-unable-to-receive-errno-connecti
30,166,825
multiple KVM guests script using virt-install
I would like to install 3 KVM guests automatically using kickstart. I have no problem installing them manually using the virt-install command. virt-install \ -n dal \ -r 2048 \ --vcpus=1 \ --os-variant=rhel6 \ --accelerate \ --network bridge:br1,model=virtio \ --disk path=/home/dal_internal,size=128 --force \ --location="/home/kvm.iso" \ --nographics \ --extra-args="ks=file:/dal_kick.cfg console=tty0 console=ttyS0,115200n8 serial" \ --initrd-inject=/opt/dal_kick.cfg \ --virt-type kvm I have 3 scripts like the one above. I would like to install all 3 at the same time: how can I disable the console, or run it in the background?
multiple KVM guests script using virt-install I would like to install 3 KVM guests automatically using kickstart. I have no problem installing them manually using the virt-install command. virt-install \ -n dal \ -r 2048 \ --vcpus=1 \ --os-variant=rhel6 \ --accelerate \ --network bridge:br1,model=virtio \ --disk path=/home/dal_internal,size=128 --force \ --location="/home/kvm.iso" \ --nographics \ --extra-args="ks=file:/dal_kick.cfg console=tty0 console=ttyS0,115200n8 serial" \ --initrd-inject=/opt/dal_kick.cfg \ --virt-type kvm I have 3 scripts like the one above. I would like to install all 3 at the same time: how can I disable the console, or run it in the background?
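virt-install attaches a text console here because of --nographics; adding --noautoconsole makes it return as soon as the installation starts, so all three can be launched in parallel. A sketch, where the per-guest names and kickstart files are placeholders for the three variants:

#!/bin/bash
for guest in dal guest2 guest3; do
  virt-install \
    -n "$guest" -r 2048 --vcpus=1 --os-variant=rhel6 --accelerate \
    --network bridge:br1,model=virtio \
    --disk path=/home/${guest}_internal,size=128 --force \
    --location=/home/kvm.iso \
    --nographics --noautoconsole \
    --extra-args="ks=file:/${guest}_kick.cfg console=tty0 console=ttyS0,115200n8 serial" \
    --initrd-inject="/opt/${guest}_kick.cfg" \
    --virt-type kvm &
done
wait   # returns once every virt-install process has exited
# Follow a given install later with: virsh console <name>, or check: virsh list --all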
automation, virtual, redhat, openstack, kvm
4
2,432
3
https://stackoverflow.com/questions/30166825/multiple-kvm-guests-script-using-virt-install
11,005,396
how to write a bash shell script to ssh to a remote machine and change user and export an env variable and do other commands
I have a webservice that runs on multiple different remote Redhat machines. Whenever I want to update the service I sync down the new webservice source code, written in Perl, from a version control depot (I use Perforce) and restart the service using that newly synced Perl code. I think it is too boring to log in to the remote machines one by one and do that series of commands to restart the service manually. So I wrote a bash script, update.sh, like the one below, in order to "do it one time one place, update all machines". I run this shell script on my local machine. But it seems that it won't work: it only executes the first command, "sudo -u webservice_username -i", as far as I can tell from the command line on my local machine. (The code below only shows how it updates one of the remote webservices. The "export P4USER=myname" is for use of the Perforce client.) #!/bin/sh ssh myname@remotehost1 'sudo -u webservice_username -i ; export P4USER=myname; cd dir ; p4 sync ; cd bin ; ./prog --domain=config_file restart ; tail -f ../logs/service.log' How do I know only the first command is executed? Because after I input the password for ssh on my local machine, it shows: Your environment has been modified. Please check /tmp/webservice.env. And it just gets stuck there; it never returns. As suggested by a commenter, I added "-t" for ssh: #!/bin/sh ssh -t myname@remotehost1 'sudo -u webservice_username -i ; export P4USER=myname; cd dir ; p4 sync ; cd bin ; ./prog --domain=config_file restart ; tail -f ../logs/service.log' This lets the local command line return. But it seems weird: it cannot cd to that "dir"; it says "cd:dir: No such file or directory" and also "p4: command not found". So it looks like the sudo -u command executes with no effect and the export command has either not executed or executed with no effect. A detailed local log is below: Your environment has been modified. Please check /tmp/dir/.env. bash: line 0: cd: dir: No such file or directory bash: p4: command not found bash: line 0: cd: bin: No such file or directory bash: ./prog: No such file or directory tail: cannot open `../logs/service.log' for reading: No such file or directory tail: no files remaining
how to write a bash shell script to ssh to a remote machine and change user and export an env variable and do other commands I have a webservice that runs on multiple different remote Redhat machines. Whenever I want to update the service I sync down the new webservice source code, written in Perl, from a version control depot (I use Perforce) and restart the service using that newly synced Perl code. I think it is too boring to log in to the remote machines one by one and do that series of commands to restart the service manually. So I wrote a bash script, update.sh, like the one below, in order to "do it one time one place, update all machines". I run this shell script on my local machine. But it seems that it won't work: it only executes the first command, "sudo -u webservice_username -i", as far as I can tell from the command line on my local machine. (The code below only shows how it updates one of the remote webservices. The "export P4USER=myname" is for use of the Perforce client.) #!/bin/sh ssh myname@remotehost1 'sudo -u webservice_username -i ; export P4USER=myname; cd dir ; p4 sync ; cd bin ; ./prog --domain=config_file restart ; tail -f ../logs/service.log' How do I know only the first command is executed? Because after I input the password for ssh on my local machine, it shows: Your environment has been modified. Please check /tmp/webservice.env. And it just gets stuck there; it never returns. As suggested by a commenter, I added "-t" for ssh: #!/bin/sh ssh -t myname@remotehost1 'sudo -u webservice_username -i ; export P4USER=myname; cd dir ; p4 sync ; cd bin ; ./prog --domain=config_file restart ; tail -f ../logs/service.log' This lets the local command line return. But it seems weird: it cannot cd to that "dir"; it says "cd:dir: No such file or directory" and also "p4: command not found". So it looks like the sudo -u command executes with no effect and the export command has either not executed or executed with no effect. A detailed local log is below: Your environment has been modified. Please check /tmp/dir/.env. bash: line 0: cd: dir: No such file or directory bash: p4: command not found bash: line 0: cd: bin: No such file or directory bash: ./prog: No such file or directory tail: cannot open `../logs/service.log' for reading: No such file or directory tail: no files remaining
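The root cause is that sudo -u webservice_username -i starts an interactive shell and pauses there; everything after the first ; runs later, in the original user's shell, which is why p4 and dir are missing. Passing the whole sequence to one non-interactive shell under sudo keeps every command in the service account with its login environment; a sketch reusing the question's placeholders ("dir", "prog", etc.):

#!/bin/sh
# Everything between the inner single quotes runs as webservice_username in one
# "bash -l" shell, so that account's login PATH (with p4) and home apply throughout.
ssh -t myname@remotehost1 "sudo -u webservice_username bash -lc '
  export P4USER=myname
  cd dir && p4 sync &&
  cd bin && ./prog --domain=config_file restart &&
  tail -f ../logs/service.log
'"

The && chaining also stops the sequence at the first failure instead of blindly running the rest.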
linux, bash, ssh, perforce, redhat
4
18,504
3
https://stackoverflow.com/questions/11005396/how-to-write-a-bash-shell-script-to-ssh-to-remote-machine-and-change-user-and-ex
8,228,807
BASH shell script echo to output on same line
I have a simple BASH shell script which checks the HTTP response code of a curl command. The logic is fine, but I am stuck on "simply" printing out the "output". I am using GNU bash, version 3.2.25(1)-release (x86_64-redhat-linux-gnu) I would like to output the URL with a tab - then the 404|200|501|502 response. For example: [URL] I am also getting a strange error where the "http" part of a URL is being overwritten with the 200|404|501|502. Is there a basic BASH shell scripting (feature) which I am not using? Thanks, Miles. #!/bin/bash NAMES=$(cat $1) for i in $NAMES do URL=$i statuscode=$(curl -s -I -L $i |grep 'HTTP' | awk '{print $2}') case $statuscode in 200) echo -ne $URL\t$statuscode;; 301) echo -ne "\t $statuscode";; 302) echo -ne "\t $statuscode";; 404) echo -ne "\t $statuscode";; esac done
BASH shell script echo to output on same line I have a simple BASH shell script which checks the HTTP response code of a curl command. The logic is fine, but I am stuck on "simply" printing out the "output". I am using GNU bash, version 3.2.25(1)-release (x86_64-redhat-linux-gnu) I would like to output the URL with a tab - then the 404|200|501|502 response. For example: [URL] I am also getting a strange error where the "http" part of a URL is being overwritten with the 200|404|501|502. Is there a basic BASH shell scripting (feature) which I am not using? Thanks, Miles. #!/bin/bash NAMES=$(cat $1) for i in $NAMES do URL=$i statuscode=$(curl -s -I -L $i |grep 'HTTP' | awk '{print $2}') case $statuscode in 200) echo -ne $URL\t$statuscode;; 301) echo -ne "\t $statuscode";; 302) echo -ne "\t $statuscode";; 404) echo -ne "\t $statuscode";; esac done
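Two details explain the odd output. HTTP header lines end in \r\n, so $statuscode carries a trailing carriage return; when echoed, that \r sends the cursor back to column one and the code prints over the start of the URL, which is exactly the "http part overwritten" symptom. Also, echo -ne only expands \t when it is inside quotes, and the 301/302/404 branches never print the URL at all. A printf-based sketch that handles all of this:

#!/bin/bash
# Print "<url><TAB><status>" one line per URL; tr strips the \r from the headers.
while read -r url; do
  statuscode=$(curl -s -I -L "$url" | awk '/^HTTP/{code=$2} END{print code}' | tr -d '\r')
  printf '%s\t%s\n' "$url" "$statuscode"
done < "$1"

The awk END block keeps only the last HTTP status line, so with -L the final code after redirects is reported rather than the first 301/302.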
bash, curl, redhat
4
30,568
3
https://stackoverflow.com/questions/8228807/bash-shell-script-echo-to-output-on-same-line
4,320,895
Java memory leak when running on Red Hat but no memory leak on Mac OS X
I have a java webobjects app which is showing memory leak problems when running on Red Hat but we had no such problems when it was running on Mac OS X. The JVMs are similar. Mac OS X 10.6.5 using java 1.6.0_22 64 bit from Apple Red Hat EL 5.0 using java 1.6.0_20 64 bit from Sun I configured it to do a heap dump when it ran out of memory, and analysing this with the eclipse memory analyzer tool suggests that the problem is in a part of the code which creates a thread which sends an HTTP Request to a web service. The reason for creating the thread is to implement a timeout on the request because the web service is sometimes not available. Does anyone have any ideas? WOHTTPConnection connection = new WOHTTPConnection(host, port); WORequest request = new WORequest(strMethod, strQuery, strHttpVersion, nsdHeader, content, null); WebServiceRequester theRequester = new WebServiceRequester(connection, request); Thread requestThread = new Thread(theRequester); requestThread.start(); try { requestThread.join(intTimeoutSend); //timeout in milliseconds = 10000 if ( requestThread.isAlive() ) { requestThread.interrupt(); } } catch(InterruptedException e) { } requestThread = null; if(!theRequester.getTfSent()) { return null; } WOResponse response = connection.readResponse(); ... class WebServiceRequester implements Runnable { private WORequest theRequest; private WOHTTPConnection theConnection; private boolean tfSent = false; public WebServiceRequester(WOHTTPConnection c, WORequest r) { theConnection = c; theRequest = r; } public void run() { tfSent = theConnection.sendRequest(theRequest); } public boolean getTfSent() { return tfSent; } } EDIT: leaked class names as reported by eclipse memory analyzer tool: 1,296 instances of "java.lang.Thread", loaded by "<system class loader>" occupy 111,947,632 (43.21%) bytes. 1,292 instances of "er.extensions.eof.ERXEC", loaded by "java.net.URLClassLoader @ 0x2aaab375b7c0" occupy 37,478,352 (14.46%) bytes. 1,280 instances of "er.extensions.appserver.ERXRequest", loaded by "java.net.URLClassLoader @ 0x2aaab375b7c0" occupy 27,297,992 (10.54%) bytes.
Java memory leak when running on Red Hat but no memory leak on Mac OS X I have a java webobjects app which is showing memory leak problems when running on Red Hat but we had no such problems when it was running on Mac OS X. The JVMs are similar. Mac OS X 10.6.5 using java 1.6.0_22 64 bit from Apple Red Hat EL 5.0 using java 1.6.0_20 64 bit from Sun I configured it to do a heap dump when it ran out of memory, and analysing this with the eclipse memory analyzer tool suggests that the problem is in a part of the code which creates a thread which sends an HTTP Request to a web service. The reason for creating the thread is to implement a timeout on the request because the web service is sometimes not available. Does anyone have any ideas? WOHTTPConnection connection = new WOHTTPConnection(host, port); WORequest request = new WORequest(strMethod, strQuery, strHttpVersion, nsdHeader, content, null); WebServiceRequester theRequester = new WebServiceRequester(connection, request); Thread requestThread = new Thread(theRequester); requestThread.start(); try { requestThread.join(intTimeoutSend); //timeout in milliseconds = 10000 if ( requestThread.isAlive() ) { requestThread.interrupt(); } } catch(InterruptedException e) { } requestThread = null; if(!theRequester.getTfSent()) { return null; } WOResponse response = connection.readResponse(); ... class WebServiceRequester implements Runnable { private WORequest theRequest; private WOHTTPConnection theConnection; private boolean tfSent = false; public WebServiceRequester(WOHTTPConnection c, WORequest r) { theConnection = c; theRequest = r; } public void run() { tfSent = theConnection.sendRequest(theRequest); } public boolean getTfSent() { return tfSent; } } EDIT: leaked class names as reported by eclipse memory analyzer tool: 1,296 instances of "java.lang.Thread", loaded by "<system class loader>" occupy 111,947,632 (43.21%) bytes. 1,292 instances of "er.extensions.eof.ERXEC", loaded by "java.net.URLClassLoader @ 0x2aaab375b7c0" occupy 37,478,352 (14.46%) bytes. 1,280 instances of "er.extensions.appserver.ERXRequest", loaded by "java.net.URLClassLoader @ 0x2aaab375b7c0" occupy 27,297,992 (10.54%) bytes.
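One shape of fix worth considering: Thread.interrupt() does not unblock a thread sitting in socket I/O, so every timed-out call here can leave a live thread (and the WORequest/ERXRequest objects it holds) pinned forever, which fits the 1,296 java.lang.Thread instances in the dump; the OS difference would then just be network timing, not the JVM. A sketch that bounds the request with a reused pool instead of a throwaway thread, written against the question's WOHTTPConnection usage (it is a fragment, not a drop-in replacement, and assumes connection and request can be made final):

import java.util.concurrent.*;

// Shared, bounded pool instead of one unjoinable Thread per request.
private static final ExecutorService POOL = Executors.newFixedThreadPool(8);

Future<Boolean> sent = POOL.submit(new Callable<Boolean>() {
    public Boolean call() { return connection.sendRequest(request); }
});
try {
    if (!sent.get(intTimeoutSend, TimeUnit.MILLISECONDS)) {
        return null;                 // request was not sent
    }
} catch (TimeoutException e) {
    sent.cancel(true);               // best effort; a socket-level read timeout is still needed
    return null;
} catch (Exception e) {
    return null;
}
WOResponse response = connection.readResponse();

If WOHTTPConnection exposes a receive timeout, setting it is the real guarantee that blocked threads eventually die and release their requests.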
java, macos, memory-leaks, redhat, webobjects
4
930
3
https://stackoverflow.com/questions/4320895/java-memory-leak-when-running-on-red-hat-but-no-memory-leak-on-mac-os-x
79,623,234
std::atomic::is_lock_free() shows true but pthread_mutex_lock() called
I have an atomic variable which contains a 16-bytes member variable, and I hope the load/store operation on it will be lock-free, because it could be achieved by cmpxchg16b . here I have sample code. #include <atomic> #include <iostream> int pthread_mutex_lock(pthread_mutex_t *mutex) { std::cout << "in pthread_mutex_lock" << std::endl; return 0; } int main() { std::atomic<__int128> var; std::cout << "is_lock_free: " << var.is_lock_free() << std::endl; var.load(); } When compiled by g++ test.cpp -latomic , the result is is_lock_free: 1 The instruction executed in var.load() is 0x7ffff7bd6740 push %rbx 0x7ffff7bd6741 xor %ecx,%ecx 0x7ffff7bd6743 mov %rcx,%rbx 0x7ffff7bd6746 sub $0x20,%rsp 0x7ffff7bd674a movq $0x0,0x8(%rsp) 0x7ffff7bd6753 mov 0x8(%rsp),%rdx 0x7ffff7bd6758 mov %fs:0x28,%rax 0x7ffff7bd6761 mov %rax,0x18(%rsp) 0x7ffff7bd6766 xor %eax,%eax 0x7ffff7bd6768 movq $0x0,(%rsp) 0x7ffff7bd6770 lock cmpxchg16b (%rdi) 0x7ffff7bd6775 je 0x7ffff7bd6780 0x7ffff7bd6777 mov %rax,(%rsp) 0x7ffff7bd677b mov %rdx,0x8(%rsp) 0x7ffff7bd6780 mov 0x18(%rsp),%rsi 0x7ffff7bd6785 xor %fs:0x28,%rsi 0x7ffff7bd678e mov (%rsp),%rax 0x7ffff7bd6792 mov 0x8(%rsp),%rdx 0x7ffff7bd6797 jne 0x7ffff7bd679f 0x7ffff7bd6799 add $0x20,%rsp 0x7ffff7bd679d pop %rbx 0x7ffff7bd679e retq and is_lock_free is executed as following cmp $0x10,%rdi ja 0x7ffff7bd5908 <__atomic_is_lock_free+136> lea 0x17d3(%rip),%rax # 0x7ffff7bd7064 movslq (%rax,%rdi,4),%rdx add %rdx,%rax jmpq *%rax nopw 0x0(%rax,%rax,1) test $0x3,%sil je 0x7ffff7bd58be <__atomic_is_lock_free+62> and $0x7,%esi add %rsi,%rdi cmp $0x8,%rdi setbe %al retq nopl 0x0(%rax) test $0x1,%sil jne 0x7ffff7bd58c8 <__atomic_is_lock_free+72> mov $0x1,%eax retq nopl 0x0(%rax) mov %rsi,%rdx mov $0x1,%eax and $0x3,%edx add %rdi,%rdx cmp $0x4,%rdx ja 0x7ffff7bd58a6 <__atomic_is_lock_free+38> repz retq xchg %ax,%ax and $0x7,%esi sete %al retq nopw 0x0(%rax,%rax,1) xor %eax,%eax and $0xf,%esi jne 0x7ffff7bd58dc <__atomic_is_lock_free+92> mov 0x2047a3(%rip),%eax # 0x7ffff7dda0a0 shr $0xd,%eax and $0x1,%eax retq nopl 0x0(%rax) xor %eax,%eax retq But when compiled by g++ test.cpp -latomic -Wl,-z,now , the result is is_lock_free: 1 in pthread_mutex_lock Now the instruction executed is 0x7ffff7bd6050 push %rbx 0x7ffff7bd6051 mov %rdi,%rbx 0x7ffff7bd6054 sub $0x10,%rsp 0x7ffff7bd6058 callq 0x7ffff7bd5910 0x7ffff7bd605d mov (%rbx),%rax 0x7ffff7bd6060 mov 0x8(%rbx),%rdx 0x7ffff7bd6064 mov %rbx,%rdi 0x7ffff7bd6067 mov %rax,(%rsp) 0x7ffff7bd606b mov %rdx,0x8(%rsp) 0x7ffff7bd6070 callq 0x7ffff7bd5930 0x7ffff7bd6075 mov (%rsp),%rax 0x7ffff7bd6079 mov 0x8(%rsp),%rdx 0x7ffff7bd607e add $0x10,%rsp 0x7ffff7bd6082 pop %rbx With gdb step in 0x7ffff7bd5910 , it calls pthread_mutex_lock , it shows load is implemented by lock. Why atomic has different behaviour with its output? And How does -Wl,-z,now cause it? How could I ensure 16-bytes load/store is lock-free? My enviornment is [test@15bf6105d708 test]$> gcc --version gcc (GCC) 8.3.1 20190311 (Red Hat 8.3.1-3) Copyright (C) 2018 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
[test@15bf6105d708 test]$> uname -a Linux 15bf6105d708 5.4.119-19-0009.11 #1 SMP Wed Oct 5 18:41:07 CST 2022 x86_64 x86_64 x86_64 GNU/Linux [test@15bf6105d708 test]$> lscpu Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 16 On-line CPU(s) list: 0-15 Thread(s) per core: 2 Core(s) per socket: 8 Socket(s): 1 NUMA node(s): 1 Vendor ID: AuthenticAMD CPU family: 23 Model: 49 Model name: AMD EPYC 7K62 48-Core Processor Stepping: 0 CPU MHz: 2595.124 BogoMIPS: 5190.24 Hypervisor vendor: KVM Virtualization type: full L1d cache: 32K L1i cache: 32K L2 cache: 4096K L3 cache: 16384K NUMA node0 CPU(s): 0-15 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 arat
std::atomic::is_lock_free() shows true but pthread_mutex_lock() called I have an atomic variable which contains a 16-bytes member variable, and I hope the load/store operation on it will be lock-free, because it could be achieved by cmpxchg16b . here I have sample code. #include <atomic> #include <iostream> int pthread_mutex_lock(pthread_mutex_t *mutex) { std::cout << "in pthread_mutex_lock" << std::endl; return 0; } int main() { std::atomic<__int128> var; std::cout << "is_lock_free: " << var.is_lock_free() << std::endl; var.load(); } When compiled by g++ test.cpp -latomic , the result is is_lock_free: 1 The instruction executed in var.load() is 0x7ffff7bd6740 push %rbx 0x7ffff7bd6741 xor %ecx,%ecx 0x7ffff7bd6743 mov %rcx,%rbx 0x7ffff7bd6746 sub $0x20,%rsp 0x7ffff7bd674a movq $0x0,0x8(%rsp) 0x7ffff7bd6753 mov 0x8(%rsp),%rdx 0x7ffff7bd6758 mov %fs:0x28,%rax 0x7ffff7bd6761 mov %rax,0x18(%rsp) 0x7ffff7bd6766 xor %eax,%eax 0x7ffff7bd6768 movq $0x0,(%rsp) 0x7ffff7bd6770 lock cmpxchg16b (%rdi) 0x7ffff7bd6775 je 0x7ffff7bd6780 0x7ffff7bd6777 mov %rax,(%rsp) 0x7ffff7bd677b mov %rdx,0x8(%rsp) 0x7ffff7bd6780 mov 0x18(%rsp),%rsi 0x7ffff7bd6785 xor %fs:0x28,%rsi 0x7ffff7bd678e mov (%rsp),%rax 0x7ffff7bd6792 mov 0x8(%rsp),%rdx 0x7ffff7bd6797 jne 0x7ffff7bd679f 0x7ffff7bd6799 add $0x20,%rsp 0x7ffff7bd679d pop %rbx 0x7ffff7bd679e retq and is_lock_free is executed as following cmp $0x10,%rdi ja 0x7ffff7bd5908 <__atomic_is_lock_free+136> lea 0x17d3(%rip),%rax # 0x7ffff7bd7064 movslq (%rax,%rdi,4),%rdx add %rdx,%rax jmpq *%rax nopw 0x0(%rax,%rax,1) test $0x3,%sil je 0x7ffff7bd58be <__atomic_is_lock_free+62> and $0x7,%esi add %rsi,%rdi cmp $0x8,%rdi setbe %al retq nopl 0x0(%rax) test $0x1,%sil jne 0x7ffff7bd58c8 <__atomic_is_lock_free+72> mov $0x1,%eax retq nopl 0x0(%rax) mov %rsi,%rdx mov $0x1,%eax and $0x3,%edx add %rdi,%rdx cmp $0x4,%rdx ja 0x7ffff7bd58a6 <__atomic_is_lock_free+38> repz retq xchg %ax,%ax and $0x7,%esi sete %al retq nopw 0x0(%rax,%rax,1) xor %eax,%eax and $0xf,%esi jne 0x7ffff7bd58dc <__atomic_is_lock_free+92> mov 0x2047a3(%rip),%eax # 0x7ffff7dda0a0 shr $0xd,%eax and $0x1,%eax retq nopl 0x0(%rax) xor %eax,%eax retq But when compiled by g++ test.cpp -latomic -Wl,-z,now , the result is is_lock_free: 1 in pthread_mutex_lock Now the instruction executed is 0x7ffff7bd6050 push %rbx 0x7ffff7bd6051 mov %rdi,%rbx 0x7ffff7bd6054 sub $0x10,%rsp 0x7ffff7bd6058 callq 0x7ffff7bd5910 0x7ffff7bd605d mov (%rbx),%rax 0x7ffff7bd6060 mov 0x8(%rbx),%rdx 0x7ffff7bd6064 mov %rbx,%rdi 0x7ffff7bd6067 mov %rax,(%rsp) 0x7ffff7bd606b mov %rdx,0x8(%rsp) 0x7ffff7bd6070 callq 0x7ffff7bd5930 0x7ffff7bd6075 mov (%rsp),%rax 0x7ffff7bd6079 mov 0x8(%rsp),%rdx 0x7ffff7bd607e add $0x10,%rsp 0x7ffff7bd6082 pop %rbx With gdb step in 0x7ffff7bd5910 , it calls pthread_mutex_lock , it shows load is implemented by lock. Why atomic has different behaviour with its output? And How does -Wl,-z,now cause it? How could I ensure 16-bytes load/store is lock-free? My enviornment is [test@15bf6105d708 test]$> gcc --version gcc (GCC) 8.3.1 20190311 (Red Hat 8.3.1-3) Copyright (C) 2018 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
[test@15bf6105d708 test]$> uname -a Linux 15bf6105d708 5.4.119-19-0009.11 #1 SMP Wed Oct 5 18:41:07 CST 2022 x86_64 x86_64 x86_64 GNU/Linux [test@15bf6105d708 test]$> lscpu Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 16 On-line CPU(s) list: 0-15 Thread(s) per core: 2 Core(s) per socket: 8 Socket(s): 1 NUMA node(s): 1 Vendor ID: AuthenticAMD CPU family: 23 Model: 49 Model name: AMD EPYC 7K62 48-Core Processor Stepping: 0 CPU MHz: 2595.124 BogoMIPS: 5190.24 Hypervisor vendor: KVM Virtualization type: full L1d cache: 32K L1i cache: 32K L2 cache: 4096K L3 cache: 16384K NUMA node0 CPU(s): 0-15 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 arat
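Some context that may help interpret this: GCC does not inline 16-byte atomic loads for std::atomic<__int128>; it always calls into libatomic, and libatomic selects the cmpxchg16b path at runtime through an IFUNC resolver. With -Wl,-z,now all bindings are resolved eagerly at startup, and on this toolchain that evidently changes which implementation the resolver picks, hence the mutex fallback even though is_lock_free() (answered by a separate capability check) still reports 1. Also note that defining your own pthread_mutex_lock, as the test does, interposes on libatomic's internal fallback mutex, which is the only reason the message is visible at all; in a real program that interposition would break the locking, so it should only be used as a probe. A small check that at least makes the compile-time situation explicit (needs -std=c++17 for is_always_lock_free):

// g++ -std=c++17 test.cpp -latomic
#include <atomic>
#include <iostream>

int main() {
    std::atomic<__int128> var{0};
    // false at compile time: the compiler always defers 16-byte ops to libatomic
    std::cout << "always lock-free: "
              << std::atomic<__int128>::is_always_lock_free << '\n';
    // runtime answer from libatomic's CPU dispatch (cmpxchg16b present or not)
    std::cout << "lock-free now:    " << var.is_lock_free() << '\n';
}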
c++, gcc, atomic, redhat, stdatomic
4
211
1
https://stackoverflow.com/questions/79623234/stdatomicis-lock-free-shows-true-but-pthread-mutex-lock-called
54,582,864
missing Qt libs with wkhtmltopdf in docker on debian buster
I have a docker container running Debian Buster and I want to run wkhtmltopdf in it. I have 2 host machines, both identical, both running the same container built with the same Dockerfile. Both are running the same version of docker. On one machine wkhtmltopdf works fine, but on the other I get: wkhtmltopdf: error while loading shared libraries: libQt5Core.so.5: cannot open shared object file: No such file or directory On the machine where it works: # ldd /usr/bin/wkhtmltopdf | grep libQt5Core libQt5Core.so.5 => /lib/x86_64-linux-gnu/libQt5Core.so.5 (0x00007f8da6f2f000) # ls -l /lib/x86_64-linux-gnu/libQt5Core.so.5* lrwxrwxrwx. 1 root root 19 Dec 4 2017 /lib/x86_64-linux-gnu/libQt5Core.so.5 -> libQt5Core.so.5.9.2 lrwxrwxrwx. 1 root root 19 Dec 4 2017 /lib/x86_64-linux-gnu/libQt5Core.so.5.9 -> libQt5Core.so.5.9.2 -rw-r--r--. 1 root root 5138560 Dec 4 2017 /lib/x86_64-linux-gnu/libQt5Core.so.5.9.2 And on the machine where it doesn't work: # ldd /usr/bin/wkhtmltopdf | grep libQt5Core libQt5Core.so.5 => not found # ls -l /lib/x86_64-linux-gnu/libQt5Core.so.5* lrwxrwxrwx. 1 root root 20 Nov 18 16:36 /lib/x86_64-linux-gnu/libQt5Core.so.5 -> libQt5Core.so.5.11.2 lrwxrwxrwx. 1 root root 20 Nov 18 16:36 /lib/x86_64-linux-gnu/libQt5Core.so.5.11 -> libQt5Core.so.5.11.2 -rw-r--r--. 1 root root 5196040 Nov 18 16:36 /lib/x86_64-linux-gnu/libQt5Core.so.5.11.2 Now I do not explicitly install Qt - I assume it gets installed as a dependency of wkhtmltopdf. Here are the versions of everything, the same on both machines: Inside container: # cat /etc/debian_version buster/sid # wkhtmltopdf -V wkhtmltopdf 0.12.4 Outside container: # cat /etc/redhat-release Red Hat Enterprise Linux Server release 7.6 (Maipo) # docker -v Docker version 17.06.2-ee-18, build c78b5e1 Does anyone have any clue what is going on and how I can get it working? Why are the versions of libQt5Core different? Why isn't it found on the non-working machine? I did try copying and linking libQt5Core.so.5.9 from the working machine to the non-working one, but that did not fix it. This is really vexing me.
missing Qt libs with wkhtmltopdf in docker on debian buster I have a docker container running Debian Buster and I want to run wkhtmltopdf in it. I have 2 host machines, both identical, both running the same container built with the same Dockerfile. Both are running the same version of docker. On one machine wkhtmltopdf works fine, but on the other I get: wkhtmltopdf: error while loading shared libraries: libQt5Core.so.5: cannot open shared object file: No such file or directory On the machine where it works: # ldd /usr/bin/wkhtmltopdf | grep libQt5Core libQt5Core.so.5 => /lib/x86_64-linux-gnu/libQt5Core.so.5 (0x00007f8da6f2f000) # ls -l /lib/x86_64-linux-gnu/libQt5Core.so.5* lrwxrwxrwx. 1 root root 19 Dec 4 2017 /lib/x86_64-linux-gnu/libQt5Core.so.5 -> libQt5Core.so.5.9.2 lrwxrwxrwx. 1 root root 19 Dec 4 2017 /lib/x86_64-linux-gnu/libQt5Core.so.5.9 -> libQt5Core.so.5.9.2 -rw-r--r--. 1 root root 5138560 Dec 4 2017 /lib/x86_64-linux-gnu/libQt5Core.so.5.9.2 And on the machine where it doesn't work: # ldd /usr/bin/wkhtmltopdf | grep libQt5Core libQt5Core.so.5 => not found # ls -l /lib/x86_64-linux-gnu/libQt5Core.so.5* lrwxrwxrwx. 1 root root 20 Nov 18 16:36 /lib/x86_64-linux-gnu/libQt5Core.so.5 -> libQt5Core.so.5.11.2 lrwxrwxrwx. 1 root root 20 Nov 18 16:36 /lib/x86_64-linux-gnu/libQt5Core.so.5.11 -> libQt5Core.so.5.11.2 -rw-r--r--. 1 root root 5196040 Nov 18 16:36 /lib/x86_64-linux-gnu/libQt5Core.so.5.11.2 Now I do not explicitly install Qt - I assume it gets installed as a dependency of wkhtmltopdf. Here are the versions of everything, the same on both machines: Inside container: # cat /etc/debian_version buster/sid # wkhtmltopdf -V wkhtmltopdf 0.12.4 Outside container: # cat /etc/redhat-release Red Hat Enterprise Linux Server release 7.6 (Maipo) # docker -v Docker version 17.06.2-ee-18, build c78b5e1 Does anyone have any clue what is going on and how I can get it working? Why are the versions of libQt5Core different? Why isn't it found on the non-working machine? I did try copying and linking libQt5Core.so.5.9 from the working machine to the non-working one, but that did not fix it. This is really vexing me.
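Since Buster was a moving target at the time ("buster/sid"), two images built from the same Dockerfile at different times can easily contain different Qt versions, which matches one host having 5.9.2 (Dec 2017) and the other 5.11.2 (Nov 2018). A sketch for verifying whether the hosts really run the same image and for rebuilding cleanly (the image name is a placeholder):

# Are the two hosts running byte-identical images? Compare digests across both:
docker images --digests | grep myimage

# What does each container actually contain?
docker run --rm myimage dpkg -l libqt5core5a
docker run --rm myimage ldd /usr/bin/wkhtmltopdf | grep -i qt5

# Rebuild discarding stale cached layers, then re-test:
docker build --no-cache -t myimage .

If the digests differ, building once and pulling the same image on both hosts from a registry removes the build-time drift entirely.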
docker, qt5, redhat, wkhtmltopdf, debian-buster
4
2,128
1
https://stackoverflow.com/questions/54582864/missing-qt-libs-with-wkhtmltopdf-in-docker-on-debian-buster
51,705,764
Is there an official, "correct" way to define JAVA_HOME on RedHat systems with OpenJDK?
As do many others, RedHat systems and their derivatives, such as CentOS and Fedora, use the alternatives mechanism to support the use of different major versions of OpenJDK. This results in there being many candidates for the value of the JAVA_HOME environment variable, such as: /etc/alternatives/jre /etc/alternatives/java_sdk /etc/alternatives/java_sdk_1.x.0 /etc/alternatives/java_sdk_openjdk /usr/lib/jvm/java /usr/lib/jvm/java-1.x.0 Is any of these to be considered the official, standard choice? Note that I'm aware of the difference between choices that do or do not include the Java version in their name. I also consciously omitted names that include minor version information, as they would need to be modified after each update. By the way, all of the above are symbolic links. The actual installation directories are found in /usr/lib/jvm and include the specific version in their name, e.g. java-1.8.0-openjdk-1.8.0.131-0.b11.el6_9.x86_64 .
Is there an official, "correct" way to define JAVA_HOME on RedHat systems with OpenJDK? As do many others, RedHat systems and their derivatives, such as CentOS and Fedora, use the alternatives mechanism to support the use of different major versions of OpenJDK. This results in there being many candidates for the value of the JAVA_HOME environment variable, such as: /etc/alternatives/jre /etc/alternatives/java_sdk /etc/alternatives/java_sdk_1.x.0 /etc/alternatives/java_sdk_openjdk /usr/lib/jvm/java /usr/lib/jvm/java-1.x.0 Is any of these to be considered the official, standard choice? Note that I'm aware of the difference between choices that do or do not include the Java version in their name. I also consciously omitted names that include minor version information, as they would need to be modified after each update. By the way, all of the above are symbolic links. The actual installation directories are found in /usr/lib/jvm and include the specific version in their name, e.g. java-1.8.0-openjdk-1.8.0.131-0.b11.el6_9.x86_64 .
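There is no single blessed value, but a convention that plays well with the alternatives mechanism is to point JAVA_HOME at the version-independent link and let the package manager repoint it on updates. A sketch of a profile.d snippet:

# /etc/profile.d/java.sh - survives OpenJDK updates because the alternatives
# symlink is maintained by the packages, not by this file.
export JAVA_HOME=/etc/alternatives/java_sdk   # use /etc/alternatives/jre on a JRE-only host
export PATH="$JAVA_HOME/bin:$PATH"

The java_sdk link points at a JDK (javac included); the jre link is the right target when only a runtime is installed.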
java, centos, redhat, fedora, java-home
4
395
1
https://stackoverflow.com/questions/51705764/is-there-an-official-correct-way-to-define-java-home-on-redhat-systems-with-o
48,664,440
Openshift route is not accepting traffic yet because it has not been admitted by a router
We have deployed a container from the Apache HTTP Server (httpd) 2.4 template. After a successful deployment, we are facing an issue assigning a route. Error: The route is not accepting traffic yet because it has not been admitted by a router. Version OpenShift Master: v3.7.0+7ed6862 Kubernetes Master: v1.7.6+a08f5eeb62
Openshift route is not accepting traffic yet because it has not been admitted by a router We have deployed a container from Apache HTTP Server (httpd) 2.4 template. After deployment successful, we are facing issue for assign route. Error: The route is not accepting traffic yet because it has not been admitted by a router. Version OpenShift Master: v3.7.0+7ed6862 Kubernetes Master: v1.7.6+a08f5eeb62
openshift, redhat, openshift-origin
4
4,067
2
https://stackoverflow.com/questions/48664440/openshift-route-is-not-accepting-traffic-yet-because-it-has-not-been-admitted-by
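A hedged diagnostic sketch (shell; route and project names are placeholders): the admission status lives on the route object itself, and on OpenShift 3.x the router pod normally runs in the default project, so checking both usually localizes the problem:

oc describe route myroute -n myproject   # look for the Admitted condition and its reason
oc get pods -n default | grep router     # confirm a router is actually deployed and running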
45,088,013
How to read the output of gcc -v
If I run gcc -v or g++ -v I get the result below. gcc version 4.4.7 20120313 (Red Hat 4.4.7-16) (GCC) How do I understand this? What is (Red Hat 4.4.7-16) and what is (GCC)? Is it the OS on which this version of gcc was compiled, or is it the generation of the OS with which this version of GCC is compatible?
How to read the output of gcc -v If I run gcc -v or g++ -v I get the result below. gcc version 4.4.7 20120313 (Red Hat 4.4.7-16) (GCC) How do I understand this? What is (Red Hat 4.4.7-16) and what is (GCC)? Is it the OS on which this version of gcc was compiled, or is it the generation of the OS with which this version of GCC is compatible?
gcc, redhat
4
848
2
https://stackoverflow.com/questions/45088013/how-to-read-the-output-of-gcc-v
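In short, "(Red Hat 4.4.7-16)" is the name-release of the distribution package that built this compiler and "(GCC)" is just upstream's banner string; neither names an OS generation. A small sketch to see the two pieces separately (shell):

gcc -dumpversion   # upstream GCC version only
rpm -q gcc         # the Red Hat package (name-version-release) the compiler came from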
43,982,681
Missing separate debuginfos, use: debuginfo-install glibc-2.17-157.el7_3.1.x86_64
I know this question is answered already in another thread, however I tried all the solutions given in the other thread including - Searching for the package, trying to install the package, installing yum-utils and debuginfo-install glibc Finally, I even set enabled=1 and gpgcheck=0 in redhat.repo under /etc/yum.repos.d, what else should be done for me to get rid of this error? What I am trying to do is, debug a program(using gdb) with a shared object library. The program and .so file are both compiled on the same server(Redhat Maipo) and I am still seeing this error. I can't step through the code as a result - or are the two unrelated?
Missing separate debuginfos, use: debuginfo-install glibc-2.17-157.el7_3.1.x86_64 I know this question is answered already in another thread, however I tried all the solutions given in the other thread including - Searching for the package, trying to install the package, installing yum-utils and debuginfo-install glibc Finally, I even set enabled=1 and gpgcheck=0 in redhat.repo under /etc/yum.repos.d, what else should be done for me to get rid of this error? What I am trying to do is, debug a program(using gdb) with a shared object library. The program and .so file are both compiled on the same server(Redhat Maipo) and I am still seeing this error. I can't step through the code as a result - or are the two unrelated?
debugging, gdb, redhat, yum
4
2,634
1
https://stackoverflow.com/questions/43982681/missing-separate-debuginfos-use-debuginfo-install-glibc-2-17-157-el7-3-1-x86-6
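A hedged sketch (shell): debuginfo-install can only find the packages once the matching debug repo is enabled; the repo id below is RHEL 7's standard one and is an assumption about this machine's subscription:

sudo subscription-manager repos --enable rhel-7-server-debug-rpms
sudo debuginfo-install -y glibc-2.17-157.el7_3.1.x86_64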
42,029,093
mysqld: unrecognized service
I'm trying to install a MySQL server on Red Hat Linux. I've downloaded the tar file and unarchived it. Then I ran: rpm -qpl mysql-community-server-5.7.17-1.e16.x86_64.rpm Then I tried running service mysqld start , but I'm getting mysqld: unrecognized service I also tried using the full path to mysqld : service /usr/sbin/mysqld/ start but that shows the same issue. Any idea what is wrong?
mysqld: unrecognized service I'm trying to install a MySQL server on Red Hat Linux. I've downloaded the tar file and unarchived it. Then I ran: rpm -qpl mysql-community-server-5.7.17-1.e16.x86_64.rpm Then I tried running service mysqld start , but I'm getting mysqld: unrecognized service I also tried using the full path to mysqld : service /usr/sbin/mysqld/ start but that shows the same issue. Any idea what is wrong?
mysql, redhat
4
5,022
1
https://stackoverflow.com/questions/42029093/mysqld-unrecognized-service
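One likely gap, sketched under the assumption that the package was never actually installed: rpm -qpl only lists a package's files and installs nothing, so at that point there is no mysqld service to start yet (note the filename also reads e16 where el6 would be expected):

sudo rpm -ivh mysql-community-server-5.7.17-1.el6.x86_64.rpm   # actually install it
sudo service mysqld start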
38,856,659
Number of subdirectories in a directory?
How to find the number of subdirectories in a specified directory in HDFS? When I do hadoop fs -ls /mydir/ , I get a Java heap space error, since the directory is too big, but what I am interested in is the number of subdirectories in that directory. I tried: gsamaras@gwta3000 ~]$ hadoop fs -find /mydir/ -maxdepth 1 -type d -print| wc -l find: Unexpected argument: -maxdepth 0 I know that the directory is not empty, thus 0 is not correct: [gsamaras@gwta3000 ~]$ hadoop fs -du -s -h /mydir 737.5 G /mydir
Number of subdirectories in a directory? How to find the number of subdirectories in a specified directory in HDFS? When I do hadoop fs -ls /mydir/ , I get a Java heap space error, since the directory is too big, but what I am interested in is the number of subdirectories in that directory. I tried: gsamaras@gwta3000 ~]$ hadoop fs -find /mydir/ -maxdepth 1 -type d -print| wc -l find: Unexpected argument: -maxdepth 0 I know that the directory is not empty, thus 0 is not correct: [gsamaras@gwta3000 ~]$ hadoop fs -du -s -h /mydir 737.5 G /mydir
linux, hadoop, apache-spark, hdfs, redhat
4
793
1
https://stackoverflow.com/questions/38856659/number-of-subdirectories-in-a-directory
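A hedged workaround sketch (shell): this Hadoop release's -find evidently lacks -maxdepth, so one option is to give the client JVM a bigger heap and count the directory entries from a plain -ls; the heap size is illustrative:

HADOOP_CLIENT_OPTS="-Xmx4g" hadoop fs -ls /mydir/ | grep '^d' | wc -l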
36,732,116
JavaFX Dialog and Alert appear behind main stage in RedHat
I am making use of JavaFX's built in Alert and Dialog classes which work great in Windows and when running from Eclipse within Windows, but appear behind the parent window when running on my target hardware which is running RedHat 6. I have tried tweaking various things including: primaryStage.initStyle(StageStyle.UNDECORATED); primaryStage.setFullScreen(true); alert.initOwner(primaryStage) and alert.initOwner(primaryStage.getOwner()) alert.initModality(Modality.WINDOW_MODAL) and alert.initModality(Modality.APPLICATION_MODAL) alert.initStyle(StageStyle.***) with *** being all possible styles. The only way I have been able to get the alerts and dialogs to remain on top is by calling alert.initStyle(StageStyle.UTILITY) however this creates a window with a cross button which I do not want. Ideally I would prefer a bordered window without additional buttons, or an undecorated window which I should then be able to style to achieve the bordered look. I have read of similar issues in which using Windows doesn't work but Ubuntu does. I haven't been able to find any open issues or solutions in this case. I am using Java 8 Update 77.
JavaFX Dialog and Alert appear behind main stage in RedHat I am making use of JavaFX's built in Alert and Dialog classes which work great in Windows and when running from Eclipse within Windows, but appear behind the parent window when running on my target hardware which is running RedHat 6. I have tried tweaking various things including: primaryStage.initStyle(StageStyle.UNDECORATED); primaryStage.setFullScreen(true); alert.initOwner(primaryStage) and alert.initOwner(primaryStage.getOwner()) alert.initModality(Modality.WINDOW_MODAL) and alert.initModality(Modality.APPLICATION_MODAL) alert.initStyle(StageStyle.***) with *** being all possible styles. The only way I have been able to get the alerts and dialogs to remain on top is by calling alert.initStyle(StageStyle.UTILITY) however this creates a window with a cross button which I do not want. Ideally I would prefer a bordered window without additional buttons, or an undecorated window which I should then be able to style to achieve the bordered look. I have read of similar issues in which using Windows doesn't work but Ubuntu does. I haven't been able to find any open issues or solutions in this case. I am using Java 8 Update 77.
java, javafx, dialog, alert, redhat
4
647
1
https://stackoverflow.com/questions/36732116/javafx-dialog-and-alert-appear-behind-main-stage-in-redhat
25,601,550
Why does my frame-title-format not work?
My Emacs version: GNU Emacs 24.3.1 (x86_64-redhat-linux-gnu, GTK+ Version 3.10.9) of 2014-05-21 on buildvm-07.phx2.fedoraproject.org I want Emacs's title to display the absolute path of the current file. I wrote the following (from the internet): ;;;Emacs title bar to reflect file name (defun frame-title-string () "Return the file name of current buffer, using ~ if under home directory" (let ((fname (or (buffer-file-name (current-buffer)) (buffer-name)))) ;;let body (when (string-match (getenv "HOME") fname) (setq fname (replace-match "~" t t fname)) ) fname)) ;;; Title = 'system-name File: foo.bar' (setq frame-title-format '("" system-name " File: "(:eval (frame-title-string)))) Before reinstalling FC20 + Emacs, the above worked correctly. Now it only works if I open .emacs and eval frame-title-format manually; why must I eval it manually?
Why does my frame-title-format not work? My Emacs version: GNU Emacs 24.3.1 (x86_64-redhat-linux-gnu, GTK+ Version 3.10.9) of 2014-05-21 on buildvm-07.phx2.fedoraproject.org I want Emacs's title to display the absolute path of the current file. I wrote the following (from the internet): ;;;Emacs title bar to reflect file name (defun frame-title-string () "Return the file name of current buffer, using ~ if under home directory" (let ((fname (or (buffer-file-name (current-buffer)) (buffer-name)))) ;;let body (when (string-match (getenv "HOME") fname) (setq fname (replace-match "~" t t fname)) ) fname)) ;;; Title = 'system-name File: foo.bar' (setq frame-title-format '("" system-name " File: "(:eval (frame-title-string)))) Before reinstalling FC20 + Emacs, the above worked correctly. Now it only works if I open .emacs and eval frame-title-format manually; why must I eval it manually?
emacs, centos, elisp, fedora, redhat
4
788
4
https://stackoverflow.com/questions/25601550/why-my-frame-title-format-does-not-work
57,895,050
How to enable Hot deployment in JBoss using Redhat server connector extension in VSCode
I have made a small Maven-based web application in VSCode and am trying to deploy it on JBoss using the Redhat Server Connector Extension. But hot deployment of the class files does not work on a plainly running JBoss server. Hot deployment does work in debug mode as 'Hot Code Replace' by setting the property 'java.debug.settings.hotCodeReplace' to 'auto' . My inputs are from the links below: [URL] and other SO links like: How do I get Java "hot code replacement" working in JBoss? Hot deploy on JBoss - how do I make JBoss "see" the change? But they didn't help. Can you suggest how this is possible on a plainly running JBoss? (PS: the Auto Build feature in VSCode is already enabled, and it works fine in Eclipse.)
How to enable Hot deployment in JBoss using Redhat server connector extension in VSCode I have made a small Maven-based web application in VSCode and am trying to deploy it on JBoss using the Redhat Server Connector Extension. But hot deployment of the class files does not work on a plainly running JBoss server. Hot deployment does work in debug mode as 'Hot Code Replace' by setting the property 'java.debug.settings.hotCodeReplace' to 'auto' . My inputs are from the links below: [URL] and other SO links like: How do I get Java "hot code replacement" working in JBoss? Hot deploy on JBoss - how do I make JBoss "see" the change? But they didn't help. Can you suggest how this is possible on a plainly running JBoss? (PS: the Auto Build feature in VSCode is already enabled, and it works fine in Eclipse.)
java, visual-studio-code, jboss, redhat
4
8,694
1
https://stackoverflow.com/questions/57895050/how-to-enable-hot-deployment-in-jboss-using-redhat-server-connector-extension-in
51,653,273
What are the exit codes in the RPM binary?
The rpm command returns different exit codes at different times. For example, in the case of a failed dependency, echo $? sometimes gives 1 and sometimes 5. Can someone explain this?
What are the exit codes in the RPM binary? The rpm command returns different exit codes at different times. For example, in the case of a failed dependency, echo $? sometimes gives 1 and sometimes 5. Can someone explain this?
linux, centos, redhat, rpm, suse
4
5,148
1
https://stackoverflow.com/questions/51653273/what-are-the-exit-codes-in-the-rpm-binary
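A hedged note with a small sketch (shell; the package name is a placeholder): rpm guarantees 0 on success, but the nonzero values are not finely standardized across versions, so portable scripting treats any nonzero status as failure rather than decoding individual codes:

rpm -U somepkg.rpm || echo "install failed (status $?)"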
47,837,800
Pam authentication, try first local user and then LDAP
I set up PAM authentication towards Oracle Unified Directory on RH5 using the nslcd daemon. I would like the authentication to first try local users and then, if no user is found, to contact the LDAP server. So I edited the /etc/nsswitch.conf in this way: passwd: files ldap shadow: files ldap group: files ldap But it seems this is not working, since if the LDAP server is down, I'm not able to log in to the server. Am I missing something? EDIT: This is my PAM /etc/pam.d/system-auth (I'm not using sssd, only nslcd). #%PAM-1.0 # This file is auto-generated. auth required pam_env.so auth sufficient pam_unix.so nullok auth sufficient pam_ldap.so use_first_pass ignore_authinfo_unavail auth required pam_deny.so account required pam_unix.so broken_shadow account required pam_ldap.so ignore_unknown_user ignore_authinfo_unavail account required pam_permit.so password requisite pam_cracklib.so try_first_pass retry=3 password sufficient pam_unix.so md5 shadow nullok try_first_pass use_authtok password required pam_ldap.so try_first_pass ignore_unknown_user ignore_authinfo_unavail password required pam_deny.so session optional pam_keyinit.so revoke session required pam_limits.so session optional pam_mkhomedir.so skel=/etc/skel umask=077 session [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid session required pam_unix.so session optional pam_ldap.so ignore_authinfo_unavail I set system-auth to debug and this is the result: Dec 20 17:46:38 <hostname> nscd: nss_ldap: failed to bind to LDAP server ldap://<dns_1>:3389: Can't contact LDAP server Dec 20 17:46:38 <hostname> nscd: nss_ldap: failed to bind to LDAP server ldap://<dns_2>:3389: Can't contact LDAP server Dec 20 17:46:38 <hostname> nscd: nss_ldap: failed to bind to LDAP server ldap://<ip_1>:3389: Can't contact LDAP server Dec 20 17:46:38 <hostname> nscd: nss_ldap: failed to bind to LDAP server ldap://<ip_2>:3389: Can't contact LDAP server
Pam authentication, try first local user and then LDAP I set up PAM authentication towards Oracle Unified Directory on RH5 using the nslcd daemon. I would like the authentication to first try local users and then, if no user is found, to contact the LDAP server. So I edited the /etc/nsswitch.conf in this way: passwd: files ldap shadow: files ldap group: files ldap But it seems this is not working, since if the LDAP server is down, I'm not able to log in to the server. Am I missing something? EDIT: This is my PAM /etc/pam.d/system-auth (I'm not using sssd, only nslcd). #%PAM-1.0 # This file is auto-generated. auth required pam_env.so auth sufficient pam_unix.so nullok auth sufficient pam_ldap.so use_first_pass ignore_authinfo_unavail auth required pam_deny.so account required pam_unix.so broken_shadow account required pam_ldap.so ignore_unknown_user ignore_authinfo_unavail account required pam_permit.so password requisite pam_cracklib.so try_first_pass retry=3 password sufficient pam_unix.so md5 shadow nullok try_first_pass use_authtok password required pam_ldap.so try_first_pass ignore_unknown_user ignore_authinfo_unavail password required pam_deny.so session optional pam_keyinit.so revoke session required pam_limits.so session optional pam_mkhomedir.so skel=/etc/skel umask=077 session [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid session required pam_unix.so session optional pam_ldap.so ignore_authinfo_unavail I set system-auth to debug and this is the result: Dec 20 17:46:38 <hostname> nscd: nss_ldap: failed to bind to LDAP server ldap://<dns_1>:3389: Can't contact LDAP server Dec 20 17:46:38 <hostname> nscd: nss_ldap: failed to bind to LDAP server ldap://<dns_2>:3389: Can't contact LDAP server Dec 20 17:46:38 <hostname> nscd: nss_ldap: failed to bind to LDAP server ldap://<ip_1>:3389: Can't contact LDAP server Dec 20 17:46:38 <hostname> nscd: nss_ldap: failed to bind to LDAP server ldap://<ip_2>:3389: Can't contact LDAP server
unix, authentication, ldap, redhat, pam
4
10,068
3
https://stackoverflow.com/questions/47837800/pam-authentication-try-first-local-user-and-then-ldap
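A hedged diagnostic sketch (shell): the log shows nscd/nss_ldap stalling on NSS lookups, not on PAM itself, so the first check is whether a purely local account still resolves while LDAP is down; nss_ldap's bind_policy soft and bind_timelimit options (in /etc/ldap.conf) are commonly used to make a dead server fail fast instead of blocking logins:

getent passwd localuser   # 'localuser' is a hypothetical /etc/passwd account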
37,733,756
How to install expect and tcl on linux RHEL server 6.5
I am new to Linux and I have a few expect scripts to execute. I read a few blogs on how to install expect and tcl. The commands I am trying are sudo yum install expect sudo yum install tcl I am getting No package expect available No package tcl available It seems RHEL should ship tcl and expect prebuilt, but this is not the case in my version of Linux. How should I proceed from here? Help will be highly appreciated. Thanks :)
How to install expect and tcl on linux RHEL server 6.5 I am new to Linux and I have a few expect scripts to execute. I read a few blogs on how to install expect and tcl. The commands I am trying are sudo yum install expect sudo yum install tcl I am getting No package expect available No package tcl available It seems RHEL should ship tcl and expect prebuilt, but this is not the case in my version of Linux. How should I proceed from here? Help will be highly appreciated. Thanks :)
tcl, redhat, expect
4
52,996
6
https://stackoverflow.com/questions/37733756/how-to-install-expect-and-tcl-on-linux-rhel-server-6-5
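A hedged sketch (shell): both packages ship in the base RHEL 6 channel, so "No package available" usually means no repo is enabled; the repo id below is the standard RHEL 6 one and is an assumption about this machine's subscription:

yum repolist                                            # is any repo enabled at all?
sudo subscription-manager repos --enable rhel-6-server-rpms
sudo yum install tcl expect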
32,497,513
How to properly install python-devel on RedHat x86_64?
When installing python-devel with yum install python-devel.x86_64 I got this error: Resolving Dependencies --> Running transaction check ---> Package python-devel.x86_64 0:2.6.6-36.el6 will be installed --> Processing Dependency: python(x86-64) = 2.6.6-36.el6 for package: python-devel-2.6.6-36.el6.x86_64 --> Finished Dependency Resolution Error: Package: python-devel-2.6.6-36.el6.x86_64 (tmp1) Requires: python(x86-64) = 2.6.6-36.el6 Installed: python-2.6.6-52.el6.x86_64 (@rhel-x86_64-server-6) python(x86-64) = 2.6.6-52.el6 Available: python-2.6.6-36.el6.x86_64 (tmp1) python(x86-64) = 2.6.6-36.el6 You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest Has anybody an idea how to get python-devel installed?
How to properly install python-devel on RedHat x86_64? When installing python-devel with yum install python-devel.x86_64 I got this error: Resolving Dependencies --> Running transaction check ---> Package python-devel.x86_64 0:2.6.6-36.el6 will be installed --> Processing Dependency: python(x86-64) = 2.6.6-36.el6 for package: python-devel-2.6.6-36.el6.x86_64 --> Finished Dependency Resolution Error: Package: python-devel-2.6.6-36.el6.x86_64 (tmp1) Requires: python(x86-64) = 2.6.6-36.el6 Installed: python-2.6.6-52.el6.x86_64 (@rhel-x86_64-server-6) python(x86-64) = 2.6.6-52.el6 Available: python-2.6.6-36.el6.x86_64 (tmp1) python(x86-64) = 2.6.6-36.el6 You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest Has anybody an idea how to get python-devel installed?
python, redhat
4
16,150
3
https://stackoverflow.com/questions/32497513/how-to-properly-install-python-devel-on-redhat-x86-64
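A sketch of one way out, assuming the repo carrying the newer build is reachable: python-devel must match the installed interpreter's exact version-release (here 2.6.6-52.el6, per the "Installed" line in the error), so list every available build and pick the matching one:

yum --showduplicates list python-devel
sudo yum install python-devel-2.6.6-52.el6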
71,252,759
Red Hat >= What is the difference between dnf module install nginx and dnf install nginx
Good day, if I type dnf module list nginx, I get a listing of more recent nginx versions than if I type dnf list --showduplicates nginx. Can you tell me the difference between these two types of installation? It is not clear to me.
Red Hat >= What is the difference between dnf module install nginx and dnf install nginx Good day, if I type dnf module list nginx, I get a listing of more recent nginx versions than if I type dnf list --showduplicates nginx. Can you tell me the difference between these two types of installation? It is not clear to me.
redhat
4
318
0
https://stackoverflow.com/questions/71252759/red-hat-what-is-the-difference-between-dnf-module-install-nginx-and-dnf-insta
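A short sketch of the distinction (shell; the stream name is illustrative): dnf install resolves from the ordinary base repos, while dnf module works with AppStream module streams, which usually carry newer builds and must be enabled before installation:

dnf list --showduplicates nginx    # plain base-repo builds
dnf module list nginx              # modular (AppStream) streams
sudo dnf module enable nginx:1.20  # pick a stream (name assumed)
sudo dnf install nginx             # now installs from the enabled stream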
65,671,319
Unable to install libgit2 library on AMI ec2 instance
I am trying to install devtools in an R AMI on an Amazon EC2 instance. However, before devtools, I need to install some variant of the libgit2 library as a dependency. Given that the Amazon distro is Red Hat-based, I've tried to install the libgit2-devel variant with the command sudo yum install libgit2-devel ; every time I get an error message that the package is not available. I have no idea why this is happening.
Unable to install libgit2 library on AMI ec2 instance I am trying to install devtools in an R AMI on an Amazon EC2 instance. However, before devtools, I need to install some variant of the libgit2 library as a dependency. Given that the Amazon distro is Red Hat-based, I've tried to install the libgit2-devel variant with the command sudo yum install libgit2-devel ; every time I get an error message that the package is not available. I have no idea why this is happening.
r, linux, amazon-ec2, redhat, libgit2
4
481
0
https://stackoverflow.com/questions/65671319/unable-to-install-libgit2-library-on-ami-ec2-instance
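A hedged sketch assuming the instance runs Amazon Linux 2 (which is Red Hat-like but has its own repos): libgit2-devel typically lives in EPEL rather than the base channels, so EPEL needs enabling first:

sudo amazon-linux-extras install epel
sudo yum install libgit2-devel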
61,584,873
Import and Run Tensorflow 2 on linux machine that does not support AVX instructions
I am on Red Hat Enterprise Linux Server release 7.7 and have installed TensorFlow 2.1.0 on this machine. Whenever I try to import TensorFlow as follows: import tensorflow as tf It gives the following error: Illegal instruction (core dumped) I have done some research and figured that it happens because my machine does not support AVX. I found a link that solves a similar issue on a Windows machine. I was wondering if there is any way to solve it on a Linux machine? I used more /proc/cpuinfo | grep flags to get the flags supported by my CPU. The following are the flags supported on my machine: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl aperfmperf eagerfpu pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 lahf_lm tpr_shadow vnmi flexpriority dtherm I know that the problem will be gone if I use tensorflow version 1.5, but at this point I cannot downgrade it to 1.5. Is there any way to import and run tensorflow 2.1.0 on a machine that does not support AVX instructions?
Import and Run Tensorflow 2 on linux machine that does not support AVX instructions I am on Red Hat Enterprise Linux Server release 7.7 and have installed TensorFlow 2.1.0 on this machine. Whenever I try to import TensorFlow as follows: import tensorflow as tf It gives the following error: Illegal instruction (core dumped) I have done some research and figured that it happens because my machine does not support AVX. I found a link that solves a similar issue on a Windows machine. I was wondering if there is any way to solve it on a Linux machine? I used more /proc/cpuinfo | grep flags to get the flags supported by my CPU. The following are the flags supported on my machine: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl aperfmperf eagerfpu pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 lahf_lm tpr_shadow vnmi flexpriority dtherm I know that the problem will be gone if I use tensorflow version 1.5, but at this point I cannot downgrade it to 1.5. Is there any way to import and run tensorflow 2.1.0 on a machine that does not support AVX instructions?
redhat, avx, tensorflow2.x
4
261
0
https://stackoverflow.com/questions/61584873/import-and-run-tensorflow-2-on-linux-machine-that-does-not-support-avx-instructi
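A small confirmation sketch (shell); the flags list above indeed has no avx entry, which this check makes explicit. Since the stock TensorFlow wheels from 1.6 onward are compiled with AVX, the realistic options are a community-built non-AVX wheel or a build from source, both outside what the stock pip package offers:

grep -o 'avx[^ ]*' /proc/cpuinfo | sort -u   # empty output = no AVX support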
59,680,113
How to use Python or pyodbc module with proxy settings?
I am working on a project that is deployed on an on-premise VM. The on-prem VM is using a proxy setting to connect to the Internet. Now I have a Python script that connects to Azure SQL Server using the pyodbc module. I am able to ping and telnet various sources to check my internet connectivity, but I'm unable to reach the Azure SQL Server. I have tried various approaches like enabling and disabling proxy settings over both HTTP and HTTPS, enabling port 1433 on the firewall, and working with my office network team to resolve this. How can I resolve this issue? Is there any way to use a proxy with either Python or pyodbc?
How to use Python or pyodbc module with proxy settings? I am working on a project that is deployed on an on-premise VM. The on-prem VM is using a proxy setting to connect to the Internet. Now I have a Python script that connects to Azure SQL Server using the pyodbc module. I am able to ping and telnet various sources to check my internet connectivity, but I'm unable to reach the Azure SQL Server. I have tried various approaches like enabling and disabling proxy settings over both HTTP and HTTPS, enabling port 1433 on the firewall, and working with my office network team to resolve this. How can I resolve this issue? Is there any way to use a proxy with either Python or pyodbc?
python, proxy, centos, redhat, pyodbc
4
1,931
0
https://stackoverflow.com/questions/59680113/how-to-use-python-or-pyodbc-module-with-proxy-settings
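A hedged note with a test sketch (shell; the hostname is a placeholder): pyodbc speaks TDS over a raw TCP connection to port 1433, so HTTP/HTTPS proxy settings never apply to it; the VM needs a direct or firewall-permitted route to the server, which this verifies:

nc -vz myserver.database.windows.net 1433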
53,097,005
How to use refresh token by using Keycloak Admin REST API
I'm trying to use the Keycloak Admin REST API SDK. I want to use a refresh token to generate an access token. I know that the following code can generate an access token from a refresh token: val token = Keycloak.getInstance(serverUrl, realmName, userName, password, clientId, clientSecret) .tokenManager() .refreshToken() But this code only works on its own TokenManager. I want to use an arbitrary refresh token, as in the following code, as an admin user: // I hope to call token endpoint -> /realms/{realm}/protocol/openid-connect/token?grant_type=refresh_token... val accessToken = tokenManager.refresh(String refreshToken); Is there any SDK class that can refresh a token the way I want?
How to use refresh token by using Keycloak Admin REST API I'm trying to use the Keycloak Admin REST API SDK. I want to use a refresh token to generate an access token. I know that the following code can generate an access token from a refresh token: val token = Keycloak.getInstance(serverUrl, realmName, userName, password, clientId, clientSecret) .tokenManager() .refreshToken() But this code only works on its own TokenManager. I want to use an arbitrary refresh token, as in the following code, as an admin user: // I hope to call token endpoint -> /realms/{realm}/protocol/openid-connect/token?grant_type=refresh_token... val accessToken = tokenManager.refresh(String refreshToken); Is there any SDK class that can refresh a token the way I want?
java, jboss, redhat, keycloak, redhat-sso
4
2,361
0
https://stackoverflow.com/questions/53097005/how-to-use-refresh-token-by-using-keycloak-admin-rest-api
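One hedged alternative sketch, calling the token endpoint directly (shell; host, realm, and stored token are placeholders, and admin-cli is Keycloak's default CLI client): the Java admin SDK's TokenManager is bound to its own credentials, so exchanging an arbitrary refresh token is easiest over plain HTTP:

curl -s "https://KC_HOST/auth/realms/MYREALM/protocol/openid-connect/token" \
  -d grant_type=refresh_token -d client_id=admin-cli \
  -d refresh_token="$REFRESH_TOKEN"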
50,903,616
Apache 2.4 (RedHat) - .well-known - 403 forbidden
There is a rule in our apache config that forbids apache to serve directories starting with . (for security reasons) I've read many threads on how to override this in some cases, but I still get 403 I need to validate a globalsign certificate with the .well-known/pki-validation file but it cannot be accessed I tried this : <DirectoryMatch "/var/www/html/.../.well-known/pki-validation"> Require all granted </DirectoryMatch> I also tried this : RewriteRule ^.well-known/pki-validation$ well-known/pki-validation Nothing works so far. Here is the apache config: LoadModule authz_core_module modules/mod_authz_core.so LoadModule mime_module modules/mod_mime.so LoadModule headers_module modules/mod_headers.so LoadModule rewrite_module modules/mod_rewrite.so RewriteEngine on LoadModule version_module modules/mod_version.so # LoadModule ssl_module modules/mod_ssl.so <IfModule mod_ssl.c> LoadModule socache_shmcb_module modules/mod_socache_shmcb.so </IfModule> # For Redhat <IfModule !mpm_winnt_module> LoadModule systemd_module modules/mod_systemd.so LoadModule unixd_module modules/mod_unixd.so # If using php-pfm, we can use mod_mpm_event which is more efficient LoadModule mpm_prefork_module modules/mod_mpm_prefork.so #LoadModule mpm_event_module modules/mod_mpm_event.so # LoadModule php5_module modules/libphp5.so <IfModule !mod_php5.c> <IfModule prefork.c> LoadModule php7_module /opt/rh/httpd24/root/etc/httpd/modules/librh-php70-php7.so </IfModule> </IfModule> </IfModule> # Allow use of macros for consistency LoadModule macro_module /usr/local/lib64/httpd/modules/mod_macro.so #LoadModule macro_module /opt/rh/httpd24/root/etc/httpd/modules/mod_macro.so MacroIgnoreBadNesting #MacroIgnoreEmptyArgs # Compress content before delivering to client #LoadModule deflate_module modules/mod_deflate.so LoadModule log_config_module modules/mod_log_config.so #LoadModule mime_module modules/mod_mime.so #LoadModule env_module modules/mod_env.so #LoadModule setenvif_module modules/mod_setenvif.so #LoadModule unique_id_module modules/mod_unique_id.so # ================== Server info ================== Define logroot /var/log/httpd Define docroot /var/www/html # DocumentRoot: The directory containing documents => overwritten in vhosts # DocumentRoot: The directory containing documents => overwritten in vhosts DocumentRoot ${docroot} # The following directives define some format nicknames for use with # a CustomLog directive. # # These deviate from the Common Log Format definitions in that they use %O # (the actual bytes sent including headers) instead of %b (the size of the # requested file), because the latter makes it impossible to detect partial # requests. # # Note that the use of %{X-Forwarded-For}i instead of %h is not recommended. # Use mod_remoteip instead. <IfModule log_config_module> # %O removed because needs mod_log_io LogFormat "%v:%p %h %l %u %t \"%r\" %>s \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined LogFormat "%h %l %u %t \"%r\" %>s \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent </IfModule> # ErrorLog: The location of the log fils => overwritten in vhosts ErrorLog ${logroot}/error-default.log <IfModule log_config_module> CustomLog ${logroot}/access-default.log combined </IfModule> # LogLevel: Control the number of messages logged to the error log. # Possible values: debug, info, notice, warn, error, crit, alert, emerg. LogLevel warn # timeout (s) after which gracefully shutdown server will exit (0 means never exit) #GracefulShutDownTimeout 30 # Wait up to X seconds for slow clients requests TimeOut 30 # Security settings --------------------------------------------------------- ServerSignature Off ServerTokens Prod # User/Group: The name (or #number) of the user/group to run httpd as. <IfModule !mpm_winnt_module> User apache Group apache # Put Apache in jail ChrootDir "/data/apache" </IfModule> # This has to stay off to not give information about the system <IfModule status_module> ExtendedStatus Off </IfModule> <Directory /> # Disallow .htaccess files AllowOverride None # Allow only basic methods <LimitExcept GET POST OPTIONS HEAD> Require all denied </LimitExcept> </Directory> # Allow symlinks, but only if same owner Options +FollowSymLinks -SymLinksIfOwnerMatch # Prevent access to .htaccess, .htpasswd, .svn, ... <LocationMatch "/[.]"> Require all denied </LocationMatch> # MacOS system dir <LocationMatch "DS_Store"> Require all denied </LocationMatch> # In case no whitelist is applied <LocationMatch "[.](?i:bak|bk!|sql)$"> Require all denied </LocationMatch> #Require all denied #<Directory /var/www/html> # Require all granted #</Directory> # Remove header containing version number Header unset X-Powered-By # Performance/resource ------------------------------------------ # Set keep-alive timeout KeepAliveTimeout 5 # Unlimited numbers of keep-alive requests (only restricted by time-out) MaxKeepAliveRequests 100 # To recycle memory after X connections (to one process) #MaxRequestsPerChild 40000 # Processes & Threads manipulation --------------------------------------------- # Common directives for worker (multi-thread) & prefork (single thread) # default values are given in parenthesis: worker/prefork # 2.4: "event" is an enhanced version of "worker" # Max. number of processes (16/256) #ServerLimit 80 #StartServers 16 # Limit resources of external processes (CGI, etc.) #RLimitCPU seconds|max [seconds|max] #RLimitMEM bytes|max [bytes|max] #RLimitNPROC number|max [number|max] # No multi-threading - default for Redhat/CentOS # Can be changed in /etc/sysconfig/httpd <IfModule prefork.c> #MinSpareServers 16 #MaxSpareServers 32 </IfModule> # Multi-threading - httpd 2.2 <IfModule worker.c> # High number -> lower memory but (a bit) less responsive and more impact #ThreadsPerChild 25 # Max. number of concurrent requests * # default = ServerLimit (16) * ThreadsPerChild (25) #MaxClients 400 # MinSpareThreads (multiple of ThreadsPerChild) - def: min_servers * ThreadsPerChild #MinSpareThreads 150 # MaxSpareThreads (multiple of ThreadsPerChild) - def: ServerLimit * ThreadsPerChild #MaxSpareThreads 250 </IfModule> # Multi-threading - httpd 2.4 (more efficient that worker) <IfModule event.c> # High number -> lower memory but (a bit) less responsive and more impact #ThreadsPerChild 25 # Max. number of concurrent requests * KeepAliveTimeout # default = ServerLimit (16) * ThreadsPerChild (25) #MaxClients 400 # MinSpareThreads (multiple of ThreadsPerChild) - def: min_servers * ThreadsPerChild #MinSpareThreads 150 # MaxSpareThreads (multiple of ThreadsPerChild) - def: ServerLimit * ThreadsPerChild #MaxSpareThreads 250 </IfModule> # Multi-threading - Windows (only one child process) <IfModule mpm_winnt_module> # High number -> lower memory but (a bit) slower and more impact #ThreadsPerChild 64 </IfModule> # Include files ---------------------------------------------------------------- # SSL/TLS <IfModule mod_ssl.c> Include conf/ssl.conf </IfModule> # Generic macros reused somewhere else Include conf/macros.conf # Generic macros reused somewhere else Include conf/php.conf LoadModule dir_module modules/mod_dir.so LoadModule alias_module modules/mod_alias.so # Include chroot Include conf/chroot.conf # Include additional vhosts Include conf/vhosts.conf
Apache 2.4 (RedHat) - .well-known - 403 forbidden There is a rule in our apache config that forbids apache to serve directories starting with . (for security reasons) I've read many threads on how to override this in some cases, but I still get 403 I need to validate a globalsign certificate with the .well-known/pki-validation file but it cannot be accessed I tried this : <DirectoryMatch "/var/www/html/.../.well-known/pki-validation"> Require all granted </DirectoryMatch> I also tried this : RewriteRule ^.well-known/pki-validation$ well-known/pki-validation Nothing works so far. Here is the apache config: LoadModule authz_core_module modules/mod_authz_core.so LoadModule mime_module modules/mod_mime.so LoadModule headers_module modules/mod_headers.so LoadModule rewrite_module modules/mod_rewrite.so RewriteEngine on LoadModule version_module modules/mod_version.so # LoadModule ssl_module modules/mod_ssl.so <IfModule mod_ssl.c> LoadModule socache_shmcb_module modules/mod_socache_shmcb.so </IfModule> # For Redhat <IfModule !mpm_winnt_module> LoadModule systemd_module modules/mod_systemd.so LoadModule unixd_module modules/mod_unixd.so # If using php-pfm, we can use mod_mpm_event which is more efficient LoadModule mpm_prefork_module modules/mod_mpm_prefork.so #LoadModule mpm_event_module modules/mod_mpm_event.so # LoadModule php5_module modules/libphp5.so <IfModule !mod_php5.c> <IfModule prefork.c> LoadModule php7_module /opt/rh/httpd24/root/etc/httpd/modules/librh-php70-php7.so </IfModule> </IfModule> </IfModule> # Allow use of macros for consistency LoadModule macro_module /usr/local/lib64/httpd/modules/mod_macro.so #LoadModule macro_module /opt/rh/httpd24/root/etc/httpd/modules/mod_macro.so MacroIgnoreBadNesting #MacroIgnoreEmptyArgs # Compress content before delivering to client #LoadModule deflate_module modules/mod_deflate.so LoadModule log_config_module modules/mod_log_config.so #LoadModule mime_module modules/mod_mime.so #LoadModule env_module modules/mod_env.so #LoadModule setenvif_module modules/mod_setenvif.so #LoadModule unique_id_module modules/mod_unique_id.so # ================== Server info ================== Define logroot /var/log/httpd Define docroot /var/www/html # DocumentRoot: The directory containing documents => overwritten in vhosts # DocumentRoot: The directory containing documents => overwritten in vhosts DocumentRoot ${docroot} # The following directives define some format nicknames for use with # a CustomLog directive. # # These deviate from the Common Log Format definitions in that they use %O # (the actual bytes sent including headers) instead of %b (the size of the # requested file), because the latter makes it impossible to detect partial # requests. # # Note that the use of %{X-Forwarded-For}i instead of %h is not recommended. # Use mod_remoteip instead. <IfModule log_config_module> # %O removed because needs mod_log_io LogFormat "%v:%p %h %l %u %t \"%r\" %>s \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined LogFormat "%h %l %u %t \"%r\" %>s \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent </IfModule> # ErrorLog: The location of the log fils => overwritten in vhosts ErrorLog ${logroot}/error-default.log <IfModule log_config_module> CustomLog ${logroot}/access-default.log combined </IfModule> # LogLevel: Control the number of messages logged to the error log. # Possible values: debug, info, notice, warn, error, crit, alert, emerg. LogLevel warn # timeout (s) after which gracefully shutdown server will exit (0 means never exit) #GracefulShutDownTimeout 30 # Wait up to X seconds for slow clients requests TimeOut 30 # Security settings --------------------------------------------------------- ServerSignature Off ServerTokens Prod # User/Group: The name (or #number) of the user/group to run httpd as. <IfModule !mpm_winnt_module> User apache Group apache # Put Apache in jail ChrootDir "/data/apache" </IfModule> # This has to stay off to not give information about the system <IfModule status_module> ExtendedStatus Off </IfModule> <Directory /> # Disallow .htaccess files AllowOverride None # Allow only basic methods <LimitExcept GET POST OPTIONS HEAD> Require all denied </LimitExcept> </Directory> # Allow symlinks, but only if same owner Options +FollowSymLinks -SymLinksIfOwnerMatch # Prevent access to .htaccess, .htpasswd, .svn, ... <LocationMatch "/[.]"> Require all denied </LocationMatch> # MacOS system dir <LocationMatch "DS_Store"> Require all denied </LocationMatch> # In case no whitelist is applied <LocationMatch "[.](?i:bak|bk!|sql)$"> Require all denied </LocationMatch> #Require all denied #<Directory /var/www/html> # Require all granted #</Directory> # Remove header containing version number Header unset X-Powered-By # Performance/resource ------------------------------------------ # Set keep-alive timeout KeepAliveTimeout 5 # Unlimited numbers of keep-alive requests (only restricted by time-out) MaxKeepAliveRequests 100 # To recycle memory after X connections (to one process) #MaxRequestsPerChild 40000 # Processes & Threads manipulation --------------------------------------------- # Common directives for worker (multi-thread) & prefork (single thread) # default values are given in parenthesis: worker/prefork # 2.4: "event" is an enhanced version of "worker" # Max. number of processes (16/256) #ServerLimit 80 #StartServers 16 # Limit resources of external processes (CGI, etc.) #RLimitCPU seconds|max [seconds|max] #RLimitMEM bytes|max [bytes|max] #RLimitNPROC number|max [number|max] # No multi-threading - default for Redhat/CentOS # Can be changed in /etc/sysconfig/httpd <IfModule prefork.c> #MinSpareServers 16 #MaxSpareServers 32 </IfModule> # Multi-threading - httpd 2.2 <IfModule worker.c> # High number -> lower memory but (a bit) less responsive and more impact #ThreadsPerChild 25 # Max. number of concurrent requests * # default = ServerLimit (16) * ThreadsPerChild (25) #MaxClients 400 # MinSpareThreads (multiple of ThreadsPerChild) - def: min_servers * ThreadsPerChild #MinSpareThreads 150 # MaxSpareThreads (multiple of ThreadsPerChild) - def: ServerLimit * ThreadsPerChild #MaxSpareThreads 250 </IfModule> # Multi-threading - httpd 2.4 (more efficient that worker) <IfModule event.c> # High number -> lower memory but (a bit) less responsive and more impact #ThreadsPerChild 25 # Max. number of concurrent requests * KeepAliveTimeout # default = ServerLimit (16) * ThreadsPerChild (25) #MaxClients 400 # MinSpareThreads (multiple of ThreadsPerChild) - def: min_servers * ThreadsPerChild #MinSpareThreads 150 # MaxSpareThreads (multiple of ThreadsPerChild) - def: ServerLimit * ThreadsPerChild #MaxSpareThreads 250 </IfModule> # Multi-threading - Windows (only one child process) <IfModule mpm_winnt_module> # High number -> lower memory but (a bit) slower and more impact #ThreadsPerChild 64 </IfModule> # Include files ---------------------------------------------------------------- # SSL/TLS <IfModule mod_ssl.c> Include conf/ssl.conf </IfModule> # Generic macros reused somewhere else Include conf/macros.conf # Generic macros reused somewhere else Include conf/php.conf LoadModule dir_module modules/mod_dir.so LoadModule alias_module modules/mod_alias.so # Include chroot Include conf/chroot.conf # Include additional vhosts Include conf/vhosts.conf
apache, redhat
4
2,447
0
https://stackoverflow.com/questions/50903616/apache-2-4-redhat-well-known-403-forbidden
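A hedged sketch of one plausible fix (shell; the target include file is an assumption): Location sections merge in file order and later ones win, so a grant appended after the blanket <LocationMatch "/[.]"> deny can re-open just the validation path:

cat >> conf/vhosts.conf <<'EOF'
<LocationMatch "^/\.well-known/pki-validation/">
    Require all granted
</LocationMatch>
EOF
apachectl -t && apachectl graceful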
50,158,833
ModuleNotFoundError: No module named 'myproject.wsgi' - Gunicorn, Redhat 7, Django 2.0
I am trying to deploy a Django application on RHEL 7. I have set up a virtualenv with Python 3.6. Here is my executable gunicorn_start file. #!/bin/bash NAME="Garage" DJANGODIR=/opt/garage/garage USER=user1 GROUP=user1 WORKERS=3 BIND=unix:/opt/garage/run/gunicorn.sock DJANGO_SETTINGS_MODULE=garage.settings DJANGO_WSGI_MODULE=garage.wsgi LOGLEVEL=error cd $DJANGODIR source venv/bin/activate export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE export PYTHONPATH=$DJANGODIR:$PYTHONPATH exec venv/bin/gunicorn ${DJANGO_WSGI_MODULE}:application \ --name $NAME \ --workers $WORKERS \ --user=$USER \ --group=$GROUP \ --bind=$BIND \ --log-level=$LOGLEVEL \ --log-file=- Here is my gunicorn.service file [Unit] Description=gunicorn daemon After=network.target [Service] User=user1 Group=user1 WorkingDirectory=/opt/garage ExecStart=/opt/garage/gunicorn_start [Install] WantedBy=multi-user.target I start gunicorn with these commands sudo systemctl start gunicorn sudo systemctl enable gunicorn After starting Gunicorn I check the status with sudo systemctl status gunicorn I get the error ModuleNotFoundError: No module named 'garage.wsgi'
ModuleNotFoundError: No module named 'myproject.wsgi' - Gunicorn, Redhat 7, Django 2.0 I am trying to deploy a Django application on RHEL 7. I have set up a virtualenv with Python 3.6. Here is my executable gunicorn_start file. #!/bin/bash NAME="Garage" DJANGODIR=/opt/garage/garage USER=user1 GROUP=user1 WORKERS=3 BIND=unix:/opt/garage/run/gunicorn.sock DJANGO_SETTINGS_MODULE=garage.settings DJANGO_WSGI_MODULE=garage.wsgi LOGLEVEL=error cd $DJANGODIR source venv/bin/activate export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE export PYTHONPATH=$DJANGODIR:$PYTHONPATH exec venv/bin/gunicorn ${DJANGO_WSGI_MODULE}:application \ --name $NAME \ --workers $WORKERS \ --user=$USER \ --group=$GROUP \ --bind=$BIND \ --log-level=$LOGLEVEL \ --log-file=- Here is my gunicorn.service file [Unit] Description=gunicorn daemon After=network.target [Service] User=user1 Group=user1 WorkingDirectory=/opt/garage ExecStart=/opt/garage/gunicorn_start [Install] WantedBy=multi-user.target I start gunicorn with these commands sudo systemctl start gunicorn sudo systemctl enable gunicorn After starting Gunicorn I check the status with sudo systemctl status gunicorn I get the error ModuleNotFoundError: No module named 'garage.wsgi'
python, django, redhat, gunicorn
4
440
0
https://stackoverflow.com/questions/50158833/modulenotfounderror-no-module-named-myproject-wsgi-gunicorn-redhat-7-djan
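A hedged diagnostic sketch (shell, using the paths from the question): reproducing the import outside systemd separates a wrong module path from a wrong interpreter; for garage.wsgi to resolve, garage/wsgi.py must sit directly under $DJANGODIR:

cd /opt/garage/garage && venv/bin/python -c "import garage.wsgi"
ls garage/wsgi.py   # should exist at /opt/garage/garage/garage/wsgi.py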
45,864,844
Ovirt Hosted-Engine Deleted
Dear lovely Stack Overflow community, yesterday I accidentally deleted my ovirt-engine domain --> surprisingly the whole folder is gone, and it's not restorable. Of course my VMs are still running, but I don't have any backup of my self-hosted engine. :( I have decided to install a new engine this weekend. I would appreciate any recommendations. My plan: reinstall everything :( I'm using CentOS 7.3 and was using Ovirt 4.1. Is there a possibility to re-install a new Hosted-Engine without a fresh OS? Is there a possibility to clean an Ovirt node so I can use it in another engine environment? Does anybody have any experience with this: [URL] ovirt-hosted-engine-cleanup? ovirt-hosted-engine-deploy answerfile?
Ovirt Hosted-Engine Deleted Dear lovely Stack Overflow community, yesterday I accidentally deleted my ovirt-engine domain --> surprisingly the whole folder is gone, and it's not restorable. Of course my VMs are still running, but I don't have any backup of my self-hosted engine. :( I have decided to install a new engine this weekend. I would appreciate any recommendations. My plan: reinstall everything :( I'm using CentOS 7.3 and was using Ovirt 4.1. Is there a possibility to re-install a new Hosted-Engine without a fresh OS? Is there a possibility to clean an Ovirt node so I can use it in another engine environment? Does anybody have any experience with this: [URL] ovirt-hosted-engine-cleanup? ovirt-hosted-engine-deploy answerfile?
linux, centos, redhat, kvm, ovirt
4
888
0
https://stackoverflow.com/questions/45864844/ovirt-hosted-engine-deleted
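A hedged sketch of the cleanup route the question mentions (shell); this wipes hosted-engine state on the node, so it belongs only on a host you intend to redeploy:

sudo yum install ovirt-hosted-engine-setup   # provides the cleanup tool
sudo ovirt-hosted-engine-cleanup
sudo hosted-engine --deploy                  # fresh deployment; accepts an answer file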
44,363,145
Uninstall older unixODBC completely and install 2.3.2 unixODBC in redhat 6.3
I am trying to install msodbcsql v13 on redhat 6.3. It shows a dependency error: unixODBC (64-bit) >= 2.3.1 needs to be installed before installing msodbcsql. I tried running the command below: odbcinst -j It shows unixODBC 2.3.2 is installed. I also tried another way: yum provides /usr/lib64/odbcinst.so.2.0.0 The above command shows ODBC version 2.2 is installed. Also, if I run yum localinstall, it shows a unixODBC 32-bit version available on the machine. To remove unixODBC, I tried the commands below, but none of them worked: yum remove unixODBC yum erase unixODBC rpm -e unixODBC* rpm rpm -qa | grep unixODBC I want to remove all unixODBC versions available on the machine and reinstall the version we actually require.
Uninstall older unixODBC completely and install 2.3.2 unixODBC in redhat 6.3 I am trying to install msodbcsql v13 on redhat 6.3. It shows a dependency error: unixODBC (64-bit) >= 2.3.1 needs to be installed before installing msodbcsql. I tried running the command below: odbcinst -j It shows unixODBC 2.3.2 is installed. I also tried another way: yum provides /usr/lib64/odbcinst.so.2.0.0 The above command shows ODBC version 2.2 is installed. Also, if I run yum localinstall, it shows a unixODBC 32-bit version available on the machine. To remove unixODBC, I tried the commands below, but none of them worked: yum remove unixODBC yum erase unixODBC rpm -e unixODBC* rpm rpm -qa | grep unixODBC I want to remove all unixODBC versions available on the machine and reinstall the version we actually require.
redhat, unixodbc
4
14,803
2
https://stackoverflow.com/questions/44363145/uninstall-older-unixodbc-completely-and-install-2-3-2-unixodbc-in-redhat-6-3
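A sketch of why the removal attempts likely failed: rpm -e takes exact installed package names, and an unquoted unixODBC* is expanded by the shell against files in the current directory, not against installed packages:

rpm -qa 'unixODBC*'                            # the exact NVRs actually installed
sudo rpm -e --nodeps unixODBC-2.2.14-14.el6    # NVR illustrative; take it from the query output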
40,432,853
no module named urllib3 - trying to install pip
I'm trying to install python 2.7 and pip onto my system. I'm working on redhat linux. Previously I had been following this guide: [URL] which seemed to work for installing python, however while trying to install pip I get the following message: ERROR:root:code for hash md5 was not found. ImportError: No module named urllib3 this is after using curl "[URL] -o "get-pip.py" and then using python get-pip.py I've tried a couple installations of python 2.7.2 and 2.7.9, both have the same problem. I do not have root privileges on this machine. When I ran the configuration file while setting up my python I tried to use --with-ensurepip but obviously I'm doing something wrong.
no module named urllib3 - trying to install pip I'm trying to install python 2.7 and pip onto my system. I'm working on redhat linux. Previously I had been following this guide: [URL] which seemed to work for installing python, however while trying to install pip I get the following message: ERROR:root:code for hash md5 was not found. ImportError: No module named urllib3 this is after using curl "[URL] -o "get-pip.py" and then using python get-pip.py I've tried a couple installations of python 2.7.2 and 2.7.9, both have the same problem. I do not have root privileges on this machine. When I ran the configuration file while setting up my python I tried to use --with-ensurepip but obviously I'm doing something wrong.
python-2.7, pip, redhat
4
871
1
https://stackoverflow.com/questions/40432853/no-module-named-urllib3-trying-to-install-pip
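A hedged check sketch (shell): the md5 warning means this Python was configured without the OpenSSL headers visible, so hashlib/ssl (and therefore pip, which needs HTTPS) are crippled; if the line below fails, the interpreter needs rebuilding with openssl headers available at configure time, e.g. via a locally built OpenSSL since root is unavailable:

python -c "import ssl, hashlib; print(hashlib.md5(b'x').hexdigest())"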
37,225,107
How to add SSL to openshift
Hello, I finally got my SSL certificate from WoSign (namely: 1_root_bundle.crt and 2_mydomain.com.crt) but I don't know how to add them to OpenShift. I went to the applications and edited mydomain.com. There I upload the second crt (file 2_mydomain.com.crt) as the SSL Certificate, under Certificate Private Key I upload my private key (I tried .ppk and .key), and under Private Key Pass Phrase I type the password used for the private key. The problem is that after saving these I get an error at the Certificate Private Key: "Invalid private key or pass phrase: Could not parse PKey: no start line." I simply wasted 4 hours and don't know how to make it work, and I couldn't find anything on the internet about this error. I also tried to upload the CSR file in the certificate private key field without success, and I used the Apache cert files. Please help
How to add SSL to openshift Hello, I finally got my SSL certificate from WoSign (namely: 1_root_bundle.crt and 2_mydomain.com.crt) but I don't know how to add them to OpenShift. I went to the applications and edited mydomain.com. There I upload the second crt (file 2_mydomain.com.crt) as the SSL Certificate, under Certificate Private Key I upload my private key (I tried .ppk and .key), and under Private Key Pass Phrase I type the password used for the private key. The problem is that after saving these I get an error at the Certificate Private Key: "Invalid private key or pass phrase: Could not parse PKey: no start line." I simply wasted 4 hours and don't know how to make it work, and I couldn't find anything on the internet about this error. I also tried to upload the CSR file in the certificate private key field without success, and I used the Apache cert files. Please help
ssl, ssl-certificate, openshift, redhat, private-key
4
135
0
https://stackoverflow.com/questions/37225107/how-to-add-ssl-to-opensift
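A hedged sketch (shell; filenames are placeholders): "no start line" usually means the key is not PEM, and a .ppk is PuTTY's own format, so it needs converting and re-emitting before upload:

head -1 mydomain.key                                      # PEM starts with -----BEGIN ... PRIVATE KEY-----
puttygen mydomain.ppk -O private-openssh -o mydomain.pem  # if only the .ppk exists
openssl rsa -in mydomain.pem -out mydomain-rsa.pem        # normalize to a plain PEM RSA key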
37,215,811
fatal error: limits.h: No such file or directory Theano and anaconda
I have issues with Theano. Indeed, I have access to a High Performance Computing server with an old version of gcc (4.4.6), python (2.6) and kernel (2.6.32). The issue is that I need to run a Python 3 script that uses gensim and Theano, and I have no root access. To address the issue I installed Anaconda 3. Now my script works using conda-execute but I get the WARNING: g++ not detected, cannot perform optimizations. As a result I used conda install gcc and added conda's gcc path using Theano's cxx flag, but now I get the error fatal error: limits.h: No such file or directory.
fatal error: limits.h: No such file or directory Theano and anaconda I have issues with Theano. Indeed, I have access to a High Performance Computing server with an old version of gcc (4.4.6), python (2.6) and kernel (2.6.32). The issue is that I need to run a Python 3 script that uses gensim and Theano, and I have no root access. To address the issue I installed Anaconda 3. Now my script works using conda-execute but I get the WARNING: g++ not detected, cannot perform optimizations. As a result I used conda install gcc and added conda's gcc path using Theano's cxx flag, but now I get the error fatal error: limits.h: No such file or directory.
python, c++, gcc, redhat, anaconda
4
850
0
https://stackoverflow.com/questions/37215811/fatal-error-limits-h-no-such-file-or-directory-theano-and-anaconda
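A hedged diagnostic sketch (shell): conda's gcc carries its own private include tree, and limits.h goes missing when that tree is incomplete or not found at the relocated prefix; checking for the header and pointing the compiler at the system headers via CPATH is a common workaround, not a guaranteed fix:

find "$(dirname "$(readlink -f "$(which gcc)")")/.." -name limits.h | head
export CPATH=/usr/include${CPATH:+:$CPATH}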
36,189,727
RVM demands username in interactive prompt during Ruby install
I am attempting to use Ansible to install RVM to install an updated version of Ruby on a remote (RedHat 6.x) system. I have tried two separate Ansible-RVM playbooks ( rvm/rvm1-ansible and newmen/ansible-rvm ), but they both exhibit the same behavior: they both reach the step at which the playbook directs RVM to install Ruby, then stall until I cancel the process: TASK: [ansible-rvm | installing Ruby as root] ********************************* <HOST.DOMAIN.xyz> <HOST.DOMAIN.xyz> <HOST.DOMAIN.xyz> IdentityFile=/Users/USER/.ssh/private-key-file ConnectTimeout=10 PasswordAuthentication=no KbdInteractiveAuthentication=no ControlPath=/Users/USER/.ansible/cp/ansible-ssh-%h-%p-%r PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey ControlMaster=auto ControlPersist=60s <HOST.DOMAIN.xyz> <HOST.DOMAIN.xyz> IdentityFile=/Users/USER/.ssh/private-key-file ConnectTimeout=10 'sudo -k && sudo -H -S -p "[sudo via ansible, key=KEY] password: " -u root /bin/sh -c '"'"'echo SUDO-SUCCESS-KEY; LANG=en_US.UTF-8 LC_CTYPE=en_US.UTF-8 /usr/bin/python /home/USER/.ansible/tmp/ansible-tmp-dir/command; rm -rf /home/USER/.ansible/tmp/ansible-tmp-dir/ >/dev/null 2>&1'"'"'' PasswordAuthentication=no KbdInteractiveAuthentication=no ControlPath=/Users/USER/.ansible/cp/ansible-ssh-%h-%p-%r PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey ControlMaster=auto ControlPersist=60s ^CERROR: interrupted It appears that the cause is that RVM is demanding some kind of login information. When I SSH into the host in question to run RVM manually, I get a prompt Username: : $ rvm install ruby-2.2.2 Searching for binary rubies, this might take some time. No binary rubies available for: redhat/6/x86_64/ruby-2.2.2. Continuing with compilation. Please read 'rvm help mount' to get more information on binary rubies. Checking requirements for redhat. Enabling optional repository Username: ^C User interrupted process. The above occurs regardless of whether or not the rvm command is run under sudo . I have been unable to find any documentation as to what login/username RVM is requesting, nor any instructions as to flags or configuration I could apply in order to disable interactivity; in fact, I've yet to find any reference to this login prompt in conjunction with RVM at all. Has anyone encountered this problem before?
RVM demands username in interactive prompt during Ruby install I am attempting to use Ansible to install RVM to install an updated version of Ruby on a remote (RedHat 6.x) system. I have tried two separate Ansible-RVM playbooks ( rvm/rvm1-ansible and newmen/ansible-rvm ), but they both exhibit the same behavior: they both reach the step at which the playbook directs RVM to install Ruby, then stall until I cancel the process: TASK: [ansible-rvm | installing Ruby as root] ********************************* <HOST.DOMAIN.xyz> <HOST.DOMAIN.xyz> <HOST.DOMAIN.xyz> IdentityFile=/Users/USER/.ssh/private-key-file ConnectTimeout=10 PasswordAuthentication=no KbdInteractiveAuthentication=no ControlPath=/Users/USER/.ansible/cp/ansible-ssh-%h-%p-%r PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey ControlMaster=auto ControlPersist=60s <HOST.DOMAIN.xyz> <HOST.DOMAIN.xyz> IdentityFile=/Users/USER/.ssh/private-key-file ConnectTimeout=10 'sudo -k && sudo -H -S -p "[sudo via ansible, key=KEY] password: " -u root /bin/sh -c '"'"'echo SUDO-SUCCESS-KEY; LANG=en_US.UTF-8 LC_CTYPE=en_US.UTF-8 /usr/bin/python /home/USER/.ansible/tmp/ansible-tmp-dir/command; rm -rf /home/USER/.ansible/tmp/ansible-tmp-dir/ >/dev/null 2>&1'"'"'' PasswordAuthentication=no KbdInteractiveAuthentication=no ControlPath=/Users/USER/.ansible/cp/ansible-ssh-%h-%p-%r PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey ControlMaster=auto ControlPersist=60s ^CERROR: interrupted It appears that the cause is that RVM is demanding some kind of login information. When I SSH into the host in question to run RVM manually, I get a prompt Username: : $ rvm install ruby-2.2.2 Searching for binary rubies, this might take some time. No binary rubies available for: redhat/6/x86_64/ruby-2.2.2. Continuing with compilation. Please read 'rvm help mount' to get more information on binary rubies. Checking requirements for redhat. Enabling optional repository Username: ^C User interrupted process. The above occurs regardless of whether or not the rvm command is run under sudo . I have been unable to find any documentation as to what login/username RVM is requesting, nor any instructions as to flags or configuration I could apply in order to disable interactivity; in fact, I've yet to find any reference to this login prompt in conjunction with RVM at all. Has anyone encountered this problem before?
ruby, rvm, ansible, redhat
4
329
1
https://stackoverflow.com/questions/36189727/rvm-demands-username-in-interactive-prompt-during-ruby-install
33,950,825
webrtc2sip compilation error
When compiling webrtc2sip on Red Hat Enterprise Linux Server release 6.4 (Santiago), the following errors occur:

    /usr/local/lib/libtinyNET.so: undefined reference to `EC_KEY_free'
    /usr/local/lib/libtinyNET.so: undefined reference to `SSL_export_keying_material'
    /usr/local/lib/libtinyNET.so: undefined reference to `SSL_CTX_set_tlsext_use_srtp'
    /usr/local/lib/libtinyNET.so: undefined reference to `SSL_get_selected_srtp_profile'
    /usr/local/lib/libtinyNET.so: undefined reference to `EC_KEY_new_by_curve_name'
    collect2: ld returned 1 exit status
    make[1]: *** [webrtc2sip] Error 1
    make[1]: Leaving directory `/root/webrtc2sip'
    make: *** [all] Error 2

Note: Doubango was configured with the following options and compiled without any issues:

    ./configure --with-ssl --with-srtp --with-vpx --with-speexdsp --with-ffmpeg --with-opus

I saw a post about the same error ( [URL] ) which suggested it is caused by having two versions of OpenSSL installed, but I cannot work out exactly how that applies to my case. I installed OpenSSL with these steps:

    wget [URL]
    tar -xvzf openssl-1.0.1c.tar.gz
    cd openssl-1.0.1c
    ./config shared --prefix=/usr/local --openssldir=/usr/local/openssl && make && make install

Checking the OpenSSL version gives the following information:

    [root@cluster ~]# openssl version
    OpenSSL 1.0.1c 10 May 2012
    [root@cluster ~]# rpm -qa | grep openssl
    openssl098e-0.9.8e-17.el6_2.2.x86_64
    openssl-1.0.0-27.el6.x86_64
    openssl-devel-1.0.0-27.el6.x86_64

Can anybody guide me on how to solve this compilation issue?
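A hedged sketch of one way to untangle the two-OpenSSL situation: the missing symbols (SSL_CTX_set_tlsext_use_srtp, SSL_export_keying_material) were introduced in OpenSSL 1.0.1, so the link step is almost certainly resolving against the system's 1.0.0 devel package rather than the 1.0.1c build under /usr/local. Forcing both projects to look there first may fix it; paths come from the question, and the ld.so.conf filename is an arbitrary choice:

    # Make the 1.0.1c build in /usr/local win at compile, link, and run time:
    export CPPFLAGS="-I/usr/local/include"
    export LDFLAGS="-L/usr/local/lib -Wl,-rpath,/usr/local/lib"
    echo /usr/local/lib > /etc/ld.so.conf.d/openssl-local.conf && ldconfig

    # Rebuild Doubango, then webrtc2sip, in this same environment, and verify
    # which libssl actually got linked:
    ldd /usr/local/lib/libtinyNET.so | grep ssl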
openssl, webrtc, redhat, rhel
4
525
0
https://stackoverflow.com/questions/33950825/webrtc2sip-compilation-error
25,518,484
Nginx + php fastcgi unable to open file, permission denied
I am having some permission issues with Nginx and PHP FastCGI when trying to access a PHP file. I am using PHP 5.5.15 and Nginx 1.6.0 on Red Hat 7. My PHP file is very simple for now:

    <?php
    echo "\nscript owner : ".get_current_user()."\n";
    $myFile = '/usr/share/nginx/html/test.log';
    $fh = fopen($myFile, 'a') or die("can't open file");
    ?>

get_current_user() returns "myuser". The error I am getting is the following:

    2014/08/26 22:47:14 [error] 6424#0: *16 FastCGI sent in stderr: "PHP message: PHP Warning:
    fopen(/usr/share/nginx/html/test.log): failed to open stream: Permission denied in
    /usr/share/nginx/html/test.php on line 19" while reading response header from upstream,
    client: XXXXXX, server: XXXXXXX, request: "GET /test.php HTTP/1.1",
    upstream: "fastcgi://127.0.0.1:9000", host: "XXXXXXX"

Here are the permissions for the directory /usr/share/nginx (all of the parent directories have x permissions):

    drwxrwsrwx. 4 myuser myuser 4096 Aug 26 22:32 html

Running the following commands:

    $ ps aux | grep "nginx: worker process"
    myuser  6423  0.0  0.3 111228  3880 ?      S   22:36  0:00 nginx: worker process
    myuser  6424  0.0  0.5 111228  5428 ?      S   22:36  0:00 nginx: worker process
    myuser  6480  0.0  0.0 112640   980 pts/0  R+  22:41  0:00 grep --color=auto nginx: worker process

    $ ps aux | grep "php"
    myuser  5930  0.0  0.1 128616  1860 pts/0  T   21:09  0:00 vi /etc/php-fpm.conf
    myuser  5931  0.0  0.2 128628  2052 pts/0  T   21:09  0:00 vi /etc/php.ini
    myuser  5933  0.0  0.1 128616  1864 pts/0  T   21:13  0:00 vi /etc/php-fpm.conf
    myuser  5934  0.0  0.1 128616  1860 pts/0  T   21:14  0:00 vi /etc/php-fpm.d/www.conf
    myuser  5935  0.0  0.1 128616  1864 pts/0  T   21:15  0:00 vi /etc/php-fpm.conf
    root    6313  0.0  2.4 544732 25208 ?      Ss  22:25  0:00 php-fpm: master process (/etc/php-fpm.conf)
    myuser  6314  0.0  0.8 544732  8356 ?      S   22:25  0:00 php-fpm: pool www
    myuser  6315  0.0  0.8 544732  8328 ?      S   22:25  0:00 php-fpm: pool www
    myuser  6316  0.0  0.9 545076  9892 ?      S   22:25  0:00 php-fpm: pool www
    myuser  6317  0.0  0.9 544860  9452 ?      S   22:25  0:00 php-fpm: pool www
    myuser  6318  0.0  0.9 544860  9212 ?      S   22:25  0:00 php-fpm: pool www
    myuser  6483  0.0  0.0 112640   976 pts/0  R+  22:47  0:00 grep --color=auto php

My server block looks like the following:

    server {
        listen 80;
        root /usr/share/nginx/html;

        # pass the PHP scripts to the FastCGI server listening on 127.0.0.1:9000
        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }
    }

In nginx.conf I am using the same user: "user ec2-user;". I have also changed /etc/php-fpm.d/www.conf to use the same user and group:

    user = myuser
    group = myuser

So both Nginx and PHP-FPM run as the user "myuser". All of the directories up to where the log file and the PHP file are located (/usr/share/nginx/html) have x access, and that user has 777 access to the html directory. Not sure what I am missing; I have been searching online for two days now with no luck.
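A hedged diagnostic sketch: on RHEL 7, a Permission denied with Unix permissions this wide open is very often SELinux, since /usr/share/nginx/html is normally labeled read-only for web-server processes. Assuming that is the cause here, this is one way to confirm and fix it (semanage lives in the policycoreutils-python package):

    # Look for recent AVC denials mentioning test.log:
    sudo ausearch -m avc -ts recent
    getenforce                      # "Enforcing" means SELinux is active

    # Temporary test only: if fopen() succeeds with enforcement off,
    # SELinux was the blocker.
    sudo setenforce 0

    # Permanent fix: relabel the directory so httpd-class processes
    # (nginx and php-fpm) may write to it, then re-enable enforcement:
    sudo semanage fcontext -a -t httpd_sys_rw_content_t '/usr/share/nginx/html(/.*)?'
    sudo restorecon -Rv /usr/share/nginx/html
    sudo setenforce 1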
php, nginx, permissions, redhat, fastcgi
4
4,714
1
https://stackoverflow.com/questions/25518484/nginx-php-fastcgi-unable-to-open-file-permission-denied
25,433,540
New ActiveMQ Installation Runs Out Of Memory After 30 Minutes
We have a fresh installation of ActiveMQ 5.9.1 running on Red Hat Linux. With no outside connections, no queues, and only the default topic on the system, the process runs out of memory (1GB allocated at startup with "-Xms1G -Xmx1G") after about 30 minutes, even with absolutely no activity. I initially ran into this problem with version 5.10.0 and downgraded to 5.9.1 to see if maybe it was something introduced in the new build. Literally, all I did was:

    tar xzf apache-activemq-5.9.1-bin.tar.gz
    mv apache-activemq-5.9.1-bin activemq
    cd activemq
    bin/activemq start

Using "top", I noted that it started with about 150MB of real memory used and continued to creep upward. Once top showed it at 1.1GB, there were several heapdump, core, javacore and trace files in the base directory. The javacore files all state:

    Dump Event "systhrow" (00040000) Detail "java/lang/OutOfMemoryError" "Java heap space" received

Has anyone else encountered this? How did you fix it?

UPDATE 2014-08-22

"java -version" yields:

    java version "1.7.0"
    Java(TM) SE Runtime Environment (build pxa6470sr6fp1-20140108_01(SR6 FP1))
    IBM J9 VM (build 2.6, JRE 1.7.0 Linux amd64-64 Compressed References 20140106_181350 (JIT enabled, AOT enabled)
    J9VM - R26_Java726_SR6_20140106_1601_B181350
    JIT  - r11.b05_20131003_47443.02
    GC   - R26_Java726_SR6_20140106_1601_B181350_CMPRSS
    J9CL - 20140106_181350)
    JCL - 20140103_01 based on Oracle 7u51-b11

I'm starting to think the IBM JVM may be the problem.

EDIT 2014-08-29

Replaced the IBM JVM with the standard Oracle JVM and updated ActiveMQ to 5.10.0, but I still have the problem. There are no connections to the server and one queue with no messages on it. Using the Eclipse Memory Analyzer, the Leak Suspects report shows 197 instances of org.apache.activemq.broker.jmx.ManagedTransportConnection consuming approximately 500MB of memory out of a 529MB total. Not sure what this means or how to fix it.
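A hedged diagnostic sketch, not a confirmed fix: 197 ManagedTransportConnection instances on a broker with supposedly no clients suggests something on the network is repeatedly opening sockets to the broker's ports (load-balancer health checks, port scanners, and monitoring probes are common culprits), with each connection retained by the broker. Two things worth checking; 61616 is the stock OpenWire port, and the config line is illustrative only:

    # Watch whether the connection count climbs even though "nothing" connects:
    watch -n 10 'netstat -tn | grep -c ":61616"'

    # If it does, bounding connections in conf/activemq.xml limits the damage
    # while the real source is tracked down, e.g. on the transportConnector URI:
    #   tcp://0.0.0.0:61616?maximumConnections=100&wireFormat.maxInactivityDuration=30000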
jms, activemq-classic, redhat
4
1,096
0
https://stackoverflow.com/questions/25433540/new-activemq-installation-runs-out-of-memory-after-30-minutes
24,782,237
RPM Spec file - how to get rpm package location in %pre script
I have been working with the RPM package manager for about a month now. Currently I want to use rpm -U to upgrade content already laid down by a previous RPM installation, but I need to know the location of the .rpm package file on the file system. The only way I can think of is searching the whole file system for the rpm's name in the %pre script, but I would really like to avoid that option. Is there any way to get the path of the rpm package (the package can be anywhere on the system) as a variable inside the spec file (%pre and %post scripts)? Hope I explained my issue clearly enough. Any help or proposal is welcome.
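For reference, a hedged sketch of what scriptlets do receive: rpm does not pass the path of the .rpm file being installed into %pre/%post, but it does pass the install count in $1 (1 for a fresh install, 2 or more for an upgrade), and during %pre of an upgrade the previous version's files are still on disk and queryable from the rpm database, which is often enough to migrate existing content without knowing where the package file lives. The package name and output path below are placeholders:

    %pre
    if [ "$1" -ge 2 ]; then
        # Upgrade path: the old package's files are still installed and can be
        # listed from the rpm database for a later migration step.
        rpm -ql mypackage > /var/tmp/mypackage-files-before-upgrade.txt || :
    fi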
linux, redhat, rpm, rpmbuild, rpm-spec
4
951
1
https://stackoverflow.com/questions/24782237/rpm-spec-file-how-to-get-rpm-package-location-in-pre-script
18,731,004
Issue installing Xdebug: undefined symbol: sapi_globals
I'm having an issue installing Xdebug on a Red Hat server with PHP 5.4.14 installed. This server does not have ports open to access the internet, so I built Xdebug from source, following the guide provided on the install page and then the wizard. After adding zend_extension="my/path/to/xdebug.so" and restarting Apache, I am greeted with the following message in the Apache error logs:

    /modules/php/lib/php/extensions/no-debug-zts-20100525/xdebug.so:
    /apps/Apache/httpd-2.4.4/modules/php/lib/php/extensions/no-debug-zts-20100525/xdebug.so:
    undefined symbol: sapi_globals

After googling this there isn't a clear way to fix it, so I came here :). Any help would be greatly appreciated.
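A hedged sketch of the likely fix: "undefined symbol: sapi_globals" in an extension dir named no-debug-zts-... usually means xdebug.so was built against a non-ZTS (non-thread-safe) PHP while Apache runs a ZTS one, so the module references a global symbol that only exists in non-ZTS builds. Rebuilding with the phpize/php-config belonging to the PHP Apache actually loads may resolve it; the /apps/php/bin path and the source directory name are assumptions for illustration:

    cd xdebug-2.x.y                      # unpacked Xdebug source
    make distclean || true               # drop artifacts from the wrong build
    /apps/php/bin/phpize                 # phpize of the ZTS PHP Apache uses
    ./configure --with-php-config=/apps/php/bin/php-config
    make && sudo make install

    # Sanity check: the module should now load without symbol errors
    /apps/php/bin/php -d zend_extension=$(pwd)/modules/xdebug.so -v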
php, apache, xdebug, redhat
4
726
0
https://stackoverflow.com/questions/18731004/issue-installing-xdeug-undefined-symbol-sapi-globals