question_id
int64 82.3k
79.7M
| title_clean
stringlengths 15
158
| body_clean
stringlengths 62
28.5k
| full_text
stringlengths 95
28.5k
| tags
stringlengths 4
80
| score
int64 0
1.15k
| view_count
int64 22
1.62M
| answer_count
int64 0
30
| link
stringlengths 58
125
|
|---|---|---|---|---|---|---|---|---|
51,373,369
|
How to add custom JNDI resources into wildfly-10 similar like <custom-resource> of glassfish server?
|
This is the code which is use to add custom resources in glassfish server but my requirement is to achieve this in wildfly-10 server but i don't know how to do it,so please help me with this <custom-resource factory-class="org.glassfish.resources.custom.factory.PropertiesFactory" res- type="java.util.Properties" jndi-name="docdokuplm.config"> <property name="codebase" value="[URL] <property name="vaultPath" value="/var/lib/docdoku"></property> </custom-resource> <custom-resource factory-class="org.glassfish.resources.custom.factory.PropertiesFactory" res- type="java.util.Properties" jndi-name="auth.config"> <property name="basic.header.enabled" value="true"></property> <property name="session.enabled" value="true"></property> <property name="jwt.key" value="singh20111995"></property> <property name="jwt.enabled" value="true"></property> </custom-resource>
|
How to add custom JNDI resources into wildfly-10 similar like <custom-resource> of glassfish server? This is the code which is use to add custom resources in glassfish server but my requirement is to achieve this in wildfly-10 server but i don't know how to do it,so please help me with this <custom-resource factory-class="org.glassfish.resources.custom.factory.PropertiesFactory" res- type="java.util.Properties" jndi-name="docdokuplm.config"> <property name="codebase" value="[URL] <property name="vaultPath" value="/var/lib/docdoku"></property> </custom-resource> <custom-resource factory-class="org.glassfish.resources.custom.factory.PropertiesFactory" res- type="java.util.Properties" jndi-name="auth.config"> <property name="basic.header.enabled" value="true"></property> <property name="session.enabled" value="true"></property> <property name="jwt.key" value="singh20111995"></property> <property name="jwt.enabled" value="true"></property> </custom-resource>
|
server, jboss, glassfish, redhat, wildfly-10
| 2
| 731
| 1
|
https://stackoverflow.com/questions/51373369/how-to-add-custom-jndi-resources-into-wildfly-10-similar-like-custom-resource
|
51,368,484
|
Kubelet service not starting up - KUBELET_EXTRA_ARGS (code=exited, status=255)
|
I am trying to start up the kubelet service on a worker node (the 3rd worker node)... at the moment, I can't quite tell what the error is here.. I do however, see F0716 16:42:20.047413 556 server.go:155] unknown command: $KUBELET_EXTRA_ARGS in the output given by sudo systemctl status kubelet -l : [svc.jenkins@node6 ~]$ sudo systemctl status kubelet -l ● kubelet.service - kubelet: The Kubernetes Node Agent Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled) Drop-In: /etc/systemd/system/kubelet.service.d └─10-kubeadm.conf Active: activating (auto-restart) (Result: exit-code) since Mon 2018-07-16 16:42:20 CDT; 4s ago Docs: [URL] Process: 556 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255) Main PID: 556 (code=exited, status=255) Jul 16 16:42:20 node6 kubelet[556]: --tls-cert-file string File containing x509 Certificate used for serving HTTPS (with intermediate certs, if any, concatenated after server cert). If --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory passed to --cert-dir. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See [URL] for more information.) Jul 16 16:42:20 node6 kubelet[556]: --tls-cipher-suites strings Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used. Possible values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_RC4_128_SHA (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See [URL] for more information.) Jul 16 16:42:20 node6 kubelet[556]: --tls-min-version string Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12 (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See [URL] for more information.) Jul 16 16:42:20 node6 kubelet[556]: --tls-private-key-file string File containing x509 private key matching --tls-cert-file. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See [URL] for more information.) Jul 16 16:42:20 node6 kubelet[556]: -v, --v Level log level for V logs Jul 16 16:42:20 node6 kubelet[556]: --version version[=true] Print version information and quit Jul 16 16:42:20 node6 kubelet[556]: --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging Jul 16 16:42:20 node6 kubelet[556]: --volume-plugin-dir string The full path of the directory in which to search for additional third party volume plugins (default "/usr/libexec/kubernetes/kubelet-plugins/volume/exec/") Jul 16 16:42:20 node6 kubelet[556]: --volume-stats-agg-period duration Specifies interval for kubelet to calculate and cache the volume disk usage for all pods and volumes. To disable volume calculations, set to 0. (default 1m0s) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See [URL] for more information.) Jul 16 16:42:20 node6 kubelet[556]: F0716 16:42:20.047413 556 server.go:155] unknown command: $KUBELET_EXTRA_ARGS Here is the configuration for my dropin loacated at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (it is the same on the other nodes that are in a working state): [Service] Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf" Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true" Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin" Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local" Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt" Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0" Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs" Environment="KUBELET_CERTIFICATE_ARGS=--rotate-certificates=true --cert-dir=/data01/kubelet/pki" Environment="KUBELET_EXTRA_ARGS=$KUBELET_EXTRA_ARGS --root-dir=/data01/kubelet" ExecStart= ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS Just need help diagnosing what the issue preventing it from starting so that it can be resolved.. Thank in advanced :) EDIT: [svc.jenkins@node6 ~]$ kubelet --version Kubernetes v1.10.4
|
Kubelet service not starting up - KUBELET_EXTRA_ARGS (code=exited, status=255) I am trying to start up the kubelet service on a worker node (the 3rd worker node)... at the moment, I can't quite tell what the error is here.. I do however, see F0716 16:42:20.047413 556 server.go:155] unknown command: $KUBELET_EXTRA_ARGS in the output given by sudo systemctl status kubelet -l : [svc.jenkins@node6 ~]$ sudo systemctl status kubelet -l ● kubelet.service - kubelet: The Kubernetes Node Agent Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled) Drop-In: /etc/systemd/system/kubelet.service.d └─10-kubeadm.conf Active: activating (auto-restart) (Result: exit-code) since Mon 2018-07-16 16:42:20 CDT; 4s ago Docs: [URL] Process: 556 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255) Main PID: 556 (code=exited, status=255) Jul 16 16:42:20 node6 kubelet[556]: --tls-cert-file string File containing x509 Certificate used for serving HTTPS (with intermediate certs, if any, concatenated after server cert). If --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory passed to --cert-dir. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See [URL] for more information.) Jul 16 16:42:20 node6 kubelet[556]: --tls-cipher-suites strings Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used. Possible values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_RC4_128_SHA (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See [URL] for more information.) Jul 16 16:42:20 node6 kubelet[556]: --tls-min-version string Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12 (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See [URL] for more information.) Jul 16 16:42:20 node6 kubelet[556]: --tls-private-key-file string File containing x509 private key matching --tls-cert-file. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See [URL] for more information.) Jul 16 16:42:20 node6 kubelet[556]: -v, --v Level log level for V logs Jul 16 16:42:20 node6 kubelet[556]: --version version[=true] Print version information and quit Jul 16 16:42:20 node6 kubelet[556]: --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging Jul 16 16:42:20 node6 kubelet[556]: --volume-plugin-dir string The full path of the directory in which to search for additional third party volume plugins (default "/usr/libexec/kubernetes/kubelet-plugins/volume/exec/") Jul 16 16:42:20 node6 kubelet[556]: --volume-stats-agg-period duration Specifies interval for kubelet to calculate and cache the volume disk usage for all pods and volumes. To disable volume calculations, set to 0. (default 1m0s) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See [URL] for more information.) Jul 16 16:42:20 node6 kubelet[556]: F0716 16:42:20.047413 556 server.go:155] unknown command: $KUBELET_EXTRA_ARGS Here is the configuration for my dropin loacated at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (it is the same on the other nodes that are in a working state): [Service] Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf" Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true" Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin" Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local" Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt" Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0" Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs" Environment="KUBELET_CERTIFICATE_ARGS=--rotate-certificates=true --cert-dir=/data01/kubelet/pki" Environment="KUBELET_EXTRA_ARGS=$KUBELET_EXTRA_ARGS --root-dir=/data01/kubelet" ExecStart= ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS Just need help diagnosing what the issue preventing it from starting so that it can be resolved.. Thank in advanced :) EDIT: [svc.jenkins@node6 ~]$ kubelet --version Kubernetes v1.10.4
|
kubernetes, redhat, kubelet
| 2
| 20,004
| 1
|
https://stackoverflow.com/questions/51368484/kubelet-service-not-starting-up-kubelet-extra-args-code-exited-status-255
|
51,084,379
|
Allow operations for daemon with custom context on different files with different context in SELinux
|
Special type of allow rule I have created running daemon from executable file with custom context, something like: system_u:system_r:daemon_name_t It will traverse through entire directory recursively and read (not open) these unknown files (this files can have any context, not only from its domain), so i would like to write type enforcement rule with scontext daemon_name_t and ANY target context. While writing type enforcement rule I would like it to stay as restrictive as possible. I don't want to give it context unconfined_t . For example if I needed to allow operations getattr and read I would like get this effect: allow daemon_name_t { * } :file { getattr read }; I can't find any possible way to do this with SELinux . Is this even possible? Any help is appreciated. EDIT: i have found out that there is a way to enforce allow rule on file_type like this: allow daemon_name_t file_type:{type1 type2} {getattr read}; It is sufficient for me for now, but it would be good to know if there is better solution.
|
Allow operations for daemon with custom context on different files with different context in SELinux Special type of allow rule I have created running daemon from executable file with custom context, something like: system_u:system_r:daemon_name_t It will traverse through entire directory recursively and read (not open) these unknown files (this files can have any context, not only from its domain), so i would like to write type enforcement rule with scontext daemon_name_t and ANY target context. While writing type enforcement rule I would like it to stay as restrictive as possible. I don't want to give it context unconfined_t . For example if I needed to allow operations getattr and read I would like get this effect: allow daemon_name_t { * } :file { getattr read }; I can't find any possible way to do this with SELinux . Is this even possible? Any help is appreciated. EDIT: i have found out that there is a way to enforce allow rule on file_type like this: allow daemon_name_t file_type:{type1 type2} {getattr read}; It is sufficient for me for now, but it would be good to know if there is better solution.
|
linux, redhat, fedora, selinux
| 2
| 217
| 1
|
https://stackoverflow.com/questions/51084379/allow-operations-for-daemon-with-custom-context-on-different-files-with-differen
|
49,565,158
|
RedHat 7 Error => Requires : libcrypto.so.10
|
I have a problem when I want to install php 5.6. I removed all php stuff with "yum remove php*". I use Linux RedHat 7 with Repo Remi enabled. I am using OPENSSL_1.0.2 and a 64 bit OS. [root@localhost ~]# yum install php56 Modules complémentaires chargés : langpacks, product-id, subscription-manager This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register. Résolution des dépendances --> Lancement de la transaction de test ---> Le paquet php56.x86_64 0:2.3-1.el7.remi sera installé --> Traitement de la dépendance : php56-runtime(x86-64) = 2.3-1.el7.remi pour le paquet : php56-2.3-1.el7.remi.x86_64 --> Traitement de la dépendance : php56-php-pear >= 1:1.10.5 pour le paquet : php56-2.3-1.el7.remi.x86_64 --> Traitement de la dépendance : php56-php-common(x86-64) >= 5.6.31 pour le paquet : php56-2.3-1.el7.remi.x86_64 --> Traitement de la dépendance : php56-runtime pour le paquet : php56-2.3-1.el7.remi.x86_64 --> Traitement de la dépendance : php56-php-cli(x86-64) pour le paquet : php56-2.3-1.el7.remi.x86_64 --> Lancement de la transaction de test ---> Le paquet php56-php-cli.x86_64 0:5.6.35-1.el7.remi sera installé --> Traitement de la dépendance : libcrypto.so.10(OPENSSL_1.0.2)(64bit) pour le paquet : php56-php-cli-5.6.35-1.el7.remi.x86_64 ---> Le paquet php56-php-common.x86_64 0:5.6.35-1.el7.remi sera installé --> Traitement de la dépendance : php56-php-pecl-zip(x86-64) pour le paquet : php56-php-common-5.6.35-1.el7.remi.x86_64 --> Traitement de la dépendance : php56-php-pecl-jsonc(x86-64) pour le paquet : php56-php-common-5.6.35-1.el7.remi.x86_64 ---> Le paquet php56-php-pear.noarch 1:1.10.5-5.el7.remi sera installé --> Traitement de la dépendance : php56-php-xml pour le paquet : 1:php56-php-pear-1.10.5-5.el7.remi.noarch --> Traitement de la dépendance : php56-php-posix pour le paquet : 1:php56-php-pear-1.10.5-5.el7.remi.noarch ---> Le paquet php56-runtime.x86_64 0:2.3-1.el7.remi sera installé --> Traitement de la dépendance : environment-modules pour le paquet : php56-runtime-2.3-1.el7.remi.x86_64 --> Traitement de la dépendance : /usr/sbin/semanage pour le paquet : php56-runtime-2.3-1.el7.remi.x86_64 --> Lancement de la transaction de test ---> Le paquet environment-modules.x86_64 0:3.2.10-0.el7.remi sera installé --> Traitement de la dépendance : libtcl8.5.so()(64bit) pour le paquet : environment-modules-3.2.10-0.el7.remi.x86_64 ---> Le paquet php56-php-cli.x86_64 0:5.6.35-1.el7.remi sera installé --> Traitement de la dépendance : libcrypto.so.10(OPENSSL_1.0.2)(64bit) pour le paquet : php56-php-cli-5.6.35-1.el7.remi.x86_64 ---> Le paquet php56-php-pecl-jsonc.x86_64 0:1.3.10-1.el7.remi sera installé ---> Le paquet php56-php-pecl-zip.x86_64 0:1.15.2-1.el7.remi sera installé ---> Le paquet php56-php-process.x86_64 0:5.6.35-1.el7.remi sera installé ---> Le paquet php56-php-xml.x86_64 0:5.6.35-1.el7.remi sera installé ---> Le paquet php56-runtime.x86_64 0:2.3-1.el7.remi sera installé --> Traitement de la dépendance : /usr/sbin/semanage pour le paquet : php56-runtime-2.3-1.el7.remi.x86_64 --> Traitement de la dépendance : /usr/sbin/semanage pour le paquet : php56-runtime-2.3-1.el7.remi.x86_64 --> Résolution des dépendances terminée Erreur : Paquet : php56-runtime-2.3-1.el7.remi.x86_64 (remi) Requiert : /usr/sbin/semanage Erreur : Paquet : php56-php-cli-5.6.35-1.el7.remi.x86_64 (remi) Requiert : libcrypto.so.10(OPENSSL_1.0.2)(64bit) Erreur : Paquet : environment-modules-3.2.10-0.el7.remi.x86_64 (remi) Requiert : libtcl8.5.so()(64bit) Vous pouvez essayer d'utiliser --skip-broken pour contourner le problème Vous pouvez essayer d'exécuter : rpm -Va --nofiles --nodigest
|
RedHat 7 Error => Requires : libcrypto.so.10 I have a problem when I want to install php 5.6. I removed all php stuff with "yum remove php*". I use Linux RedHat 7 with Repo Remi enabled. I am using OPENSSL_1.0.2 and a 64 bit OS. [root@localhost ~]# yum install php56 Modules complémentaires chargés : langpacks, product-id, subscription-manager This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register. Résolution des dépendances --> Lancement de la transaction de test ---> Le paquet php56.x86_64 0:2.3-1.el7.remi sera installé --> Traitement de la dépendance : php56-runtime(x86-64) = 2.3-1.el7.remi pour le paquet : php56-2.3-1.el7.remi.x86_64 --> Traitement de la dépendance : php56-php-pear >= 1:1.10.5 pour le paquet : php56-2.3-1.el7.remi.x86_64 --> Traitement de la dépendance : php56-php-common(x86-64) >= 5.6.31 pour le paquet : php56-2.3-1.el7.remi.x86_64 --> Traitement de la dépendance : php56-runtime pour le paquet : php56-2.3-1.el7.remi.x86_64 --> Traitement de la dépendance : php56-php-cli(x86-64) pour le paquet : php56-2.3-1.el7.remi.x86_64 --> Lancement de la transaction de test ---> Le paquet php56-php-cli.x86_64 0:5.6.35-1.el7.remi sera installé --> Traitement de la dépendance : libcrypto.so.10(OPENSSL_1.0.2)(64bit) pour le paquet : php56-php-cli-5.6.35-1.el7.remi.x86_64 ---> Le paquet php56-php-common.x86_64 0:5.6.35-1.el7.remi sera installé --> Traitement de la dépendance : php56-php-pecl-zip(x86-64) pour le paquet : php56-php-common-5.6.35-1.el7.remi.x86_64 --> Traitement de la dépendance : php56-php-pecl-jsonc(x86-64) pour le paquet : php56-php-common-5.6.35-1.el7.remi.x86_64 ---> Le paquet php56-php-pear.noarch 1:1.10.5-5.el7.remi sera installé --> Traitement de la dépendance : php56-php-xml pour le paquet : 1:php56-php-pear-1.10.5-5.el7.remi.noarch --> Traitement de la dépendance : php56-php-posix pour le paquet : 1:php56-php-pear-1.10.5-5.el7.remi.noarch ---> Le paquet php56-runtime.x86_64 0:2.3-1.el7.remi sera installé --> Traitement de la dépendance : environment-modules pour le paquet : php56-runtime-2.3-1.el7.remi.x86_64 --> Traitement de la dépendance : /usr/sbin/semanage pour le paquet : php56-runtime-2.3-1.el7.remi.x86_64 --> Lancement de la transaction de test ---> Le paquet environment-modules.x86_64 0:3.2.10-0.el7.remi sera installé --> Traitement de la dépendance : libtcl8.5.so()(64bit) pour le paquet : environment-modules-3.2.10-0.el7.remi.x86_64 ---> Le paquet php56-php-cli.x86_64 0:5.6.35-1.el7.remi sera installé --> Traitement de la dépendance : libcrypto.so.10(OPENSSL_1.0.2)(64bit) pour le paquet : php56-php-cli-5.6.35-1.el7.remi.x86_64 ---> Le paquet php56-php-pecl-jsonc.x86_64 0:1.3.10-1.el7.remi sera installé ---> Le paquet php56-php-pecl-zip.x86_64 0:1.15.2-1.el7.remi sera installé ---> Le paquet php56-php-process.x86_64 0:5.6.35-1.el7.remi sera installé ---> Le paquet php56-php-xml.x86_64 0:5.6.35-1.el7.remi sera installé ---> Le paquet php56-runtime.x86_64 0:2.3-1.el7.remi sera installé --> Traitement de la dépendance : /usr/sbin/semanage pour le paquet : php56-runtime-2.3-1.el7.remi.x86_64 --> Traitement de la dépendance : /usr/sbin/semanage pour le paquet : php56-runtime-2.3-1.el7.remi.x86_64 --> Résolution des dépendances terminée Erreur : Paquet : php56-runtime-2.3-1.el7.remi.x86_64 (remi) Requiert : /usr/sbin/semanage Erreur : Paquet : php56-php-cli-5.6.35-1.el7.remi.x86_64 (remi) Requiert : libcrypto.so.10(OPENSSL_1.0.2)(64bit) Erreur : Paquet : environment-modules-3.2.10-0.el7.remi.x86_64 (remi) Requiert : libtcl8.5.so()(64bit) Vous pouvez essayer d'utiliser --skip-broken pour contourner le problème Vous pouvez essayer d'exécuter : rpm -Va --nofiles --nodigest
|
php, linux, redhat, centos7, php-5.6
| 2
| 3,564
| 1
|
https://stackoverflow.com/questions/49565158/redhat-7-error-requires-libcrypto-so-10
|
48,889,724
|
Performance difference in Docker images
|
I have a .NET Core 2.0 Console App having different performance results depending on the Docker base image it is running on. The application performs several calls to the String.StartsWith(string) function in .NET. Here is the Program.cs file: using System; using System.Collections.Generic; using System.Diagnostics; using System.Linq; namespace ConsoleApp { class Program { private static string variable = "MYTEXTSTRING"; private static IEnumerable<string> collection = new[] { "FAF", "LEP", "GHI" }; static void Main(string[] args) { int counter = args.Length > 0 ? Int32.Parse(args[0]) : 1000; var stopwatch = new Stopwatch(); stopwatch.Start(); for (int i = 0; i < counter; i++) { foreach (string str in collection) { variable.StartsWith(str); } } stopwatch.Stop(); Console.WriteLine($"Elapsed time: '{stopwatch.ElapsedMilliseconds}ms' - {counter * collection.Count()} calls to string.StartsWith()"); Console.ReadLine(); } } } This code then runs in a Docker container in a Linux Ubuntu VM. Depending on the base image I use, I see very different performance results. Here is the Docker file using a Red Hat base image: # Red Hat base image FROM registry.access.redhat.com/dotnet/dotnet-20-rhel7 # set the working directory WORKDIR /app # copy files ADD . /app # run Model.Build CMD ["dotnet", "ConsoleApp.dll", "20000"] Here is the Docker file using a Linux Debian base image: # Docker Hub base image FROM microsoft/dotnet:2.0.5-runtime-jessie # set the working directory WORKDIR /app # copy files ADD . /app # run Model.Build CMD ["dotnet", "ConsoleApp.dll", "20000"] As you can see, apart from the base image, the two Dockerfiles are actually identical. Here are the performance results I get: Red Hat base image: "Elapsed time: '540ms' - 60000 calls to string.StartsWith()". Docker Hub base image: "Elapsed time: '15ms' - 60000 calls to string.StartsWith()". Native execution: "Elapsed time: '14ms' - 60000 calls to string.StartsWith()" So, while the container using the Debian base image has performance results very similar to native execution, the container using the Red Hat image is performing a lot slower. Question: why does the StartWith() function perform so differently? What is causing performance to drop this much when using the Red Hat base image? Thanks.
|
Performance difference in Docker images I have a .NET Core 2.0 Console App having different performance results depending on the Docker base image it is running on. The application performs several calls to the String.StartsWith(string) function in .NET. Here is the Program.cs file: using System; using System.Collections.Generic; using System.Diagnostics; using System.Linq; namespace ConsoleApp { class Program { private static string variable = "MYTEXTSTRING"; private static IEnumerable<string> collection = new[] { "FAF", "LEP", "GHI" }; static void Main(string[] args) { int counter = args.Length > 0 ? Int32.Parse(args[0]) : 1000; var stopwatch = new Stopwatch(); stopwatch.Start(); for (int i = 0; i < counter; i++) { foreach (string str in collection) { variable.StartsWith(str); } } stopwatch.Stop(); Console.WriteLine($"Elapsed time: '{stopwatch.ElapsedMilliseconds}ms' - {counter * collection.Count()} calls to string.StartsWith()"); Console.ReadLine(); } } } This code then runs in a Docker container in a Linux Ubuntu VM. Depending on the base image I use, I see very different performance results. Here is the Docker file using a Red Hat base image: # Red Hat base image FROM registry.access.redhat.com/dotnet/dotnet-20-rhel7 # set the working directory WORKDIR /app # copy files ADD . /app # run Model.Build CMD ["dotnet", "ConsoleApp.dll", "20000"] Here is the Docker file using a Linux Debian base image: # Docker Hub base image FROM microsoft/dotnet:2.0.5-runtime-jessie # set the working directory WORKDIR /app # copy files ADD . /app # run Model.Build CMD ["dotnet", "ConsoleApp.dll", "20000"] As you can see, apart from the base image, the two Dockerfiles are actually identical. Here are the performance results I get: Red Hat base image: "Elapsed time: '540ms' - 60000 calls to string.StartsWith()". Docker Hub base image: "Elapsed time: '15ms' - 60000 calls to string.StartsWith()". Native execution: "Elapsed time: '14ms' - 60000 calls to string.StartsWith()" So, while the container using the Debian base image has performance results very similar to native execution, the container using the Red Hat image is performing a lot slower. Question: why does the StartWith() function perform so differently? What is causing performance to drop this much when using the Red Hat base image? Thanks.
|
string, docker, .net-core, redhat, startswith
| 2
| 607
| 1
|
https://stackoverflow.com/questions/48889724/performance-difference-in-docker-images
|
48,122,908
|
How to have oracle imp 11gr2 and 12cr2 on the same machine and just choose the one that I want to use
|
I'm currently developing an application to import oracle dbs. In order to do that I'm using Data Pump and the original imp client (version 12.2.0.1 ). However I cannot use that imp client against an 11gr2 database, I need to use the 11gr2 imp client. I already have the client and libraries that I got from one of my 11gr2 DBs however, if I try to execute it I'm getting the following error: Message 100 not found; No message file for product=RDBMS, facility=IMP: Release 11.2.0.3.0 - Production on Fri Jan 5 18:28:21 2018 Copyright (c) 1982, 2011, Oracl Invalid format of Import utility name Verify that ORACLE_HOME is properly set Import terminated unsuccessfully IMP-00000: Message 0 not found; No message file for product=RDBMS, facility=IMP Can someone point how to have both clients working on the same machine? Thanks in advance. [UPDATE] I'm using Red Hat OS and this is the output of $ORACLE_HOME: /root/oracle/instantclient_12_2 I tried using the full path and placing the files in ORACLE_HOME but I still get the same error. Thanks!!!
|
How to have oracle imp 11gr2 and 12cr2 on the same machine and just choose the one that I want to use I'm currently developing an application to import oracle dbs. In order to do that I'm using Data Pump and the original imp client (version 12.2.0.1 ). However I cannot use that imp client against an 11gr2 database, I need to use the 11gr2 imp client. I already have the client and libraries that I got from one of my 11gr2 DBs however, if I try to execute it I'm getting the following error: Message 100 not found; No message file for product=RDBMS, facility=IMP: Release 11.2.0.3.0 - Production on Fri Jan 5 18:28:21 2018 Copyright (c) 1982, 2011, Oracl Invalid format of Import utility name Verify that ORACLE_HOME is properly set Import terminated unsuccessfully IMP-00000: Message 0 not found; No message file for product=RDBMS, facility=IMP Can someone point how to have both clients working on the same machine? Thanks in advance. [UPDATE] I'm using Red Hat OS and this is the output of $ORACLE_HOME: /root/oracle/instantclient_12_2 I tried using the full path and placing the files in ORACLE_HOME but I still get the same error. Thanks!!!
|
oracle-database, import, redhat, impdp, imp
| 2
| 1,877
| 3
|
https://stackoverflow.com/questions/48122908/how-to-have-oracle-imp-11gr2-and-12cr2-on-the-same-machine-and-just-choose-the-o
|
47,594,704
|
gpg protection algorithm is not supported
|
I have files that are encrypted with gpg. I've created a new server and I exported\imported the public and private key to my new server. The files are now encrypted on the new server. When I try to decrypt a file on the new server I get the following error: gpg: protection algorithm 3 is not supported gpg: encrypted with 4096-bit ELG key, ID 15BBEC7A, created 2012-11-21 "test test (Logs) <test.test@test.ca>" gpg: public key decryption failed: Invalid cipher algorithm gpg: decryption failed: No secret key If I copy the file to my old server I'm still able to decrypt it. I can't find the problem. My first guess is that the cipher used originaly is CATS5 and that it's no longuer supported.
|
gpg protection algorithm is not supported I have files that are encrypted with gpg. I've created a new server and I exported\imported the public and private key to my new server. The files are now encrypted on the new server. When I try to decrypt a file on the new server I get the following error: gpg: protection algorithm 3 is not supported gpg: encrypted with 4096-bit ELG key, ID 15BBEC7A, created 2012-11-21 "test test (Logs) <test.test@test.ca>" gpg: public key decryption failed: Invalid cipher algorithm gpg: decryption failed: No secret key If I copy the file to my old server I'm still able to decrypt it. I can't find the problem. My first guess is that the cipher used originaly is CATS5 and that it's no longuer supported.
|
encryption, redhat, public-key-encryption, gnupg
| 2
| 1,888
| 1
|
https://stackoverflow.com/questions/47594704/gpg-protection-algorithm-is-not-supported
|
47,176,545
|
How to set user password using SHA-512 hash with Puppet?
|
I want to set user password with Puppet. The code: if ($operatingsystemmajrelease == '7') { group { 'zabbix': name => "zabbix", ensure => "present", } user { 'zabbix': name => "zabbix", groups => "zabbix", password => "$6$UdvUfiKs$rb4XFkCn2h/AUZrJsg2wnRDkOH5E5lliJZXqySVEYUDARFSlWKYHOeMLWycTa2jIMa3XQ3MWtq1EiilBZCbKX.", } } produces an error: Error: Could not retrieve catalog from remote server: Error 500 on SERVER: {"message":"Server Error: Illegal variable name, The given name 'UdvUfiKs' does not conform to the naming rule /^((::)?[a-z]\w*) ((::)?[a-z_]\w )$/ at /opt/puppetlabs/environments/Linux_nieprodukcja/modules/zabbix_install_lin/manifests/init.pp:16:20 on node napupp01.corpnet.pl","issue_kind":"RUNTIME_ERROR"} SHA-512 I've generated form shell passwd zabbix after adding user zabbix and copied it to manifest. Why do I get this error?
|
How to set user password using SHA-512 hash with Puppet? I want to set user password with Puppet. The code: if ($operatingsystemmajrelease == '7') { group { 'zabbix': name => "zabbix", ensure => "present", } user { 'zabbix': name => "zabbix", groups => "zabbix", password => "$6$UdvUfiKs$rb4XFkCn2h/AUZrJsg2wnRDkOH5E5lliJZXqySVEYUDARFSlWKYHOeMLWycTa2jIMa3XQ3MWtq1EiilBZCbKX.", } } produces an error: Error: Could not retrieve catalog from remote server: Error 500 on SERVER: {"message":"Server Error: Illegal variable name, The given name 'UdvUfiKs' does not conform to the naming rule /^((::)?[a-z]\w*) ((::)?[a-z_]\w )$/ at /opt/puppetlabs/environments/Linux_nieprodukcja/modules/zabbix_install_lin/manifests/init.pp:16:20 on node napupp01.corpnet.pl","issue_kind":"RUNTIME_ERROR"} SHA-512 I've generated form shell passwd zabbix after adding user zabbix and copied it to manifest. Why do I get this error?
|
linux, puppet, redhat
| 2
| 1,341
| 1
|
https://stackoverflow.com/questions/47176545/how-to-set-user-password-using-sha-512-hash-with-puppet
|
39,620,396
|
Connection Issue on Puppet
|
When I do puppet agent -t on the agent, I am seeing the following. It happened recently all of a sudden. Few things to mention: 1. The Puppet master and agents are all up and running. 2. The certificate is successfully signed. Puppet master version 4.3.1 Puppet agent version 3.8.4 OS RedHat, 6 on master, 7 on some agents. Warning: Unable to fetch my node definition, but the agent run will continue: Warning: Connection refused - connect(2) Info: Retrieving pluginfacts Error: /File[/var/lib/puppet/facts.d]: Failed to generate additional resources using 'eval_generate': Connection refused - connect(2) Error: /File[/var/lib/puppet/facts.d]: Could not evaluate: Could not retrieve file metadata for puppet://vengcjn501.mmm.com/pluginfacts: Connection refused - connect(2) Info: Retrieving plugin Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate': Connection refused - connect(2) Error: /File[/var/lib/puppet/lib]: Could not evaluate: Could not retrieve file metadata for puppet://vengcjn501.mmm.com/plugins: Connection refused - connect(2) Info: Loading facts Error: Could not retrieve catalog from remote server: Connection refused - connect(2) Warning: Not using cache on failed catalog Error: Could not retrieve catalog; skipping run Error: Could not send report: Connection refused - connect(2)
|
Connection Issue on Puppet When I do puppet agent -t on the agent, I am seeing the following. It happened recently all of a sudden. Few things to mention: 1. The Puppet master and agents are all up and running. 2. The certificate is successfully signed. Puppet master version 4.3.1 Puppet agent version 3.8.4 OS RedHat, 6 on master, 7 on some agents. Warning: Unable to fetch my node definition, but the agent run will continue: Warning: Connection refused - connect(2) Info: Retrieving pluginfacts Error: /File[/var/lib/puppet/facts.d]: Failed to generate additional resources using 'eval_generate': Connection refused - connect(2) Error: /File[/var/lib/puppet/facts.d]: Could not evaluate: Could not retrieve file metadata for puppet://vengcjn501.mmm.com/pluginfacts: Connection refused - connect(2) Info: Retrieving plugin Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate': Connection refused - connect(2) Error: /File[/var/lib/puppet/lib]: Could not evaluate: Could not retrieve file metadata for puppet://vengcjn501.mmm.com/plugins: Connection refused - connect(2) Info: Loading facts Error: Could not retrieve catalog from remote server: Connection refused - connect(2) Warning: Not using cache on failed catalog Error: Could not retrieve catalog; skipping run Error: Could not send report: Connection refused - connect(2)
|
puppet, redhat
| 2
| 5,922
| 1
|
https://stackoverflow.com/questions/39620396/connection-issue-on-puppet
|
39,215,988
|
Linux script to monitor remote port and launch script if not successful
|
RHEL 7.1 is the OS this will be used on. I have two servers which are identical (A and B). Server B needs to monitor a port on Server A and if it's down for 30 seconds, launch a script. I read netcat was replaced with ncat on RHEL 7 so this is what I have so far: #!/bin/bash Server=10.0.0.1 Port=123 ncat $Server $Port &> /dev/null; echo $? If the port is up, the output is 0. If the port is down, the output is 1. I'm just not sure on how to do the next part which would be "if down for 30 seconds, then launch x script" Any help would be appreciated. Thanks in advance.
|
Linux script to monitor remote port and launch script if not successful RHEL 7.1 is the OS this will be used on. I have two servers which are identical (A and B). Server B needs to monitor a port on Server A and if it's down for 30 seconds, launch a script. I read netcat was replaced with ncat on RHEL 7 so this is what I have so far: #!/bin/bash Server=10.0.0.1 Port=123 ncat $Server $Port &> /dev/null; echo $? If the port is up, the output is 0. If the port is down, the output is 1. I'm just not sure on how to do the next part which would be "if down for 30 seconds, then launch x script" Any help would be appreciated. Thanks in advance.
|
linux, bash, redhat, rhel
| 2
| 1,763
| 3
|
https://stackoverflow.com/questions/39215988/linux-script-to-monitor-remote-port-and-launch-script-if-not-successful
|
38,652,588
|
If I have a local rpm in my ansible-playbook can I do yum install in one step?
|
I have downloaded a rpm in my ansible-playbook: (djangoenv)~/P/c/apache-installer ❯❯❯ tree . . ├── defaults │ └── main.yml ├── files │ ├── apache2latest.tar │ ├── httpd_final.conf │ ├── httpd_temp.conf │ └── sshpass-1.05-9.1.i686.rpm ├── handlers │ └── main.yml ├── hosts ├── meta │ └── main.yml ├── README.md ├── tasks │ └── main.yml ├── templates ├── tests │ ├── inventory │ └── test.yml └── vars └── main.yml My question is why can't I just install it using: - yum: name=files/sshpass-1.05-9.1.i686.rpm ? It complains that files/sshpass-1.05-9.1.i686.rpm is not found in the system. Now I am doing it in two steps: - copy: src=files/sshpass-1.05-9.1.i686.rpm dest=/tmp/sshpass-1.05-9.1.i686.rpm force=no - yum: name=/tmp/sshpass-1.05-9.1.i686.rpm state=present
|
If I have a local rpm in my ansible-playbook can I do yum install in one step? I have downloaded a rpm in my ansible-playbook: (djangoenv)~/P/c/apache-installer ❯❯❯ tree . . ├── defaults │ └── main.yml ├── files │ ├── apache2latest.tar │ ├── httpd_final.conf │ ├── httpd_temp.conf │ └── sshpass-1.05-9.1.i686.rpm ├── handlers │ └── main.yml ├── hosts ├── meta │ └── main.yml ├── README.md ├── tasks │ └── main.yml ├── templates ├── tests │ ├── inventory │ └── test.yml └── vars └── main.yml My question is why can't I just install it using: - yum: name=files/sshpass-1.05-9.1.i686.rpm ? It complains that files/sshpass-1.05-9.1.i686.rpm is not found in the system. Now I am doing it in two steps: - copy: src=files/sshpass-1.05-9.1.i686.rpm dest=/tmp/sshpass-1.05-9.1.i686.rpm force=no - yum: name=/tmp/sshpass-1.05-9.1.i686.rpm state=present
|
redhat, ansible, ansible-2.x
| 2
| 5,042
| 1
|
https://stackoverflow.com/questions/38652588/if-i-have-a-local-rpm-in-my-ansible-playbook-can-i-do-yum-install-in-one-step
|
36,642,002
|
Is it possible to install a Fedora package on RedHat Linux Server?
|
I would like to install KDevelop on RedHat Linux Server v.7. Although there is no an appropriate RPM for RH Server, such an RPM does exist for Fedora. Since both Fedora and RH Server have the same code base and are supported by the same company, is it possible to install a Fedora RPM on RH Server? If yes, how can I do that?
|
Is it possible to install a Fedora package on RedHat Linux Server? I would like to install KDevelop on RedHat Linux Server v.7. Although there is no an appropriate RPM for RH Server, such an RPM does exist for Fedora. Since both Fedora and RH Server have the same code base and are supported by the same company, is it possible to install a Fedora RPM on RH Server? If yes, how can I do that?
|
linux, redhat, fedora, kdevelop
| 2
| 3,599
| 1
|
https://stackoverflow.com/questions/36642002/is-it-possible-to-install-a-fedora-package-on-redhat-linux-server
|
35,485,372
|
How do I run a puppet exec if a condition met?
|
Just to give some details - building on AWS, using Puppet to handle DSC, and have bootstrapped Puppet so that it provisions, and installs Puppet on the newly provisioned node. I've been working with Puppet for a little amount of time now, and I find myself wanting to write a module that only executes on creation of a vm. My particular use case is that I want to install antivirus (specifically, Trend Micro Deep Security) onto a newly provisioned node, automagically via Puppet. The script to run this only requires a download, a run, and a couple of TMDS specific commands to activate itself etc. If I use Puppet, it will do this on every run (download, try to install, try to activate) which is definitely not what I want. However, I don't think that Puppet 'knows' about Trend Micro, or how to get it, or the URL etc. So I can't use something like: service { "trend micro": ensure => running, ensure => present, } Doing some research, and looking at blog posts , I know that the structure of my code should be something along the lines of (I know it's not correct): exec {'function_name': # the script that downloads/installs/activates etc. command => '/path/to/script.sh', onlyif => systemctl service_trendmicro, # which system should return 0 or 1 for pass/fail - I only want it to exec on 0 ofc. } My question is therefore: How do I put this together?
|
How do I run a puppet exec if a condition met? Just to give some details - building on AWS, using Puppet to handle DSC, and have bootstrapped Puppet so that it provisions, and installs Puppet on the newly provisioned node. I've been working with Puppet for a little amount of time now, and I find myself wanting to write a module that only executes on creation of a vm. My particular use case is that I want to install antivirus (specifically, Trend Micro Deep Security) onto a newly provisioned node, automagically via Puppet. The script to run this only requires a download, a run, and a couple of TMDS specific commands to activate itself etc. If I use Puppet, it will do this on every run (download, try to install, try to activate) which is definitely not what I want. However, I don't think that Puppet 'knows' about Trend Micro, or how to get it, or the URL etc. So I can't use something like: service { "trend micro": ensure => running, ensure => present, } Doing some research, and looking at blog posts , I know that the structure of my code should be something along the lines of (I know it's not correct): exec {'function_name': # the script that downloads/installs/activates etc. command => '/path/to/script.sh', onlyif => systemctl service_trendmicro, # which system should return 0 or 1 for pass/fail - I only want it to exec on 0 ofc. } My question is therefore: How do I put this together?
|
exec, puppet, redhat, trend
| 2
| 2,814
| 2
|
https://stackoverflow.com/questions/35485372/how-do-i-run-a-puppet-exec-if-a-condition-met
|
34,853,858
|
vim highlighting everything after <<< in red
|
I'm working on a KSH spcript on RedHat using VIm. Everything after the "<<<" in the following loop is highlighted in red rather than the usual colours. while read line; do #echo $line # TEST ONLY read linetype lettertype f3 group f4 <<< "${line}" if [[ $linetype == "let" ]]; then group_arr["$group"]=1 fi done < "/production/control/pref_file.ini" This is about a quater of the way through a very long script, so I would really like to not have 2000 lines of red if possible! How can I fix it? Looking at this perhaps it isn't recognising the end of the line with the "<<<". Is there a way to force that? Thanks, Ger
|
vim highlighting everything after <<< in red I'm working on a KSH spcript on RedHat using VIm. Everything after the "<<<" in the following loop is highlighted in red rather than the usual colours. while read line; do #echo $line # TEST ONLY read linetype lettertype f3 group f4 <<< "${line}" if [[ $linetype == "let" ]]; then group_arr["$group"]=1 fi done < "/production/control/pref_file.ini" This is about a quater of the way through a very long script, so I would really like to not have 2000 lines of red if possible! How can I fix it? Looking at this perhaps it isn't recognising the end of the line with the "<<<". Is there a way to force that? Thanks, Ger
|
unix, vim, redhat, ksh
| 2
| 509
| 1
|
https://stackoverflow.com/questions/34853858/vim-highlighting-everything-after-in-red
|
32,117,489
|
yum install activemq activemq-client - no package activemq available
|
I am trying to install activemq and activemq-client on RHEL 6.6 with yum. But after typing yum install activemq activemq-client the machine just saying No package activemq available No package activemq-client available Nothing to do. Is there a way to get yum working for this ? Thanks for your help.
|
yum install activemq activemq-client - no package activemq available I am trying to install activemq and activemq-client on RHEL 6.6 with yum. But after typing yum install activemq activemq-client the machine just saying No package activemq available No package activemq-client available Nothing to do. Is there a way to get yum working for this ? Thanks for your help.
|
activemq-classic, redhat, yum
| 2
| 1,539
| 1
|
https://stackoverflow.com/questions/32117489/yum-install-activemq-activemq-client-no-package-activemq-available
|
30,097,627
|
can't install using yum in RHEL 7.1
|
I got a RHEL 7.1 instance on amazon aws, now i am trying to install softwares using yum, but even very common softwares aren't available. For example, $ sudo yum install lynx Loaded plugins: amazon-id, rhui-lb No package lynx available. Error: Nothing to do I am new to linux and yum . What's to be done so I can install softwares easily using yum . Should I be adding repos? Here, I tried doing what's said here -> Top 5 Yum Repositories for CentOS/RHEL 7/6/5 and Fedora , and here -> Install RepoForge (RPMForge) Repository On RHEL, CentOS, Scientific Linux 7/6.x/5.x/4.x but to no use. Appreciate any help.
|
can't install using yum in RHEL 7.1 I got a RHEL 7.1 instance on amazon aws, now i am trying to install softwares using yum, but even very common softwares aren't available. For example, $ sudo yum install lynx Loaded plugins: amazon-id, rhui-lb No package lynx available. Error: Nothing to do I am new to linux and yum . What's to be done so I can install softwares easily using yum . Should I be adding repos? Here, I tried doing what's said here -> Top 5 Yum Repositories for CentOS/RHEL 7/6/5 and Fedora , and here -> Install RepoForge (RPMForge) Repository On RHEL, CentOS, Scientific Linux 7/6.x/5.x/4.x but to no use. Appreciate any help.
|
linux, redhat, yum, rhel, rhel7
| 2
| 6,273
| 2
|
https://stackoverflow.com/questions/30097627/cant-install-using-yum-in-rhel-7-1
|
29,350,334
|
How to increase maximum open file limit in Red Hat Enterprise Linux 5?
|
As the title says. I've found this question: How to increase Neo4j's maximum file open limit (ulimit) in Ubuntu? But I don't even has this file: /etc/init.d/neo4j-service, I'm guessing it's because I'm using RHEL5, not Debian, as the responder was using. Then I've added both two lines: root soft nofile 40000 root hard nofile 40000 into my /etc/security/limits.conf Then after logging out and logging in again, $ulimit -Sn and $ulimit -Hn still returns 1024, Also, I don't even has this file: /etc/pam.d/common-session under pam.d directory. Should I create this file myself and just one that one line in here? I don't think this should be the way out. Any ideas please? Thanks
|
How to increase maximum open file limit in Red Hat Enterprise Linux 5? As the title says. I've found this question: How to increase Neo4j's maximum file open limit (ulimit) in Ubuntu? But I don't even has this file: /etc/init.d/neo4j-service, I'm guessing it's because I'm using RHEL5, not Debian, as the responder was using. Then I've added both two lines: root soft nofile 40000 root hard nofile 40000 into my /etc/security/limits.conf Then after logging out and logging in again, $ulimit -Sn and $ulimit -Hn still returns 1024, Also, I don't even has this file: /etc/pam.d/common-session under pam.d directory. Should I create this file myself and just one that one line in here? I don't think this should be the way out. Any ideas please? Thanks
|
linux, redhat
| 2
| 6,966
| 1
|
https://stackoverflow.com/questions/29350334/how-to-increase-maximum-open-file-limit-in-red-hat-enterprise-linux-5
|
28,727,980
|
/usr/bin/find Argument list too long
|
Trying to search for all files that are excluded from list.txt using the following from the command line find . -type f -name "*.ext" $(printf "! -name %s " $(cat list.txt)) I get the following result -bash: /usr/bin/find: Argument list too long I have also tried with xargs but not sure if I am using it correctly. Any help would be appreciated.
|
/usr/bin/find Argument list too long Trying to search for all files that are excluded from list.txt using the following from the command line find . -type f -name "*.ext" $(printf "! -name %s " $(cat list.txt)) I get the following result -bash: /usr/bin/find: Argument list too long I have also tried with xargs but not sure if I am using it correctly. Any help would be appreciated.
|
linux, shell, find, redhat, xargs
| 2
| 12,504
| 3
|
https://stackoverflow.com/questions/28727980/usr-bin-find-argument-list-too-long
|
28,246,642
|
Make errors - ec_asn1.o - ec_asn1.c:201: error: expected expression before 'X9_62_PENTANOMIAL'
|
I am trying to install a separate version of Openssl 1.0.1k on Red Hat. I tried on Centos first with no real issues. Before getting to this error I did the following: yum install -y libxml2 libxml2-devel libxslt libxslt-devel #not sure if this actually helped with my errors. ./config --prefix=/data/home/jboss/openssl_1.0.1/usr \ --openssldir=/data/home/jboss/openssl_1.0.1/etc/ssl # missing include files. vi ~/.bash_profile C_INCLUDE_PATH=/usr/lib/bcc/include/ CPLUS_INCLUDE_PATH=/usr/lib/bcc/include/ export C_INCLUDE_PATH export CPLUS_INCLUDE_PATH LANG=en_US # make was giving accented a special characters. This fixed. I would set it back once I fixed to en_US.UTF-8. vi /usr/lib/bcc/include/asm/limits.h define INT_MAX 2147483647 After all of that, I am getting the following: .... more on top make[2]: Entering directory /data01/home/s617741/openssl-1.0.1k/crypto/bn' make[2]: Nothing to be done for all'. make[2]: Leaving directory /data01/home/s617741/openssl-1.0.1k/crypto/bn' making all in crypto/ec... make[2]: Entering directory /data01/home/s617741/openssl-1.0.1k/crypto/ec' gcc -I.. -I../.. -I../modes -I../asn1 -I../evp -I../../include -DOPENSSL_THREADS -D_REENTRANT -DDSO_DLFCN -DHAVE_DLFCN_H -Wa,--noexecstack -m64 -DL_ENDIAN -DTERMIO -O3 -Wall -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DMD5_ASM -DAES_ASM -DVPAES_ASM -DBSAES_ASM -DWHIRLPOOL_ASM -DGHASH_ASM -c -o ec_asn1.o ec_asn1.c ec_asn1.c:201: warning: implicit declaration of function 'offsetof' ec_asn1.c:201: error: expected expression before 'X9_62_PENTANOMIAL' ec_asn1.c:201: error: initializer element is not constant ec_asn1.c:201: error: (near initialization for 'X9_62_PENTANOMIAL_seq_tt[0].offset') ec_asn1.c:202: error: expected expression before 'X9_62_PENTANOMIAL' ec_asn1.c:202: error: initializer element is not constant ec_asn1.c:202: error: (near initialization for 'X9_62_PENTANOMIAL_seq_tt[1].offset') ec_asn1.c:203: error: expected expression before 'X9_62_PENTANOMIAL' ec_asn1.c:203: error: initializer element is not constant ec_asn1.c:203: error: (near initialization for 'X9_62_PENTANOMIAL_seq_tt[2].offset') ... continues with similar errors. Any insight would help.
|
Make errors - ec_asn1.o - ec_asn1.c:201: error: expected expression before 'X9_62_PENTANOMIAL' I am trying to install a separate version of Openssl 1.0.1k on Red Hat. I tried on Centos first with no real issues. Before getting to this error I did the following: yum install -y libxml2 libxml2-devel libxslt libxslt-devel #not sure if this actually helped with my errors. ./config --prefix=/data/home/jboss/openssl_1.0.1/usr \ --openssldir=/data/home/jboss/openssl_1.0.1/etc/ssl # missing include files. vi ~/.bash_profile C_INCLUDE_PATH=/usr/lib/bcc/include/ CPLUS_INCLUDE_PATH=/usr/lib/bcc/include/ export C_INCLUDE_PATH export CPLUS_INCLUDE_PATH LANG=en_US # make was giving accented a special characters. This fixed. I would set it back once I fixed to en_US.UTF-8. vi /usr/lib/bcc/include/asm/limits.h define INT_MAX 2147483647 After all of that, I am getting the following: .... more on top make[2]: Entering directory /data01/home/s617741/openssl-1.0.1k/crypto/bn' make[2]: Nothing to be done for all'. make[2]: Leaving directory /data01/home/s617741/openssl-1.0.1k/crypto/bn' making all in crypto/ec... make[2]: Entering directory /data01/home/s617741/openssl-1.0.1k/crypto/ec' gcc -I.. -I../.. -I../modes -I../asn1 -I../evp -I../../include -DOPENSSL_THREADS -D_REENTRANT -DDSO_DLFCN -DHAVE_DLFCN_H -Wa,--noexecstack -m64 -DL_ENDIAN -DTERMIO -O3 -Wall -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DMD5_ASM -DAES_ASM -DVPAES_ASM -DBSAES_ASM -DWHIRLPOOL_ASM -DGHASH_ASM -c -o ec_asn1.o ec_asn1.c ec_asn1.c:201: warning: implicit declaration of function 'offsetof' ec_asn1.c:201: error: expected expression before 'X9_62_PENTANOMIAL' ec_asn1.c:201: error: initializer element is not constant ec_asn1.c:201: error: (near initialization for 'X9_62_PENTANOMIAL_seq_tt[0].offset') ec_asn1.c:202: error: expected expression before 'X9_62_PENTANOMIAL' ec_asn1.c:202: error: initializer element is not constant ec_asn1.c:202: error: (near initialization for 'X9_62_PENTANOMIAL_seq_tt[1].offset') ec_asn1.c:203: error: expected expression before 'X9_62_PENTANOMIAL' ec_asn1.c:203: error: initializer element is not constant ec_asn1.c:203: error: (near initialization for 'X9_62_PENTANOMIAL_seq_tt[2].offset') ... continues with similar errors. Any insight would help.
|
linux, redhat, openssl
| 2
| 390
| 1
|
https://stackoverflow.com/questions/28246642/make-errors-ec-asn1-o-ec-asn1-c201-error-expected-expression-before-x9-6
|
27,376,606
|
How debug akka association porcess?
|
Here is a scenario: I have packaged scala project with spray into jar file. Launch jar file on RedHat 6.5 on Virtual Box (ip - 192.168.1. 38 ) Launch jar file on RedHat 6.5 on Virtual Box (ip - 192.168.1. 41 ) Everything works locally - I can send REST request to each virtual machine and get response. Problem Akka systems can not became to cluster. I run 192.168.1. 38 with default settings, but 192.168.1. 41 have an additional property - akka.cluster.seed-nodes which is set to akka.tcp://mySystem@192.168.1.38:2551 . So I get: [WARN] [12/09/2014 17:10:24.043] [mySystem-akka.remote.default-remote-dispatcher-8] [akka.tcp://mySystem@192.168.1.41:2551/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2FmySystem%40192.168.1.38%3A2551-0] Association with remote system [akka.tcp://mySystem@192.168.1.38:2551] has failed, address is now gated for [5000] ms. Reason is: [Association failed with [akka.tcp://mySystem@192.168.1.38:2551]]. No other errors or warning. Also how can I test akka association or print debug akka association settings? Also can linux settings influence to akka association?
|
How debug akka association porcess? Here is a scenario: I have packaged scala project with spray into jar file. Launch jar file on RedHat 6.5 on Virtual Box (ip - 192.168.1. 38 ) Launch jar file on RedHat 6.5 on Virtual Box (ip - 192.168.1. 41 ) Everything works locally - I can send REST request to each virtual machine and get response. Problem Akka systems can not became to cluster. I run 192.168.1. 38 with default settings, but 192.168.1. 41 have an additional property - akka.cluster.seed-nodes which is set to akka.tcp://mySystem@192.168.1.38:2551 . So I get: [WARN] [12/09/2014 17:10:24.043] [mySystem-akka.remote.default-remote-dispatcher-8] [akka.tcp://mySystem@192.168.1.41:2551/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2FmySystem%40192.168.1.38%3A2551-0] Association with remote system [akka.tcp://mySystem@192.168.1.38:2551] has failed, address is now gated for [5000] ms. Reason is: [Association failed with [akka.tcp://mySystem@192.168.1.38:2551]]. No other errors or warning. Also how can I test akka association or print debug akka association settings? Also can linux settings influence to akka association?
|
scala, akka, virtualbox, redhat
| 2
| 2,535
| 1
|
https://stackoverflow.com/questions/27376606/how-debug-akka-association-porcess
|
25,908,298
|
wkhtmltopdf custom font letter spacing
|
I'm running wkhtmltopdf on linux server (centos.10.x86_64). I'm trying to add "Times New Roman" font to the page. I see the fonts but on some font sizes it adds spaces between the letters . I tried setting the font by installing it on the machine (ttf) or by calling an external odf that I converted from the ttf or by adding it with base64 (css). It looks good on all, but it inserts spaces between the laters. I also tried to the dpi parameter but still the spaces are generated. Generating the same pdf over MAC works perfectly (probably because the font comes with the machine) Why does it happen and how can it be fixed ? Thanks. The image attached describes the bug. No spaces added in each of the fonts group. The following the image text abcdefg hijklmno pqrstuvwxyz ABCDEFGHIJKLMNOPQRSTUVWXYZ
|
wkhtmltopdf custom font letter spacing I'm running wkhtmltopdf on linux server (centos.10.x86_64). I'm trying to add "Times New Roman" font to the page. I see the fonts but on some font sizes it adds spaces between the letters . I tried setting the font by installing it on the machine (ttf) or by calling an external odf that I converted from the ttf or by adding it with base64 (css). It looks good on all, but it inserts spaces between the laters. I also tried to the dpi parameter but still the spaces are generated. Generating the same pdf over MAC works perfectly (probably because the font comes with the machine) Why does it happen and how can it be fixed ? Thanks. The image attached describes the bug. No spaces added in each of the fonts group. The following the image text abcdefg hijklmno pqrstuvwxyz ABCDEFGHIJKLMNOPQRSTUVWXYZ
|
pdf, fonts, redhat, wkhtmltopdf
| 2
| 2,686
| 1
|
https://stackoverflow.com/questions/25908298/wkhtmltopdf-custom-font-letter-spacing
|
25,667,373
|
Variable substitution in tcsh
|
I've gone through the manual for the tcsh but still can't figure out how it should work in my case or whether it should work at all. I basically need to extract part of the variable whose value is a six digit number. So I need to drop the first two characters and retrieve the last four. The example below doesn't work (it would probably work in bash but tcsh HAS to be used): set VAR1 = value1 set VAR2 = echo ${VAR1:2} echo VAR2 It comes up with error Bad : modifier in $ (2) , apparently because it's bash syntax and not understandable by tcsh, but can't figure out how to do it with tcsh arguments.
|
Variable substitution in tcsh I've gone through the manual for the tcsh but still can't figure out how it should work in my case or whether it should work at all. I basically need to extract part of the variable whose value is a six digit number. So I need to drop the first two characters and retrieve the last four. The example below doesn't work (it would probably work in bash but tcsh HAS to be used): set VAR1 = value1 set VAR2 = echo ${VAR1:2} echo VAR2 It comes up with error Bad : modifier in $ (2) , apparently because it's bash syntax and not understandable by tcsh, but can't figure out how to do it with tcsh arguments.
|
redhat, tcsh
| 2
| 1,518
| 3
|
https://stackoverflow.com/questions/25667373/variable-substitution-in-tcsh
|
25,078,616
|
Redhat "httpd" can not start anymore. Showing "suEXEC" and "SELinux" notices
|
I'm on RHEL 6.5 and Apache 2.2.15 . When i now restart the HTTPD, i can not start that httpd anymore. Showing following things in the /var/log/httpd/error_log : [Fri Aug 01 18:31:48 2014] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) [Fri Aug 01 18:32:35 2014] [notice] SELinux policy enabled; httpd running as context unconfined_u:system_r:httpd_t:s0 [Fri Aug 01 18:32:35 2014] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) [Fri Aug 01 18:42:46 2014] [notice] SELinux policy enabled; httpd running as context system_u:system_r:httpd_t:s0 [Fri Aug 01 18:42:46 2014] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) [Fri Aug 01 18:43:15 2014] [notice] SELinux policy enabled; httpd running as context unconfined_u:system_r:httpd_t:s0 [Fri Aug 01 18:43:15 2014] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) [Fri Aug 01 18:43:59 2014] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) [Fri Aug 01 18:44:12 2014] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) [Fri Aug 01 18:45:03 2014] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) Actually i have already disabled the SELinux. What should i do please?
|
Redhat "httpd" can not start anymore. Showing "suEXEC" and "SELinux" notices I'm on RHEL 6.5 and Apache 2.2.15 . When i now restart the HTTPD, i can not start that httpd anymore. Showing following things in the /var/log/httpd/error_log : [Fri Aug 01 18:31:48 2014] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) [Fri Aug 01 18:32:35 2014] [notice] SELinux policy enabled; httpd running as context unconfined_u:system_r:httpd_t:s0 [Fri Aug 01 18:32:35 2014] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) [Fri Aug 01 18:42:46 2014] [notice] SELinux policy enabled; httpd running as context system_u:system_r:httpd_t:s0 [Fri Aug 01 18:42:46 2014] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) [Fri Aug 01 18:43:15 2014] [notice] SELinux policy enabled; httpd running as context unconfined_u:system_r:httpd_t:s0 [Fri Aug 01 18:43:15 2014] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) [Fri Aug 01 18:43:59 2014] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) [Fri Aug 01 18:44:12 2014] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) [Fri Aug 01 18:45:03 2014] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) Actually i have already disabled the SELinux. What should i do please?
|
apache, redhat, rhel, selinux, suexec
| 2
| 8,800
| 1
|
https://stackoverflow.com/questions/25078616/redhat-httpd-can-not-start-anymore-showing-suexec-and-selinux-notices
|
24,062,375
|
How to build the rpm package with SHA-256 checksum for files?
|
In standard alone RHEL 6.4 rpm build environment, the rpm packages is generated with SHA-256 check sum, which is gotten by command rpm -qp --dump xxx.rpm [user@redhat64 abc]$ rpm -qp --dump package/rpm/abc-1.0.1-1.x86_64.rpm .. /opt/company/abc/abc/1.0.1-1/bin/start.sh 507 1398338016 d8820685b6446ee36a85cc1f7387d14537d6f8bf5ce4c5a4ccd2f70e9066c859 0100750 user abcc 0 .. While if it is build in docker environment (still RHEL6.4) the checksum is md5 [user@c1cbdf51d189 abc]$ rpm -qp --dump package/rpm/abc-1.0.1-1.x86_64.rpm .. /opt/company/abc/abc/1.0.1-1/bin/start.sh 507 1401952578 f229759944ba77c3c8ba2982c55bbe70 0100750 user abcc 0 .. If I checked the real file, the file is the same [user@c1cbdf51d189 1.0.1-1]$ sha256sum bin/start.sh d8820685b6446ee36a85cc1f7387d14537d6f8bf5ce4c5a4ccd2f70e9066c859 bin/start.sh [user@c1cbdf51d189 1.0.1-1]$ md5sum bin/start.sh f229759944ba77c3c8ba2982c55bbe70 bin/start.sh How I configure rpmbuild to let generated rpm file is SHA-256 based ?
|
How to build the rpm package with SHA-256 checksum for files? In standard alone RHEL 6.4 rpm build environment, the rpm packages is generated with SHA-256 check sum, which is gotten by command rpm -qp --dump xxx.rpm [user@redhat64 abc]$ rpm -qp --dump package/rpm/abc-1.0.1-1.x86_64.rpm .. /opt/company/abc/abc/1.0.1-1/bin/start.sh 507 1398338016 d8820685b6446ee36a85cc1f7387d14537d6f8bf5ce4c5a4ccd2f70e9066c859 0100750 user abcc 0 .. While if it is build in docker environment (still RHEL6.4) the checksum is md5 [user@c1cbdf51d189 abc]$ rpm -qp --dump package/rpm/abc-1.0.1-1.x86_64.rpm .. /opt/company/abc/abc/1.0.1-1/bin/start.sh 507 1401952578 f229759944ba77c3c8ba2982c55bbe70 0100750 user abcc 0 .. If I checked the real file, the file is the same [user@c1cbdf51d189 1.0.1-1]$ sha256sum bin/start.sh d8820685b6446ee36a85cc1f7387d14537d6f8bf5ce4c5a4ccd2f70e9066c859 bin/start.sh [user@c1cbdf51d189 1.0.1-1]$ md5sum bin/start.sh f229759944ba77c3c8ba2982c55bbe70 bin/start.sh How I configure rpmbuild to let generated rpm file is SHA-256 based ?
|
redhat, rpm, docker, rpmbuild
| 2
| 5,963
| 1
|
https://stackoverflow.com/questions/24062375/how-to-build-the-rpm-package-with-sha-256-checksum-for-files
|
23,967,809
|
Is there a way to add proxied redhat maven repository to nexus?
|
There is a situation that I cannot not resolve on my own. I have Sonatype Nexus™ 2.8.0-05 and I want to add redhat's public maven repository ( [URL] ) as a proxied repo. This repo does not have index, but I thought that should not be a problem. So, I add it as a proxy repo, disable download of remote index (I've tried both ways, so this does not matter), other stuff I leave by default. I can see packages in "browse remote", but for some reason I can't see them while searching (i.e. 'org.jboss.as:jboss-as-security:jar:7.3.0.Final-redhat-14'). Also automatic routing does not work - "No scraper was able to scrape remote (or remote prevents scraping).". Do anyone have a solution? Thanks.
|
Is there a way to add proxied redhat maven repository to nexus? There is a situation that I cannot not resolve on my own. I have Sonatype Nexus™ 2.8.0-05 and I want to add redhat's public maven repository ( [URL] ) as a proxied repo. This repo does not have index, but I thought that should not be a problem. So, I add it as a proxy repo, disable download of remote index (I've tried both ways, so this does not matter), other stuff I leave by default. I can see packages in "browse remote", but for some reason I can't see them while searching (i.e. 'org.jboss.as:jboss-as-security:jar:7.3.0.Final-redhat-14'). Also automatic routing does not work - "No scraper was able to scrape remote (or remote prevents scraping).". Do anyone have a solution? Thanks.
|
maven, redhat, nexus
| 2
| 1,456
| 1
|
https://stackoverflow.com/questions/23967809/is-there-a-way-to-add-proxied-redhat-maven-repository-to-nexus
|
23,076,835
|
Query recently uninstalled rpm packages
|
There's a option in rpm to query recently installed packages using rpm -q --last But, is there a option to query recently uninstalled package ?
|
Query recently uninstalled rpm packages There's a option in rpm to query recently installed packages using rpm -q --last But, is there a option to query recently uninstalled package ?
|
linux, redhat, rpm
| 2
| 7,803
| 2
|
https://stackoverflow.com/questions/23076835/query-recently-uninstalled-rpm-packages
|
22,073,252
|
%{__python} percent sign curly braces variable in cobbler spec file
|
I'm trying to understand how the cobbler spec file works. The first lines are: %{!?python_sitelib: %define python_sitelib %(%{__python} -c "from distutils.sysconfig import get_python_lib; print get_python_lib()")} %{!?pyver: %define pyver %(%{__python} -c "import sys ; print sys.version[:3]" || echo 0)} I guess my main question is where does the %{__python} variable come from? And if I change it to %{__python26} , I get the following error sh: line 0: fg: no job control
|
%{__python} percent sign curly braces variable in cobbler spec file I'm trying to understand how the cobbler spec file works. The first lines are: %{!?python_sitelib: %define python_sitelib %(%{__python} -c "from distutils.sysconfig import get_python_lib; print get_python_lib()")} %{!?pyver: %define pyver %(%{__python} -c "import sys ; print sys.version[:3]" || echo 0)} I guess my main question is where does the %{__python} variable come from? And if I change it to %{__python26} , I get the following error sh: line 0: fg: no job control
|
python, shell, redhat, specifications
| 2
| 483
| 1
|
https://stackoverflow.com/questions/22073252/python-percent-sign-curly-braces-variable-in-cobbler-spec-file
|
22,050,454
|
kernel module name with _ and -?
|
Why do I have this names in my redhat 5 server? [root@sanserver ~]# lsmod | grep multipath dm_multipath 58969 2 dm_round_robin scsi_dh 42561 1 dm_multipath **#Module name with _** dm_mod 103569 28 dm_multipath,dm_raid45,dm_snapshot,dm_zero,dm_mirror,dm_log [root@sanserver ~]# modinfo dm_multipath filename: /lib/modules/2.6.18-371.3.1.el5xen/kernel/drivers/md/dm-multipath.ko **#name with -** license: GPL author: Sistina Software <dm-devel@redhat.com> description: device-mapper multipath target srcversion: 4BAFD78E7E55F1ECEFAE485 depends: scsi_dh,dm-mod vermagic: 2.6.18-371.3.1.el5xen SMP mod_unload gcc-4.1 module_sig: 883f350528095c4b83fbebdcf4f8e511246ad0a0aac4dc3d4f69ff19b5be180209ffe5e468361309f5db06e141919e5eb76dbd14e2c5539390c54bd4 I got two differents names, but there is not alias, one is dm-multipath and second dm_multipath
|
kernel module name with _ and -? Why do I have this names in my redhat 5 server? [root@sanserver ~]# lsmod | grep multipath dm_multipath 58969 2 dm_round_robin scsi_dh 42561 1 dm_multipath **#Module name with _** dm_mod 103569 28 dm_multipath,dm_raid45,dm_snapshot,dm_zero,dm_mirror,dm_log [root@sanserver ~]# modinfo dm_multipath filename: /lib/modules/2.6.18-371.3.1.el5xen/kernel/drivers/md/dm-multipath.ko **#name with -** license: GPL author: Sistina Software <dm-devel@redhat.com> description: device-mapper multipath target srcversion: 4BAFD78E7E55F1ECEFAE485 depends: scsi_dh,dm-mod vermagic: 2.6.18-371.3.1.el5xen SMP mod_unload gcc-4.1 module_sig: 883f350528095c4b83fbebdcf4f8e511246ad0a0aac4dc3d4f69ff19b5be180209ffe5e468361309f5db06e141919e5eb76dbd14e2c5539390c54bd4 I got two differents names, but there is not alias, one is dm-multipath and second dm_multipath
|
linux, linux-kernel, linux-device-driver, redhat
| 2
| 543
| 1
|
https://stackoverflow.com/questions/22050454/kernel-module-name-with-and
|
21,397,715
|
java.net.SocketException: Network is unreachable: connect Response data in JMeter
|
I am trying to send HTTP request through HTTP sampler to linux server (Red Hat), in local intranet environment. I am getting this exception in Response Data Please guide me. Regards
|
java.net.SocketException: Network is unreachable: connect Response data in JMeter I am trying to send HTTP request through HTTP sampler to linux server (Red Hat), in local intranet environment. I am getting this exception in Response Data Please guide me. Regards
|
java, linux, network-programming, jmeter, redhat
| 2
| 3,235
| 2
|
https://stackoverflow.com/questions/21397715/java-net-socketexception-network-is-unreachable-connect-response-data-in-jmete
|
21,137,831
|
uberSVN branch post-commit hook
|
We are using uberSVN installed on linux. In the repository " R " we have different branches and I need to trigger the jenkins job for a commit on a specific branch " B ". In the ....repository/R/hooks/ there is file named post-commit . The file content is below: REPOS="$1" REV="$2" wget "[URL] The above script calls wget whenever repo has been commited. On the other hand, I want to trigger "the branch job" if and only if there is a commit on branch " B " not all repository. The jenkis url is below: wget "[URL] What is the proper way to do this?
|
uberSVN branch post-commit hook We are using uberSVN installed on linux. In the repository " R " we have different branches and I need to trigger the jenkins job for a commit on a specific branch " B ". In the ....repository/R/hooks/ there is file named post-commit . The file content is below: REPOS="$1" REV="$2" wget "[URL] The above script calls wget whenever repo has been commited. On the other hand, I want to trigger "the branch job" if and only if there is a commit on branch " B " not all repository. The jenkis url is below: wget "[URL] What is the proper way to do this?
|
svn, jenkins, redhat, post-commit-hook, ubersvn
| 2
| 788
| 2
|
https://stackoverflow.com/questions/21137831/ubersvn-branch-post-commit-hook
|
16,336,305
|
How to install python packages on python2.7 using yum
|
Here is my question: How to install python packages on python2.7 using yum, as yum by default installs packages on 2.4 (the default python on redhat). OR How can I install yum on python2.7 to make its default installation is python2.7
|
How to install python packages on python2.7 using yum Here is my question: How to install python packages on python2.7 using yum, as yum by default installs packages on 2.4 (the default python on redhat). OR How can I install yum on python2.7 to make its default installation is python2.7
|
python, pip, redhat, yum
| 2
| 1,165
| 1
|
https://stackoverflow.com/questions/16336305/how-to-install-python-packages-on-python2-7-using-yum
|
16,182,687
|
GCC backwards compatibility
|
I am porting an application to a red hat enterprise 5 server, and the server has GCC v4.1.2 installed. I need GCC 4.2, and 4.1.2 is the newest version in the yum network. If I download a newer .repo file and run yum install to update it, is there any chance that the install would cause dependency failures with older applications running on the server? I don't feel like it would, but I'm not positive, and this is my first time working on a live server and I don't want to mess anything up. Is it safe to just go for it? Thanks for the advice!
|
GCC backwards compatibility I am porting an application to a red hat enterprise 5 server, and the server has GCC v4.1.2 installed. I need GCC 4.2, and 4.1.2 is the newest version in the yum network. If I download a newer .repo file and run yum install to update it, is there any chance that the install would cause dependency failures with older applications running on the server? I don't feel like it would, but I'm not positive, and this is my first time working on a live server and I don't want to mess anything up. Is it safe to just go for it? Thanks for the advice!
|
gcc, g++, redhat, yum, rhel
| 2
| 1,858
| 1
|
https://stackoverflow.com/questions/16182687/gcc-backwards-compatibility
|
14,477,529
|
Mercurial: unable to use environment variables in .hgrc file
|
I have two machines running Mercurial, a Solaris system and a Red Hat system. On the Solaris system I can use environment variables in the .hgrc file, but on the Red Hat system it doesn't seem to work. I have the following example in the .hgrc file: [ui] username = $SUDO_USER but hg log shows me the following: user: $SUDO_USER The variable is set and is exported: $ env|grep SUDO_USER SUDO_USER=testuser The same setup works fine on the Solaris system. Can anyone tell me why this doesn't work?
|
Mercurial: unable to use environment variables in .hgrc file I have two machines running Mercurial, a Solaris system and a Red Hat system. On the Solaris system I can use environment variables in the .hgrc file, but on the Red Hat system it doesn't seem to work. I have the following example in the .hgrc file: [ui] username = $SUDO_USER but hg log shows me the following: user: $SUDO_USER The variable is set and is exported: $ env|grep SUDO_USER SUDO_USER=testuser The same setup works fine on the Solaris system. Can anyone tell me why this doesn't work?
|
unix, mercurial, redhat, hgrc
| 2
| 634
| 1
|
https://stackoverflow.com/questions/14477529/mercurial-unable-to-use-environment-variables-in-hgrc-file
|
13,853,507
|
How to install C compiler for GCC without Internet connection? (RHEL6)
|
I'm attempting to build GCC from source on a RHEL6 virtual machine, and have run into a Catch 22. That is, I need a C compiler for successful configuration. The solution seems simple enough - execute yum to solve dependencies. However, this virtual machine cannot have an Internet connection. Does anybody have any sources for a binary or .rpm containing a pre-compiled compiler, simplifying installation? I've searched, but cannot find one. Alternatively, does a RHEL6 command exist to install a pre-compiled version of GCC? If neither are possible, what C compilers might I pursue to resolve this? For context, here's the message I receive: ../gcc-4.7.2/configure checking build system type... x86_64-unknown-linux-gnu checking host system type... x86_64-unknown-linux-gnu checking target system type... x86_64-unknown-linux-gnu checking for a BSD-compatible install... /usr/bin/install -c checking whether ln works... yes checking whether ln -s works... yes checking for a sed that does not truncate output... /bin/sed checking for gawk... gawk checking for libitm support... yes checking for gcc... no checking for cc... no checking for cl.exe... no configure: error: in /gcc/gcc-build': configure: error: no acceptable C compiler found in $PATH See config.log' for more details.
|
How to install C compiler for GCC without Internet connection? (RHEL6) I'm attempting to build GCC from source on a RHEL6 virtual machine, and have run into a Catch 22. That is, I need a C compiler for successful configuration. The solution seems simple enough - execute yum to solve dependencies. However, this virtual machine cannot have an Internet connection. Does anybody have any sources for a binary or .rpm containing a pre-compiled compiler, simplifying installation? I've searched, but cannot find one. Alternatively, does a RHEL6 command exist to install a pre-compiled version of GCC? If neither are possible, what C compilers might I pursue to resolve this? For context, here's the message I receive: ../gcc-4.7.2/configure checking build system type... x86_64-unknown-linux-gnu checking host system type... x86_64-unknown-linux-gnu checking target system type... x86_64-unknown-linux-gnu checking for a BSD-compatible install... /usr/bin/install -c checking whether ln works... yes checking whether ln -s works... yes checking for a sed that does not truncate output... /bin/sed checking for gawk... gawk checking for libitm support... yes checking for gcc... no checking for cc... no checking for cl.exe... no configure: error: in /gcc/gcc-build': configure: error: no acceptable C compiler found in $PATH See config.log' for more details.
|
linux, gcc, redhat
| 2
| 9,856
| 2
|
https://stackoverflow.com/questions/13853507/how-to-install-c-compiler-for-gcc-without-internet-connection-rhel6
|
13,395,637
|
Remote IDE on Red Hat Enterprise Linux Server release 5.5
|
I have this version of Linux server: -bash-3.2$ cat /proc/version Linux version 2.6.18-194.11.1.el5 (mockbuild@hs20-bc2-3.build.redhat.com) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-48)) #1 SMP Tue Jul 27 05:45:06 EDT 2010 -bash-3.2$ cat /etc/*release* cat: /etc/lsb-release.d: Is a directory Red Hat Enterprise Linux Server release 5.5 (Tikanga) Currently, I am writing c program on the Linux side , I will need the server power to execute my program. I prefer IDE , but since my machine is Windows and what not, I have to compile the program remotely on the server . Sometimes, it's such a pain that I cannot run a stacktrace after the program crashes. And the thing is I want is to achieve higher productivity. I can only access this server with PuTTY or the like, and I do not have the rights to install any software. And updating the software in the server is also not possible. I see that the server got programs like Matlab that can output to XMing on the client side. (Ex. I can run Matlab as a GUI from the server side and have it display on my client device) I see that some people suggest me for Eclipse, but the IDE is way too slow. In fact, it lowers productivity. So is there any recommendation or a scheme that will allow me to compile, execute and debug my program remotely on the server with better ease-of-use, given the bold criteria above?
|
Remote IDE on Red Hat Enterprise Linux Server release 5.5 I have this version of Linux server: -bash-3.2$ cat /proc/version Linux version 2.6.18-194.11.1.el5 (mockbuild@hs20-bc2-3.build.redhat.com) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-48)) #1 SMP Tue Jul 27 05:45:06 EDT 2010 -bash-3.2$ cat /etc/*release* cat: /etc/lsb-release.d: Is a directory Red Hat Enterprise Linux Server release 5.5 (Tikanga) Currently, I am writing c program on the Linux side , I will need the server power to execute my program. I prefer IDE , but since my machine is Windows and what not, I have to compile the program remotely on the server . Sometimes, it's such a pain that I cannot run a stacktrace after the program crashes. And the thing is I want is to achieve higher productivity. I can only access this server with PuTTY or the like, and I do not have the rights to install any software. And updating the software in the server is also not possible. I see that the server got programs like Matlab that can output to XMing on the client side. (Ex. I can run Matlab as a GUI from the server side and have it display on my client device) I see that some people suggest me for Eclipse, but the IDE is way too slow. In fact, it lowers productivity. So is there any recommendation or a scheme that will allow me to compile, execute and debug my program remotely on the server with better ease-of-use, given the bold criteria above?
|
linux, ide, remote-debugging, redhat, remote-access
| 2
| 322
| 1
|
https://stackoverflow.com/questions/13395637/remote-ide-on-red-hat-enterprise-linux-server-release-5-5
|
11,635,221
|
CentOS use SNMP to show interface useage
|
I have an SNMP monitoring box and want to monitor interface utilisation on a clustered database server. I'm trying to work out the correct OID to monitor - I just need SNMP to return the total interface throughput at a given time. The SNMP box is already configured and will correctly graph it. All howtos I can find talk about setting up Catci or MRTG which is all well and good, but what I need seems simpler, yet I can't seem to find what I'm looking for. The SNMP box is already configured with the correct community name etc so this should be a really easy one in theory. Any help very gratefully received Thanks
|
CentOS use SNMP to show interface useage I have an SNMP monitoring box and want to monitor interface utilisation on a clustered database server. I'm trying to work out the correct OID to monitor - I just need SNMP to return the total interface throughput at a given time. The SNMP box is already configured and will correctly graph it. All howtos I can find talk about setting up Catci or MRTG which is all well and good, but what I need seems simpler, yet I can't seem to find what I'm looking for. The SNMP box is already configured with the correct community name etc so this should be a really easy one in theory. Any help very gratefully received Thanks
|
linux, centos, snmp, redhat, cacti
| 2
| 17,012
| 3
|
https://stackoverflow.com/questions/11635221/centos-use-snmp-to-show-interface-useage
|
11,163,608
|
How to list all files belonged to different branches in Mercurial?
|
I am new to the mercurial so sorry for the newbie question. I have created a local repository called "localRepo" at /home/Cassie/localRepo. I have two branches there, default and src1-branch. For default branch, it has file1,file3 and for src1-branch, it has file1, file2 and file4. Whenever I tried to list all files at that repository, it only shows the files belong to the current branch. For example, if current branch is src1-branch, then if I typed ls -l It showed only file1 file2 file4 Is there any way to see all files belong to the same repository such as file1 file2 file3 file4 I have tried hg status --all It still only showed file1 file2 and file4. My machine is redhat linux workstation 6 with mercurial 1.7 and tortoisehg. Thank you very much,
|
How to list all files belonged to different branches in Mercurial? I am new to the mercurial so sorry for the newbie question. I have created a local repository called "localRepo" at /home/Cassie/localRepo. I have two branches there, default and src1-branch. For default branch, it has file1,file3 and for src1-branch, it has file1, file2 and file4. Whenever I tried to list all files at that repository, it only shows the files belong to the current branch. For example, if current branch is src1-branch, then if I typed ls -l It showed only file1 file2 file4 Is there any way to see all files belong to the same repository such as file1 file2 file3 file4 I have tried hg status --all It still only showed file1 file2 and file4. My machine is redhat linux workstation 6 with mercurial 1.7 and tortoisehg. Thank you very much,
|
linux, mercurial, branch, tortoisehg, redhat
| 2
| 726
| 2
|
https://stackoverflow.com/questions/11163608/how-to-list-all-files-belonged-to-different-branches-in-mercurial
|
9,432,305
|
How to Add SSL Site to Zend Server
|
I'm trying to add a secure site to Zend. When I go to the Zend server site at [URL] I can see, under server extensions, "openssl built-in, ON". When I add SSLEngine On to the httpd.conf I get ... Invalid command 'SSLEngine', perhaps misspelled or defined by a module not included in the server configuration Missing this line out gives.... Invalid command 'SSLCertificateChainFile', perhaps misspelled or defined by a module not included in the server configuration. Appendix F at Zend's site says Uncomment the following line... Include conf/extra/httpd-ssl.conf But that line is not in my conf file and nor is the path indicated. The directory /usr/lib64/httpd/modules does not have a file called mod_ssl.so or similar. This is Zend 5.5 on Red Hat PHP Version 5.3.8-ZS5.5.0 Zend Framework Version 1.11.10 My manager says it was a pretty standard installation. Any help would be great. Thanks.
|
How to Add SSL Site to Zend Server I'm trying to add a secure site to Zend. When I go to the Zend server site at [URL] I can see, under server extensions, "openssl built-in, ON". When I add SSLEngine On to the httpd.conf I get ... Invalid command 'SSLEngine', perhaps misspelled or defined by a module not included in the server configuration Missing this line out gives.... Invalid command 'SSLCertificateChainFile', perhaps misspelled or defined by a module not included in the server configuration. Appendix F at Zend's site says Uncomment the following line... Include conf/extra/httpd-ssl.conf But that line is not in my conf file and nor is the path indicated. The directory /usr/lib64/httpd/modules does not have a file called mod_ssl.so or similar. This is Zend 5.5 on Red Hat PHP Version 5.3.8-ZS5.5.0 Zend Framework Version 1.11.10 My manager says it was a pretty standard installation. Any help would be great. Thanks.
|
php, apache, ssl, redhat, zend-server
| 2
| 4,374
| 1
|
https://stackoverflow.com/questions/9432305/how-to-add-ssl-site-to-zend-server
|
8,731,179
|
How to start a script based on an event?
|
I'm in a red hat environment. I need to move a file from server A to server B when a file is available in a folder F. THere's no constraint on the method used. Is it possible to trigger this event in python or any other scripts? It could be run as a daemon but I'm not sure how to do that. Any advices?
|
How to start a script based on an event? I'm in a red hat environment. I need to move a file from server A to server B when a file is available in a folder F. THere's no constraint on the method used. Is it possible to trigger this event in python or any other scripts? It could be run as a daemon but I'm not sure how to do that. Any advices?
|
python, bash, redhat
| 2
| 416
| 6
|
https://stackoverflow.com/questions/8731179/how-to-start-a-script-based-on-an-event
|
8,370,349
|
Why shellcode does not work?
|
I am trying to do a buffer flow exploit demo. I want to use shell code to overflow a stack and get a sh session. I can follow the tutorial from here [URL] and even produce exactly the same shellcode. But I am not able to make the shellcode run using shellcode3.c as in the tutorial. What I got is always "Segmentation Fault". I am using "Red Hat Enterprise Linux AS release 4 (Nahant Update 4)". I want to know is there anyone ever make it work using similar method? Do I need to change to other system?
|
Why shellcode does not work? I am trying to do a buffer flow exploit demo. I want to use shell code to overflow a stack and get a sh session. I can follow the tutorial from here [URL] and even produce exactly the same shellcode. But I am not able to make the shellcode run using shellcode3.c as in the tutorial. What I got is always "Segmentation Fault". I am using "Red Hat Enterprise Linux AS release 4 (Nahant Update 4)". I want to know is there anyone ever make it work using similar method? Do I need to change to other system?
|
security, redhat, buffer-overflow, shellcode
| 2
| 4,117
| 3
|
https://stackoverflow.com/questions/8370349/why-shellcode-does-not-work
|
1,379,127
|
Red hat compatibility
|
The following code works as expected on CentOS and Ubuntu O/s but not on Red hat. What changes needs to be made? CentOS release 5.3 (Final) Linux ubuntu 2.6.24-19-generic #1 SMP Wed Jun 18 14:43:41 UTC 2008 i686 GNU/Linux #!/bin/bash depot=$1 table=$2 database=$3 combined="$depot$table" if [ "$table" = 'routes' -o "$table" = 'other_routes' ]; then echo 'first if successful' elif [ "$table" = 'bus_stops' ]; then echo 'elif successful' else echo 'else succsesful' fi
|
Red hat compatibility The following code works as expected on CentOS and Ubuntu O/s but not on Red hat. What changes needs to be made? CentOS release 5.3 (Final) Linux ubuntu 2.6.24-19-generic #1 SMP Wed Jun 18 14:43:41 UTC 2008 i686 GNU/Linux #!/bin/bash depot=$1 table=$2 database=$3 combined="$depot$table" if [ "$table" = 'routes' -o "$table" = 'other_routes' ]; then echo 'first if successful' elif [ "$table" = 'bus_stops' ]; then echo 'elif successful' else echo 'else succsesful' fi
|
shell, redhat
| 2
| 259
| 3
|
https://stackoverflow.com/questions/1379127/red-hat-compatibility
|
489,555
|
What database tool to use on Linux to read an as/400 database?
|
From Linux (Red Hat dist), we need to read an AS400 database. We have the ODBC driver to connect, what's the best query tool?
|
What database tool to use on Linux to read an as/400 database? From Linux (Red Hat dist), we need to read an AS400 database. We have the ODBC driver to connect, what's the best query tool?
|
linux, odbc, ibm-midrange, redhat
| 2
| 836
| 2
|
https://stackoverflow.com/questions/489555/what-database-tool-to-use-on-linux-to-read-an-as-400-database
|
79,629,055
|
gdb does not autocomplete with shell commands
|
After recently updating gdb from GNU gdb (GDB) Red Hat Enterprise Linux 8.2-20.el8 to GNU gdb (GDB) 16.3 , gdb is not performing autocomplete specifically when running a shell command. e.g. (gdb) !ls thisfile.txt (gdb) !ls this<tab> tab is not autocompleting the file. This fails for any shell command, but works fine for any gdb command. This used to work on version 8.2. Does anyone have any ideas on how I can bring back autocompletion for shell commands in gdb? The actual shell commands work fine, but I have to manually type out the file. Edit: I found out that autocomplete works when I don't include any shell commands. E.G. from the example above: (gdb) !this<tab> autocompletes to (gdb) !thisfile.txt . But once I pass in a shell command, autocomplete is disabled.
|
gdb does not autocomplete with shell commands After recently updating gdb from GNU gdb (GDB) Red Hat Enterprise Linux 8.2-20.el8 to GNU gdb (GDB) 16.3 , gdb is not performing autocomplete specifically when running a shell command. e.g. (gdb) !ls thisfile.txt (gdb) !ls this<tab> tab is not autocompleting the file. This fails for any shell command, but works fine for any gdb command. This used to work on version 8.2. Does anyone have any ideas on how I can bring back autocompletion for shell commands in gdb? The actual shell commands work fine, but I have to manually type out the file. Edit: I found out that autocomplete works when I don't include any shell commands. E.G. from the example above: (gdb) !this<tab> autocompletes to (gdb) !thisfile.txt . But once I pass in a shell command, autocomplete is disabled.
|
c++, autocomplete, gdb, redhat
| 2
| 170
| 1
|
https://stackoverflow.com/questions/79629055/gdb-does-not-autocomplete-with-shell-commands
|
79,026,688
|
oci_connect(): OCIEnvNlsCreate() failed in RHEL 8 with HTTPD
|
I need your help to connect to Oracle in PHP8 on my Red Hat 8.9 server. OCI8 is enabled and the HTTPD web server is installed. This is my code: <?php // Enable error reporting in PHP error_reporting(E_ALL); ini_set('display_errors', 1); // Connect to the Oracle database using oci_connect $conn = oci_connect("toakdbi", "toakdbi_123", "(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=AvaniServer)(PORT=1521)))(CONNE_DATA=(SID=orcl)))"); if (!$conn) { $e = oci_error(); echo "Connection failed: " . $e['message']; } else { echo "</br>Connected to Oracle!"; } ?> This is the error I got: Warning: oci_connect(): OCIEnvNlsCreate() failed. There is something wrong with your system - please check that LD_LIBRARY_PATH includes the directory with Oracle Instant Client libraries in /var/www/html/db2.php on line 10 Warning: oci_connect(): Error while trying to retrieve text for error ORA-01804 in /var/www/html/db2.php on line 10 Warning: Trying to access array offset on false in /var/www/html/db2.php on line 14 Connection failed: My config is below: /etc/httpd/conf/httpd.conf SetEnv LD_LIBRARY_PATH /home/oracle/Avani/dbhome_1/lib SetEnv NLS_LANG American_America.UTF8 SetEnv PATH /usr/local/sbin:/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/home/oracle/Avani/dbhome_1/bin PassEnv LD_LIBRARY_PATH /etc/sysconfig/httpd export LD_LIBRARY_PATH=/home/oracle/Avani/dbhome_1/lib export ORACLE_HOME=/home/oracle/Avani/dbhome_1 export ORACLE_BASE=/home/oracle export PATH=/usr/local/sbin:/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/home/oracle/Avani/dbhome_1/bin ~/.bashrc # .bashrc # User specific aliases and functions export LD_LIBRARY_PATH=/home/oracle/Avani/dbhome_1/lib export NLS_LANG=American_America.UTF8 export TNS_ADMIN=/home/oracle/Avani/dbhome_1/network/admin export PATH=$PATH:$ORACLE_HOME/bin:$LD_LIBRARY_PATH:. alias rm='rm -i' alias cp='cp -i' alias mv='mv -i' # Source global definitions if [ -f /etc/bashrc ]; then . /etc/bashrc fi export LD_LIBRARY_PATH=/home/oracle/Avani/dbhome_1/lib export PATH=/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin:/usr/local/sbin:$PATH:$ORACLE_HOME/bin ~/.bash_profile #.bash_profile # Get the aliases and functions if [ -f ~/.bashrc ]; then . ~/.bashrc fi export ORACLE_HOME=/home/oracle/Avani/dbhome_1 export ORACLE_SID=orcl export LD_LIBRARY_PATH=/home/oracle/Avani/dbhome_1/lib export NLS_LANG=American_America.UTF8 export TNS_ADMIN=/home/oracle/Avani/dbhome_1/network/admin export PATH=/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin:/usr/local/sbin:$PATH:$ORACLE_HOME/bin:$PATH:. # User specific environment and startup programs export PATH /etc/systemd/system/httpd.service.d/httpd.conf [Service] Environment="LD_LIBRARY_PATH=/home/oracle/Avani/dbhome_1/lib" Environment="PATH=/usr/local/sbin:/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/home/oracle/Avani/dbhome_1/bin"
|
oci_connect(): OCIEnvNlsCreate() failed in RHEL 8 with HTTPD I need your help to connect to Oracle in PHP8 on my Red Hat 8.9 server. OCI8 is enabled and the HTTPD web server is installed. This is my code: <?php // Enable error reporting in PHP error_reporting(E_ALL); ini_set('display_errors', 1); // Connect to the Oracle database using oci_connect $conn = oci_connect("toakdbi", "toakdbi_123", "(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=AvaniServer)(PORT=1521)))(CONNE_DATA=(SID=orcl)))"); if (!$conn) { $e = oci_error(); echo "Connection failed: " . $e['message']; } else { echo "</br>Connected to Oracle!"; } ?> This is the error I got: Warning: oci_connect(): OCIEnvNlsCreate() failed. There is something wrong with your system - please check that LD_LIBRARY_PATH includes the directory with Oracle Instant Client libraries in /var/www/html/db2.php on line 10 Warning: oci_connect(): Error while trying to retrieve text for error ORA-01804 in /var/www/html/db2.php on line 10 Warning: Trying to access array offset on false in /var/www/html/db2.php on line 14 Connection failed: My config is below: /etc/httpd/conf/httpd.conf SetEnv LD_LIBRARY_PATH /home/oracle/Avani/dbhome_1/lib SetEnv NLS_LANG American_America.UTF8 SetEnv PATH /usr/local/sbin:/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/home/oracle/Avani/dbhome_1/bin PassEnv LD_LIBRARY_PATH /etc/sysconfig/httpd export LD_LIBRARY_PATH=/home/oracle/Avani/dbhome_1/lib export ORACLE_HOME=/home/oracle/Avani/dbhome_1 export ORACLE_BASE=/home/oracle export PATH=/usr/local/sbin:/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/home/oracle/Avani/dbhome_1/bin ~/.bashrc # .bashrc # User specific aliases and functions export LD_LIBRARY_PATH=/home/oracle/Avani/dbhome_1/lib export NLS_LANG=American_America.UTF8 export TNS_ADMIN=/home/oracle/Avani/dbhome_1/network/admin export PATH=$PATH:$ORACLE_HOME/bin:$LD_LIBRARY_PATH:. alias rm='rm -i' alias cp='cp -i' alias mv='mv -i' # Source global definitions if [ -f /etc/bashrc ]; then . /etc/bashrc fi export LD_LIBRARY_PATH=/home/oracle/Avani/dbhome_1/lib export PATH=/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin:/usr/local/sbin:$PATH:$ORACLE_HOME/bin ~/.bash_profile #.bash_profile # Get the aliases and functions if [ -f ~/.bashrc ]; then . ~/.bashrc fi export ORACLE_HOME=/home/oracle/Avani/dbhome_1 export ORACLE_SID=orcl export LD_LIBRARY_PATH=/home/oracle/Avani/dbhome_1/lib export NLS_LANG=American_America.UTF8 export TNS_ADMIN=/home/oracle/Avani/dbhome_1/network/admin export PATH=/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin:/usr/local/sbin:$PATH:$ORACLE_HOME/bin:$PATH:. # User specific environment and startup programs export PATH /etc/systemd/system/httpd.service.d/httpd.conf [Service] Environment="LD_LIBRARY_PATH=/home/oracle/Avani/dbhome_1/lib" Environment="PATH=/usr/local/sbin:/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/home/oracle/Avani/dbhome_1/bin"
|
php, oracle-database, oracle12c, redhat, oracle19c
| 2
| 176
| 2
|
https://stackoverflow.com/questions/79026688/oci-connect-ocienvnlscreate-failed-in-rhel-8-with-httpd
|
77,528,611
|
Keycloak Runtime Quarkus Properties
|
I am working on a Keycloak user federation application that uses the user-storage-jpa from keycloak-quickstarts . I configured it to use a MySQL database as shown in the following quarkus.properties file: quarkus.datasource.custom-user-store.jdbc.transactions=xa quarkus.datasource.custom-user-store.db-kind=mysql quarkus.datasource.custom-user-store.username=username quarkus.datasource.custom-user-store.password=password quarkus.datasource.custom-user-store.jdbc.url=jdbc:mysql://localhost:3308/my_db quarkus.datasource.custom-user-store.health.enabled=true quarkus.datasource.keycloak-user-store.jdbc.transactions=xa quarkus.datasource.keycloak-user-store.db-kind=mysql quarkus.datasource.keycloak-user-store.username=username quarkus.datasource.keycloak-user-store.password=password quarkus.datasource.keycloak-user-store.jdbc.url=jdbc:mysql://localhost:3308/keycloak_db quarkus.datasource.keycloak-user-store.health.enabled=true However, I want to set the JDBC URL ( quarkus.datasource.custom-user-store.jdbc.url and quarkus.datasource.keycloak-user-store.jdbc.url ), along with the credentials, dynamically at runtime, while Keycloak is running, based on user input from the Keycloak admin console. The JDBC URL is retrieved from the Keycloak model as follows: String jdbcUrl = model.getConfig().getFirst(String.valueOf(ConfigProperties.JDBC_URL)); Here is the admin console where the user types in the database credentials as well as the JDBC URL. I understand that Quarkus reads the quarkus.properties file at build time. Therefore, I am not sure how I can override the JDBC URL at runtime, while Keycloak is running. Is there a way to create a datasource at runtime with the JDBC URL set dynamically? If so, how can I achieve this? I would appreciate any guidance or suggestions. Thanks in advance! I tried using the System.property() but it is not working because the involved properties are not set before starting Keycloak.
|
Keycloak Runtime Quarkus Properties I am working on a Keycloak user federation application that uses the user-storage-jpa from keycloak-quickstarts . I configured it to use a MySQL database as shown in the following quarkus.properties file: quarkus.datasource.custom-user-store.jdbc.transactions=xa quarkus.datasource.custom-user-store.db-kind=mysql quarkus.datasource.custom-user-store.username=username quarkus.datasource.custom-user-store.password=password quarkus.datasource.custom-user-store.jdbc.url=jdbc:mysql://localhost:3308/my_db quarkus.datasource.custom-user-store.health.enabled=true quarkus.datasource.keycloak-user-store.jdbc.transactions=xa quarkus.datasource.keycloak-user-store.db-kind=mysql quarkus.datasource.keycloak-user-store.username=username quarkus.datasource.keycloak-user-store.password=password quarkus.datasource.keycloak-user-store.jdbc.url=jdbc:mysql://localhost:3308/keycloak_db quarkus.datasource.keycloak-user-store.health.enabled=true However, I want to set the JDBC URL ( quarkus.datasource.custom-user-store.jdbc.url and quarkus.datasource.keycloak-user-store.jdbc.url ), along with the credentials, dynamically at runtime, while Keycloak is running, based on user input from the Keycloak admin console. The JDBC URL is retrieved from the Keycloak model as follows: String jdbcUrl = model.getConfig().getFirst(String.valueOf(ConfigProperties.JDBC_URL)); Here is the admin console where the user types in the database credentials as well as the JDBC URL. I understand that Quarkus reads the quarkus.properties file at build time. Therefore, I am not sure how I can override the JDBC URL at runtime, while Keycloak is running. Is there a way to create a datasource at runtime with the JDBC URL set dynamically? If so, how can I achieve this? I would appreciate any guidance or suggestions. Thanks in advance! I tried using the System.property() but it is not working because the involved properties are not set before starting Keycloak.
|
java, keycloak, quarkus, redhat, keycloak-spi
| 2
| 877
| 1
|
https://stackoverflow.com/questions/77528611/keycloak-runtime-quarkus-properties
|
76,637,717
|
Quarkus build analytics not disabled in version 3.2.0.Final
|
Are "Build analytics" ( [URL] ) still opt-in in Quarkus 3.2.0.Final ? It seems like our CI fails on Quarkus 3.2.0.Final projects because of build analytics: we are using maven helper plugin to read the project version number, and are getting as output (despite the options -q –DforceStdout): [warn] [Quarkus build analytics] Analytics remote config not received. java.util.concurrent.TimeoutException: (no message) In addition, I tried to disable them using the 2 solutions mentioned in the doc above: JSON File in home/.redhat/io.quarkus.analytics.localconfig Maven param -Dquarkus.analytics.disabled=true But I still have the same problem, so it seems like it is still trying to get a remote config? We didn't have this for previous versions of Quarkus, only for 3.2.0.
|
Quarkus build analytics not disabled in version 3.2.0.Final Are "Build analytics" ( [URL] ) still opt-in in Quarkus 3.2.0.Final ? It seems like our CI fails on Quarkus 3.2.0.Final projects because of build analytics: we are using maven helper plugin to read the project version number, and are getting as output (despite the options -q –DforceStdout): [warn] [Quarkus build analytics] Analytics remote config not received. java.util.concurrent.TimeoutException: (no message) In addition, I tried to disable them using the 2 solutions mentioned in the doc above: JSON File in home/.redhat/io.quarkus.analytics.localconfig Maven param -Dquarkus.analytics.disabled=true But I still have the same problem, so it seems like it is still trying to get a remote config? We didn't have this for previous versions of Quarkus, only for 3.2.0.
|
redhat, quarkus
| 2
| 1,010
| 1
|
https://stackoverflow.com/questions/76637717/quarkus-build-analytics-not-disabled-in-version-3-2-0-final
|
75,340,638
|
Selenium Python scripts: Chrome not reachable
|
I'm currently running a selenium python script on my EC2 machine. Google-chrome version (100.0.4896.60) Chrome driver (100.0.4896.60) Red hat (8.7) Python (3.6.8) Selenium (3.141.0) Pytest (7.0.1) from selenium import webdriver from selenium.webdriver.chrome.options import Options #argument to switch off suid sandBox and no sandBox in Chrome chrome_options = webdriver.ChromeOptions() chrome_options.add_argument('--no-sandbox') chrome_options.add_argument('--disable-dev-shm-usage') chrome_options.add_argument("--remote-debugging-port=9222") # chrome_options.add_argument('--start-maximized') chrome_options.add_argument('--headless') #chrome_options.add_argument("--window-size=1920,1080") chrome_options.add_argument('--disable-gpu') chrome_options.add_argument('--disable-popup-blocking') chrome_options.add_argument("--incognito") userAgent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.60 Safari/537.36" chrome_options.add_argument(f'user-agent={userAgent}') chrome_options.add_argument('ignore-certificate-errors') driver = webdriver.Chrome("/usr/local/share/chromedriver", options=chrome_options) driver.get('[URL] print(driver.title) driver.quit() When I run the script, I get the following error. ================================================================================================================== ERRORS =================================================================================================================== _________________________________________________________________________________________________________ ERROR collecting main.py __________________________________________________________________________________________________________ main.py:21: in <module> driver = webdriver.Chrome("/usr/local/share/chromedriver", options=chrome_options) /usr/local/lib/python3.6/site-packages/selenium/webdriver/chrome/webdriver.py:81: in __init__ desired_capabilities=desired_capabilities) /usr/local/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py:157: in __init__ self.start_session(capabilities, browser_profile) /usr/local/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py:252: in start_session response = self.execute(Command.NEW_SESSION, parameters) /usr/local/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py:321: in execute self.error_handler.check_response(response) /usr/local/lib/python3.6/site-packages/selenium/webdriver/remote/errorhandler.py:242: in check_response raise exception_class(message, screen, stacktrace) E selenium.common.exceptions.WebDriverException: Message: chrome not reachable ========================================================================================================== short test summary info ========================================================================================================== ERROR main.py - selenium.common.exceptions.WebDriverException: Message: chrome not reachable !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! There was an update to google-chrome on my red hat server that caused it to upgrade to version 109, I was told to downgrade version back to 100, so avoid any breaks to our scripts in the short term. Now the versions of google-chrome and chromedriver are matching but I'm still seeing error that chrome cannot be reached. I already ensured that chrome driver is at correct path as well as correct permissions. I set permissions on google chrome using "chmod +x chromedriver". Not sure what else to do. The expected outcome of this script is very simple, as this is not my main code but an example. When running any script on this machine, I get the same error.
|
Selenium Python scripts: Chrome not reachable I'm currently running a selenium python script on my EC2 machine. Google-chrome version (100.0.4896.60) Chrome driver (100.0.4896.60) Red hat (8.7) Python (3.6.8) Selenium (3.141.0) Pytest (7.0.1) from selenium import webdriver from selenium.webdriver.chrome.options import Options #argument to switch off suid sandBox and no sandBox in Chrome chrome_options = webdriver.ChromeOptions() chrome_options.add_argument('--no-sandbox') chrome_options.add_argument('--disable-dev-shm-usage') chrome_options.add_argument("--remote-debugging-port=9222") # chrome_options.add_argument('--start-maximized') chrome_options.add_argument('--headless') #chrome_options.add_argument("--window-size=1920,1080") chrome_options.add_argument('--disable-gpu') chrome_options.add_argument('--disable-popup-blocking') chrome_options.add_argument("--incognito") userAgent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.60 Safari/537.36" chrome_options.add_argument(f'user-agent={userAgent}') chrome_options.add_argument('ignore-certificate-errors') driver = webdriver.Chrome("/usr/local/share/chromedriver", options=chrome_options) driver.get('[URL] print(driver.title) driver.quit() When I run the script, I get the following error. ================================================================================================================== ERRORS =================================================================================================================== _________________________________________________________________________________________________________ ERROR collecting main.py __________________________________________________________________________________________________________ main.py:21: in <module> driver = webdriver.Chrome("/usr/local/share/chromedriver", options=chrome_options) /usr/local/lib/python3.6/site-packages/selenium/webdriver/chrome/webdriver.py:81: in __init__ desired_capabilities=desired_capabilities) /usr/local/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py:157: in __init__ self.start_session(capabilities, browser_profile) /usr/local/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py:252: in start_session response = self.execute(Command.NEW_SESSION, parameters) /usr/local/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py:321: in execute self.error_handler.check_response(response) /usr/local/lib/python3.6/site-packages/selenium/webdriver/remote/errorhandler.py:242: in check_response raise exception_class(message, screen, stacktrace) E selenium.common.exceptions.WebDriverException: Message: chrome not reachable ========================================================================================================== short test summary info ========================================================================================================== ERROR main.py - selenium.common.exceptions.WebDriverException: Message: chrome not reachable !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! There was an update to google-chrome on my red hat server that caused it to upgrade to version 109, I was told to downgrade version back to 100, so avoid any breaks to our scripts in the short term. Now the versions of google-chrome and chromedriver are matching but I'm still seeing error that chrome cannot be reached. I already ensured that chrome driver is at correct path as well as correct permissions. I set permissions on google chrome using "chmod +x chromedriver". Not sure what else to do. The expected outcome of this script is very simple, as this is not my main code but an example. When running any script on this machine, I get the same error.
|
python, selenium, selenium-chromedriver, pytest, redhat
| 2
| 1,792
| 1
|
https://stackoverflow.com/questions/75340638/selenium-python-scripts-chrome-not-reachable
|
73,987,311
|
Objdump to tell if binary was built on Redhat or Suse
|
Is there a switch in objdump or readelf which can tell if an ELF binary was built on Redhat or SUSE? I only have binary and no source code. Is there any other way (like strings command or nm) that could I could use if objdump/readelf isn't useful.
|
Objdump to tell if binary was built on Redhat or Suse Is there a switch in objdump or readelf which can tell if an ELF binary was built on Redhat or SUSE? I only have binary and no source code. Is there any other way (like strings command or nm) that could I could use if objdump/readelf isn't useful.
|
redhat, suse, objdump, readelf
| 2
| 258
| 1
|
https://stackoverflow.com/questions/73987311/objdump-to-tell-if-binary-was-built-on-redhat-or-suse
|
72,458,622
|
Vertx get event loop msg number
|
I'm trying to find a way to get the current number of message that I have in the event loop. I want to put some logs to know if the message are getting enqueue. I have this Iterator, but I dont think it is since are eventExecutor Context context = vertx.getOrCreateContext(); ContextInternal contextInt = (ContextInternal) context; EventLoop eventLoop = contextInt.nettyEventLoop(); Regards.
|
Vertx get event loop msg number I'm trying to find a way to get the current number of message that I have in the event loop. I want to put some logs to know if the message are getting enqueue. I have this Iterator, but I dont think it is since are eventExecutor Context context = vertx.getOrCreateContext(); ContextInternal contextInt = (ContextInternal) context; EventLoop eventLoop = contextInt.nettyEventLoop(); Regards.
|
java, redhat, vert.x
| 2
| 226
| 1
|
https://stackoverflow.com/questions/72458622/vertx-get-event-loop-msg-number
|
72,437,163
|
Create Directory, download file and execute command from list of URL
|
I am working on a Red Hat Linux server. My end goal is to run CRB-BLAST on multiple fasta files and have the results from those in separate directories. My approach is to download the fasta files using wget then run the CRB-BLAST. I have multiple files and would like to be able to download them each to their own directory (the name perhaps should come from the URL list files), then run the CRB-BLAST. Example URLs: [URL] [URL] [URL] [URL] [URL] [URL] [URL] Ideally, the file name determines the directory name, for example, TC_3370/ . I think there might be a solution with cat URL.txt | mkdir | cd | wget | crb-blast Currently I just run the commands in line: mkdir TC_3370 cd TC_3370/ wget url [URL] crb-blast -q TC_3370_chr.v1.0.maker.CDS.fasta.gz -t TCV2_annot_cds.fna -e 1e-20 -h 4 -o rbbh_TC
|
Create Directory, download file and execute command from list of URL I am working on a Red Hat Linux server. My end goal is to run CRB-BLAST on multiple fasta files and have the results from those in separate directories. My approach is to download the fasta files using wget then run the CRB-BLAST. I have multiple files and would like to be able to download them each to their own directory (the name perhaps should come from the URL list files), then run the CRB-BLAST. Example URLs: [URL] [URL] [URL] [URL] [URL] [URL] [URL] Ideally, the file name determines the directory name, for example, TC_3370/ . I think there might be a solution with cat URL.txt | mkdir | cd | wget | crb-blast Currently I just run the commands in line: mkdir TC_3370 cd TC_3370/ wget url [URL] crb-blast -q TC_3370_chr.v1.0.maker.CDS.fasta.gz -t TCV2_annot_cds.fna -e 1e-20 -h 4 -o rbbh_TC
|
linux, bash, wget, redhat
| 2
| 535
| 3
|
https://stackoverflow.com/questions/72437163/create-directory-download-file-and-execute-command-from-list-of-url
|
68,284,352
|
OC Rollout with specific tag name
|
I have created Tag Version using oc tag command in CLI, i have to deploy the image with recently created tag version. Ex. oc tag <Source Service Name>:latest <Destination Service Name>:Rel10.0. I have created Tag Name as Rel10.0, i have to select this Tag name and deploy it using CLI. What is OC Command i have to use? I have tried with oc rollout latest but this command does only deploy. I have to select that specific tag name and deploy the image. oc rollout latest <image name> I know after doing oc tag in CLI, we can check in Openshift UI for that specific tag name select and deploy it. I dont want to use Openshift i want to use CLI and do this job.
|
OC Rollout with specific tag name I have created Tag Version using oc tag command in CLI, i have to deploy the image with recently created tag version. Ex. oc tag <Source Service Name>:latest <Destination Service Name>:Rel10.0. I have created Tag Name as Rel10.0, i have to select this Tag name and deploy it using CLI. What is OC Command i have to use? I have tried with oc rollout latest but this command does only deploy. I have to select that specific tag name and deploy the image. oc rollout latest <image name> I know after doing oc tag in CLI, we can check in Openshift UI for that specific tag name select and deploy it. I dont want to use Openshift i want to use CLI and do this job.
|
openshift, redhat
| 2
| 1,426
| 1
|
https://stackoverflow.com/questions/68284352/oc-rollout-with-specific-tag-name
|
67,738,422
|
Check the Total, used and available Hard Drive Disk space in RED HAT Enterprise Linux 7.6
|
Good afternoon !! you fine and healthy? I hope so! Guys, I've been looking for a command to RED HAT Enterprise Linux that can show me in the cleanest way possible: disk-space: total used available What I've tried so far? fdisk -l | grep Disk -> It is asking for admin rights and I don't have them. Also: Command: dmesg | grep blocks I've been looking i several websites and forums but this is the cleanest thing i found out there: [ 2.070965] sd 0:0:0:0: [sda] 125829120 512-byte logical blocks: (64.4 GB/60.0 GiB) [ 2.071017] sd 0:0:1:0: [sdb] 20971520 512-byte logical blocks: (10.7 GB/10.0 GiB) [ 2.071099] sd 0:0:2:0: [sdc] 1069547520 512-byte logical blocks: (547 GB/510 GiB) And this only show me the partitions and their total size. Doesn't show the used and the available I also found this command: # btrfs fi df /data/ # btrfs fi df -h /data/ OUTPUT: Data, RAID1: total=71.00GiB, used=63.40GiB System, RAID1: total=8.00MiB, used=16.00KiB Metadata, RAID1: total=4.00GiB, used=2.29GiB GlobalReserve, single: total=512.00MiB, used=0.00B But they don't work for me, i don't know if i'm missing something Please see more: [[URL] SERVER DETAILS: NAME="Red Hat Enterprise Linux Server" VERSION="7.6 (Maipo)" ID="rhel" ID_LIKE="fedora" VARIANT="Server" VARIANT_ID="server" VERSION_ID="7.6" PRETTY_NAME="Red Hat Enterprise Linux" ANSI_COLOR="0;31" Thank's in advance for any help.
|
Check the Total, used and available Hard Drive Disk space in RED HAT Enterprise Linux 7.6 Good afternoon !! you fine and healthy? I hope so! Guys, I've been looking for a command to RED HAT Enterprise Linux that can show me in the cleanest way possible: disk-space: total used available What I've tried so far? fdisk -l | grep Disk -> It is asking for admin rights and I don't have them. Also: Command: dmesg | grep blocks I've been looking i several websites and forums but this is the cleanest thing i found out there: [ 2.070965] sd 0:0:0:0: [sda] 125829120 512-byte logical blocks: (64.4 GB/60.0 GiB) [ 2.071017] sd 0:0:1:0: [sdb] 20971520 512-byte logical blocks: (10.7 GB/10.0 GiB) [ 2.071099] sd 0:0:2:0: [sdc] 1069547520 512-byte logical blocks: (547 GB/510 GiB) And this only show me the partitions and their total size. Doesn't show the used and the available I also found this command: # btrfs fi df /data/ # btrfs fi df -h /data/ OUTPUT: Data, RAID1: total=71.00GiB, used=63.40GiB System, RAID1: total=8.00MiB, used=16.00KiB Metadata, RAID1: total=4.00GiB, used=2.29GiB GlobalReserve, single: total=512.00MiB, used=0.00B But they don't work for me, i don't know if i'm missing something Please see more: [[URL] SERVER DETAILS: NAME="Red Hat Enterprise Linux Server" VERSION="7.6 (Maipo)" ID="rhel" ID_LIKE="fedora" VARIANT="Server" VARIANT_ID="server" VERSION_ID="7.6" PRETTY_NAME="Red Hat Enterprise Linux" ANSI_COLOR="0;31" Thank's in advance for any help.
|
linux, server, command, redhat
| 2
| 2,788
| 1
|
https://stackoverflow.com/questions/67738422/check-the-total-used-and-available-hard-drive-disk-space-in-red-hat-enterprise
|
66,756,673
|
Is there a RESTful Interface for Executing DRL Business Rules via Red Hat's Process Automation Manager / KIE Decision Server?
|
I'm trying to set up a few basic "hello world" business rules using Red Hat's Process Automation Manager (7.10.0). There's a few ways to do this - DMN, Guided Decision Tables, Spreadsheets, DRL (Drools), etc. I'm mostly interested in evaluating "raw rules" rather than setting-up a "process" or making "decisions". For example, validating the format of a coordinate pair (latitude and longitude). As such, I'm opting for DRL rule definition for my initial use case. Question : Once I define a DRL business rule, is there a way to test it via the Swagger UI RESTful service deployed with the KIE Server? This is easy enough to do with DMN or Guided Decision Tables, but all of the documentation surrounding execution of DRL rules requires writing a Client (like Java or Maven).
|
Is there a RESTful Interface for Executing DRL Business Rules via Red Hat's Process Automation Manager / KIE Decision Server? I'm trying to set up a few basic "hello world" business rules using Red Hat's Process Automation Manager (7.10.0). There's a few ways to do this - DMN, Guided Decision Tables, Spreadsheets, DRL (Drools), etc. I'm mostly interested in evaluating "raw rules" rather than setting-up a "process" or making "decisions". For example, validating the format of a coordinate pair (latitude and longitude). As such, I'm opting for DRL rule definition for my initial use case. Question : Once I define a DRL business rule, is there a way to test it via the Swagger UI RESTful service deployed with the KIE Server? This is easy enough to do with DMN or Guided Decision Tables, but all of the documentation surrounding execution of DRL rules requires writing a Client (like Java or Maven).
|
drools, redhat, kie, redhat-brms, redhat-bpm
| 2
| 539
| 1
|
https://stackoverflow.com/questions/66756673/is-there-a-restful-interface-for-executing-drl-business-rules-via-red-hats-proc
|
65,854,465
|
OKD 4.5 single node installation
|
I'm trying to build an OKD 4.5 single node cluster following Craig Robinson blog post (at [URL] ). I faced with this issue first on bootstrap node, but after deleting and recreating the whole process again, it booted up successfully. But the same issue happened again while preparing control plane master node. After initial coreos download (which proves webserver is working fine), I get this recurring GET error message over and over again: ignition[xxx]: GET error: Get "[URL] EOF And this is my control plane node config: ip=10.106.31.233::10.106.31.1:255.255.255.0:::none nameserver=10.106.31.231 coreos.inst.install_dev=/dev/sda coreos.inst.image_url=[URL] fcos.raw.xz coreos.inst.ignition_url=[URL] IPs are: okd-services: 10.106.31.231 ; bootstrap: 10.106.31.232 ; control-plane: 10.106.31.233 I can reach the [URL] address from remote pc and list the contents including master.ign file. Also pinging "api-int.lab.okd.local" is successful too. firewalld open ports on okd-services node are: [root@okd4-services ~]# ss -ltu Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port Process udp UNCONN 0 0 0.0.0.0:hostmon 0.0.0.0:* udp UNCONN 0 0 10.106.31.231:domain 0.0.0.0:* udp UNCONN 0 0 127.0.0.1:domain 0.0.0.0:* udp UNCONN 0 0 127.0.0.53%lo:domain 0.0.0.0:* udp UNCONN 0 0 [::]:hostmon [::]:* udp UNCONN 0 0 [::]:domain [::]:* tcp LISTEN 0 128 0.0.0.0:ssh 0.0.0.0:* tcp LISTEN 0 4096 127.0.0.1:rndc 0.0.0.0:* tcp LISTEN 0 4096 0.0.0.0:https 0.0.0.0:* tcp LISTEN 0 4096 0.0.0.0:22623 0.0.0.0:* tcp LISTEN 0 4096 0.0.0.0:cslistener 0.0.0.0:* tcp LISTEN 0 4096 0.0.0.0:sun-sr-https 0.0.0.0:* tcp LISTEN 0 4096 0.0.0.0:hostmon 0.0.0.0:* tcp LISTEN 0 4096 0.0.0.0:http 0.0.0.0:* tcp LISTEN 0 10 10.106.31.231:domain 0.0.0.0:* tcp LISTEN 0 10 127.0.0.1:domain 0.0.0.0:* tcp LISTEN 0 4096 127.0.0.53%lo:domain 0.0.0.0:* tcp LISTEN 0 128 [::]:ssh [::]:* tcp LISTEN 0 4096 [::1]:rndc [::]:* tcp LISTEN 0 4096 [::]:hostmon [::]:* tcp LISTEN 0 511 *:webcache *:* tcp LISTEN 0 10 [::]:domain [::]:* the output of the dig test on okd-services node is: [root@okd4-services ~]# dig -x 10.106.31.231 ; <<>> DiG 9.11.25-RedHat-9.11.25-2.fc33 <<>> -x 10.106.31.231 ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 60620 ;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 65494 ;; QUESTION SECTION: ;231.31.106.10.in-addr.arpa. IN PTR ;; ANSWER SECTION: 231.31.106.10.in-addr.arpa. 604800 IN PTR api-int.lab.okd.local. 231.31.106.10.in-addr.arpa. 604800 IN PTR api.lab.okd.local. 231.31.106.10.in-addr.arpa. 604800 IN PTR okd4-services.okd.local. ;; SERVER: 127.0.0.53#53(127.0.0.53) I deleted and recreated the control plane to see if it solved the issue, but was not successful. Any idea what this issue means?
|
OKD 4.5 single node installation I'm trying to build an OKD 4.5 single node cluster following Craig Robinson blog post (at [URL] ). I faced with this issue first on bootstrap node, but after deleting and recreating the whole process again, it booted up successfully. But the same issue happened again while preparing control plane master node. After initial coreos download (which proves webserver is working fine), I get this recurring GET error message over and over again: ignition[xxx]: GET error: Get "[URL] EOF And this is my control plane node config: ip=10.106.31.233::10.106.31.1:255.255.255.0:::none nameserver=10.106.31.231 coreos.inst.install_dev=/dev/sda coreos.inst.image_url=[URL] fcos.raw.xz coreos.inst.ignition_url=[URL] IPs are: okd-services: 10.106.31.231 ; bootstrap: 10.106.31.232 ; control-plane: 10.106.31.233 I can reach the [URL] address from remote pc and list the contents including master.ign file. Also pinging "api-int.lab.okd.local" is successful too. firewalld open ports on okd-services node are: [root@okd4-services ~]# ss -ltu Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port Process udp UNCONN 0 0 0.0.0.0:hostmon 0.0.0.0:* udp UNCONN 0 0 10.106.31.231:domain 0.0.0.0:* udp UNCONN 0 0 127.0.0.1:domain 0.0.0.0:* udp UNCONN 0 0 127.0.0.53%lo:domain 0.0.0.0:* udp UNCONN 0 0 [::]:hostmon [::]:* udp UNCONN 0 0 [::]:domain [::]:* tcp LISTEN 0 128 0.0.0.0:ssh 0.0.0.0:* tcp LISTEN 0 4096 127.0.0.1:rndc 0.0.0.0:* tcp LISTEN 0 4096 0.0.0.0:https 0.0.0.0:* tcp LISTEN 0 4096 0.0.0.0:22623 0.0.0.0:* tcp LISTEN 0 4096 0.0.0.0:cslistener 0.0.0.0:* tcp LISTEN 0 4096 0.0.0.0:sun-sr-https 0.0.0.0:* tcp LISTEN 0 4096 0.0.0.0:hostmon 0.0.0.0:* tcp LISTEN 0 4096 0.0.0.0:http 0.0.0.0:* tcp LISTEN 0 10 10.106.31.231:domain 0.0.0.0:* tcp LISTEN 0 10 127.0.0.1:domain 0.0.0.0:* tcp LISTEN 0 4096 127.0.0.53%lo:domain 0.0.0.0:* tcp LISTEN 0 128 [::]:ssh [::]:* tcp LISTEN 0 4096 [::1]:rndc [::]:* tcp LISTEN 0 4096 [::]:hostmon [::]:* tcp LISTEN 0 511 *:webcache *:* tcp LISTEN 0 10 [::]:domain [::]:* the output of the dig test on okd-services node is: [root@okd4-services ~]# dig -x 10.106.31.231 ; <<>> DiG 9.11.25-RedHat-9.11.25-2.fc33 <<>> -x 10.106.31.231 ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 60620 ;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 65494 ;; QUESTION SECTION: ;231.31.106.10.in-addr.arpa. IN PTR ;; ANSWER SECTION: 231.31.106.10.in-addr.arpa. 604800 IN PTR api-int.lab.okd.local. 231.31.106.10.in-addr.arpa. 604800 IN PTR api.lab.okd.local. 231.31.106.10.in-addr.arpa. 604800 IN PTR okd4-services.okd.local. ;; SERVER: 127.0.0.53#53(127.0.0.53) I deleted and recreated the control plane to see if it solved the issue, but was not successful. Any idea what this issue means?
|
openshift, redhat, openshift-origin, okd, redhat-containers
| 2
| 1,787
| 1
|
https://stackoverflow.com/questions/65854465/okd-4-5-single-node-installation
|
64,672,116
|
How to download OpenJDK for Windows from Red Hat
|
I need a JDK for VSCode Java Extension Pack. It wants the Red Hat OpenJDK v11 or later. Despite having installed Oracle JDK v8, 11.0.5, 13.0.1, and 15.0.1 it still wants the one from Red Hat. I'm using Windows. When going to the download location [URL] I'm greeted with "Start today with Red Hat's implementation of OpenJDK— a free and open source implementation of the Java Platform, Standard Edition (Java SE)"; emphasis is mine. Upon clicking on a link to download the "jdk-11.0.8-x64 MSI" installer for Windows I have to log in and then provide a whole lot of personal details including employer details, private address, and accepting an Enterprise Agreement, which mentions some fees . Is it possible to download this "Free and open source" OpenJDK under reasonable terms? I won't be blindly accepting an agreement worded like that, because I don't want to sign any Enterprise Agreement with Red Hat as a casual independent developer. Even Oracle doesn't require me to log in to be able to download JDK. If only their SDK worked... Please help. Thanks in advance.
|
How to download OpenJDK for Windows from Red Hat I need a JDK for VSCode Java Extension Pack. It wants the Red Hat OpenJDK v11 or later. Despite having installed Oracle JDK v8, 11.0.5, 13.0.1, and 15.0.1 it still wants the one from Red Hat. I'm using Windows. When going to the download location [URL] I'm greeted with "Start today with Red Hat's implementation of OpenJDK— a free and open source implementation of the Java Platform, Standard Edition (Java SE)"; emphasis is mine. Upon clicking on a link to download the "jdk-11.0.8-x64 MSI" installer for Windows I have to log in and then provide a whole lot of personal details including employer details, private address, and accepting an Enterprise Agreement, which mentions some fees . Is it possible to download this "Free and open source" OpenJDK under reasonable terms? I won't be blindly accepting an agreement worded like that, because I don't want to sign any Enterprise Agreement with Red Hat as a casual independent developer. Even Oracle doesn't require me to log in to be able to download JDK. If only their SDK worked... Please help. Thanks in advance.
|
java, visual-studio-code, download, redhat
| 2
| 6,593
| 1
|
https://stackoverflow.com/questions/64672116/how-to-download-openjdk-for-windows-from-red-hat
|
64,050,102
|
Ansible - Unarchive specifics files from war archive
|
I would like to extract two files from directory from war archive. The two files exists in /WEB-INF/classes/ I have tried: - name: create application.properties in /tmp unarchive: src: "/application/webapps/application.war" dest: "/tmp" remote_src: yes extra_opts: - -j - WEB-INF/classes/application.properties - WEB-INF/classes/logback.xml Error : "err":"unzip: cannot find or open /WEB-INF/classes/application.properties" But it doesn't work of course. Any idea?
|
Ansible - Unarchive specifics files from war archive I would like to extract two files from directory from war archive. The two files exists in /WEB-INF/classes/ I have tried: - name: create application.properties in /tmp unarchive: src: "/application/webapps/application.war" dest: "/tmp" remote_src: yes extra_opts: - -j - WEB-INF/classes/application.properties - WEB-INF/classes/logback.xml Error : "err":"unzip: cannot find or open /WEB-INF/classes/application.properties" But it doesn't work of course. Any idea?
|
spring-boot, ansible, redhat
| 2
| 3,061
| 2
|
https://stackoverflow.com/questions/64050102/ansible-unarchive-specifics-files-from-war-archive
|
63,728,277
|
Extract names from PIV smartcard
|
I am trying to extract the following from a PIV Smartcard: Subject Common Name Certificate Subject Alt Name / Microsoft Principal Name I am using RedHat 6 (eventually 7) and CoolKey as my PKCS11 module. I need a way to extract this information via code without requiring the smartcard pin, be it from shell commands or a smartcard library. Currently I can get the Common Name by using the shell command 'pkcs11-tools --module -T' so the Subject Alt Name is truly what I am after, but I would like to find a better way to get the Common Name if available. I know this information is available without entering the pin as I can view it all in the included Smartcard Manager on RHEL (esc). I have a certificate chain of root, intermediate, and subordinate if that matters. My thoughts are I have to extract the certificate from the card, verify that certificate with my local CAs, and then decrypt it. I have spent days reading documentation on APDUs, smartcards, and openssl and have gotten nowhere. edit view of RHEL smart card manager: This is what the smart card viewer shows when you open the card and view the details. The Microsoft Principal Name is what I'm looking to extract from the card, as well as the "common name" which is displayed in the Hierarchy portion as well as other spots, shown by the red text. I actually have since switched to using pkcs15-tool, as pkcs11-tool cutoff longer common names (you can see this in the title bar of the screenshot, same issue). Output of: 'pkcs15-tool --list-info' Using reader with a card: <reader name> PKCS#15 Card [LASTNAME.FIRSTNAME.MIDDLENAME.12345678]: Version : 0 Serial number : <big string> Manufacturer ID : piv_II Flags : My current method is simply parsing the string in brackets as the common name and having users enter the Alt Name manually using the Redhat smartcard tool.
|
Extract names from PIV smartcard I am trying to extract the following from a PIV Smartcard: Subject Common Name Certificate Subject Alt Name / Microsoft Principal Name I am using RedHat 6 (eventually 7) and CoolKey as my PKCS11 module. I need a way to extract this information via code without requiring the smartcard pin, be it from shell commands or a smartcard library. Currently I can get the Common Name by using the shell command 'pkcs11-tools --module -T' so the Subject Alt Name is truly what I am after, but I would like to find a better way to get the Common Name if available. I know this information is available without entering the pin as I can view it all in the included Smartcard Manager on RHEL (esc). I have a certificate chain of root, intermediate, and subordinate if that matters. My thoughts are I have to extract the certificate from the card, verify that certificate with my local CAs, and then decrypt it. I have spent days reading documentation on APDUs, smartcards, and openssl and have gotten nowhere. edit view of RHEL smart card manager: This is what the smart card viewer shows when you open the card and view the details. The Microsoft Principal Name is what I'm looking to extract from the card, as well as the "common name" which is displayed in the Hierarchy portion as well as other spots, shown by the red text. I actually have since switched to using pkcs15-tool, as pkcs11-tool cutoff longer common names (you can see this in the title bar of the screenshot, same issue). Output of: 'pkcs15-tool --list-info' Using reader with a card: <reader name> PKCS#15 Card [LASTNAME.FIRSTNAME.MIDDLENAME.12345678]: Version : 0 Serial number : <big string> Manufacturer ID : piv_II Flags : My current method is simply parsing the string in brackets as the common name and having users enter the Alt Name manually using the Redhat smartcard tool.
|
openssl, redhat, smartcard, apdu, pkcs#11
| 2
| 1,141
| 1
|
https://stackoverflow.com/questions/63728277/extract-names-from-piv-smartcard
|
63,038,515
|
In Openshift, how can I create a new build with an environment variable that's value is a secret using the CLI?
|
I have the following command. oc new-build gen-dev/genbuilder:latest~ssh://git@mycompany.net:7999/gen/pfs-converter.git#DEV1 \ --source-secret='privatekey' \ --name='testbuild' \ --env=KEY=VALUE I would like to set the environment variables to have some secret values because the build will fail without them and I need to do it before this command takes places because new build immediately builds a new container.
|
In Openshift, how can I create a new build with an environment variable that's value is a secret using the CLI? I have the following command. oc new-build gen-dev/genbuilder:latest~ssh://git@mycompany.net:7999/gen/pfs-converter.git#DEV1 \ --source-secret='privatekey' \ --name='testbuild' \ --env=KEY=VALUE I would like to set the environment variables to have some secret values because the build will fail without them and I need to do it before this command takes places because new build immediately builds a new container.
|
docker, kubernetes, openshift, redhat
| 2
| 1,843
| 1
|
https://stackoverflow.com/questions/63038515/in-openshift-how-can-i-create-a-new-build-with-an-environment-variable-thats-v
|
60,882,335
|
srun: error: Slurm controller not responding, sleeping and retrying
|
Running the following command in Slurm: $ srun -J FRD_gpu --partition=gpu --gres=gpu:1 --time=0-02:59:00 --mem=2000 --ntasks=1 --cpus-per-task=1 --pty /bin/bash -i Returns the following error: srun: error: Slurm controller not responding, sleeping and retrying. The Slurm controller seems to be up: $ scontrol ping Slurmctld(primary) at narvi-install is UP Any idea why and how to resolve this? $ scontrol -V slurm 18.08.8 System info: gcc version 4.8.5 20150623 (Red Hat 4.8.5-39) (GCC) $ sinfo PARTITION AVAIL TIMELIMIT NODES STATE NODELIST normal up 7-00:00:00 1 drain* me99 normal up 7-00:00:00 3 down* me[64-65,97] normal up 7-00:00:00 1 drain me89 normal up 7-00:00:00 23 mix me[55,67,86,88,90-94,96,98,100-101],na[27,41-42,44-45,47-49,51-52] normal up 7-00:00:00 84 alloc me[56-63,66,68-74,76-81,83-85,87,95,102,153-158],na[01-26,28-40,43,46,50,53-60] normal up 7-00:00:00 3 idle me[82,151-152] test* up 4:00:00 1 drain* me99 test* up 4:00:00 3 down* me[64-65,97] test* up 4:00:00 2 drain me[04,89] test* up 4:00:00 27 mix me[55,67,86,88,90-94,96,98,100-101,248,260],meg[11-12],na[27,41-42,44-45,47-49,51-52] test* up 4:00:00 130 alloc me[56-63,66,68-74,76-81,83-85,87,95,102,153-158,233-247,249-259,261-280],na[01-26,28-40,43,46,50,53-60] test* up 4:00:00 14 idle me[01-03,50-54,82,151-152],meg10,nag[01,14] grid up 7-00:00:00 10 mix na[27,41-42,44-45,47-49,51-52] grid up 7-00:00:00 42 alloc na[01-26,28-32,43,46,50,53-60] gpu up 7-00:00:00 15 mix meg[11-12],nag[02-10,12-13,16-17] gpu up 7-00:00:00 4 idle meg10,nag[01,11,15]
|
srun: error: Slurm controller not responding, sleeping and retrying Running the following command in Slurm: $ srun -J FRD_gpu --partition=gpu --gres=gpu:1 --time=0-02:59:00 --mem=2000 --ntasks=1 --cpus-per-task=1 --pty /bin/bash -i Returns the following error: srun: error: Slurm controller not responding, sleeping and retrying. The Slurm controller seems to be up: $ scontrol ping Slurmctld(primary) at narvi-install is UP Any idea why and how to resolve this? $ scontrol -V slurm 18.08.8 System info: gcc version 4.8.5 20150623 (Red Hat 4.8.5-39) (GCC) $ sinfo PARTITION AVAIL TIMELIMIT NODES STATE NODELIST normal up 7-00:00:00 1 drain* me99 normal up 7-00:00:00 3 down* me[64-65,97] normal up 7-00:00:00 1 drain me89 normal up 7-00:00:00 23 mix me[55,67,86,88,90-94,96,98,100-101],na[27,41-42,44-45,47-49,51-52] normal up 7-00:00:00 84 alloc me[56-63,66,68-74,76-81,83-85,87,95,102,153-158],na[01-26,28-40,43,46,50,53-60] normal up 7-00:00:00 3 idle me[82,151-152] test* up 4:00:00 1 drain* me99 test* up 4:00:00 3 down* me[64-65,97] test* up 4:00:00 2 drain me[04,89] test* up 4:00:00 27 mix me[55,67,86,88,90-94,96,98,100-101,248,260],meg[11-12],na[27,41-42,44-45,47-49,51-52] test* up 4:00:00 130 alloc me[56-63,66,68-74,76-81,83-85,87,95,102,153-158,233-247,249-259,261-280],na[01-26,28-40,43,46,50,53-60] test* up 4:00:00 14 idle me[01-03,50-54,82,151-152],meg10,nag[01,14] grid up 7-00:00:00 10 mix na[27,41-42,44-45,47-49,51-52] grid up 7-00:00:00 42 alloc na[01-26,28-32,43,46,50,53-60] gpu up 7-00:00:00 15 mix meg[11-12],nag[02-10,12-13,16-17] gpu up 7-00:00:00 4 idle meg10,nag[01,11,15]
|
ssh, error-handling, gpu, redhat, slurm
| 2
| 2,062
| 1
|
https://stackoverflow.com/questions/60882335/srun-error-slurm-controller-not-responding-sleeping-and-retrying
|
60,385,105
|
I am unable to install PostgreSQL 12 on Red Hat 8.1 after following official instructions using dnf
|
Official Instructions I have been trying to install PostgreSQL 12 but I get the following error: No match for argument: postgresql12 Error: Unable to find a match: postgresql12 When I run the command sudo dnf repolist this is what I get: $ sudo dnf repolist Updating Subscription Management repositories. This system is registered to Red Hat Subscription Management, but is not receiving updates. You can use subscription-manager to assign subscriptions. Last metadata expiration check: 0:04:00 ago on Mon 24 Feb 2020 05:44:02 PM EST. Modular dependency problems: Problem 1: conflicting requests - nothing provides module(perl:5.26) needed by module perl-DBD-SQLite:1.58:8010020190322125518:073fa5fe-0.x86_64 Problem 2: conflicting requests - nothing provides module(perl:5.26) needed by module perl-DBI:1.641:8010020190322130042:16b3ab4d-0.x86_64 repo id repo name status *epel Extra Packages for Enterprise Linux 8 - x86_64 4,885 *epel-modular Extra Packages for Enterprise Linux Modular 8 - x86_64 0 pgdg10 PostgreSQL 10 for RHEL/CentOS 8 - x86_64 793 pgdg11 PostgreSQL 11 for RHEL/CentOS 8 - x86_64 838 pgdg12 PostgreSQL 12 for RHEL/CentOS 8 - x86_64 635 pgdg94 PostgreSQL 9.4 for RHEL/CentOS 8 - x86_64 346 pgdg95 PostgreSQL 9.5 for RHEL/CentOS 8 - x86_64 516 pgdg96 PostgreSQL 9.6 for RHEL/CentOS 8 - x86_64 761 When I run the command sudo yum list postgresql12* here is what I get: sudo yum list postgresql12* Updating Subscription Management repositories. This system is registered to Red Hat Subscription Management, but is not receiving updates. You can use subscription-manager to assign subscriptions. Last metadata expiration check: 0:06:49 ago on Mon 24 Feb 2020 05:44:02 PM EST. Modular dependency problems: Problem 1: conflicting requests - nothing provides module(perl:5.26) needed by module perl-DBD-SQLite:1.58:8010020190322125518:073fa5fe-0.x86_64 Problem 2: conflicting requests - nothing provides module(perl:5.26) needed by module perl-DBI:1.641:8010020190322130042:16b3ab4d-0.x86_64 Available Packages postgresql12-contrib-debuginfo.x86_64 12.2-2PGDG.rhel8 pgdg12 postgresql12-debuginfo.x86_64 12.2-2PGDG.rhel8 pgdg12 postgresql12-debugsource.x86_64 12.2-2PGDG.rhel8 pgdg12 postgresql12-devel.x86_64 12.2-2PGDG.rhel8 pgdg12 postgresql12-devel-debuginfo.x86_64 12.2-2PGDG.rhel8 pgdg12 postgresql12-libs.x86_64 12.2-2PGDG.rhel8 pgdg12 postgresql12-libs-debuginfo.x86_64 12.2-2PGDG.rhel8 pgdg12 postgresql12-llvmjit.x86_64 12.2-2PGDG.rhel8 pgdg12 postgresql12-llvmjit-debuginfo.x86_64 12.2-2PGDG.rhel8 pgdg12 postgresql12-odbc.x86_64 12.01.0000-1PGDG.rhel8 pgdg12 postgresql12-plperl-debuginfo.x86_64 12.2-2PGDG.rhel8 pgdg12 postgresql12-plpython.x86_64 12.2-2PGDG.rhel8 pgdg12 postgresql12-plpython-debuginfo.x86_64 12.2-2PGDG.rhel8 pgdg12 postgresql12-plpython3-debuginfo.x86_64 12.2-2PGDG.rhel8 pgdg12 postgresql12-pltcl-debuginfo.x86_64 12.2-2PGDG.rhel8 pgdg12 postgresql12-server-debuginfo.x86_64 12.2-2PGDG.rhel8 pgdg12 postgresql12-test-debuginfo.x86_64 12.2-2PGDG.rhel8 pgdg12 Nothing happens when I try to disable the postgresql module: sudo dnf -qy module disable postgresql Error: Problems in request: Modular dependency problems: Problem 1: conflicting requests - nothing provides module(perl:5.26) needed by module perl-DBD-SQLite:1.58:8010020190322125518:073fa5fe-0.x86_64 Problem 2: conflicting requests - nothing provides module(perl:5.26) needed by module perl-DBI:1.641:8010020190322130042:16b3ab4d-0.x86_64
|
I am unable to install PostgreSQL 12 on Red Hat 8.1 after following official instructions using dnf Official Instructions I have been trying to install PostgreSQL 12 but I get the following error: No match for argument: postgresql12 Error: Unable to find a match: postgresql12 When I run the command sudo dnf repolist this is what I get: $ sudo dnf repolist Updating Subscription Management repositories. This system is registered to Red Hat Subscription Management, but is not receiving updates. You can use subscription-manager to assign subscriptions. Last metadata expiration check: 0:04:00 ago on Mon 24 Feb 2020 05:44:02 PM EST. Modular dependency problems: Problem 1: conflicting requests - nothing provides module(perl:5.26) needed by module perl-DBD-SQLite:1.58:8010020190322125518:073fa5fe-0.x86_64 Problem 2: conflicting requests - nothing provides module(perl:5.26) needed by module perl-DBI:1.641:8010020190322130042:16b3ab4d-0.x86_64 repo id repo name status *epel Extra Packages for Enterprise Linux 8 - x86_64 4,885 *epel-modular Extra Packages for Enterprise Linux Modular 8 - x86_64 0 pgdg10 PostgreSQL 10 for RHEL/CentOS 8 - x86_64 793 pgdg11 PostgreSQL 11 for RHEL/CentOS 8 - x86_64 838 pgdg12 PostgreSQL 12 for RHEL/CentOS 8 - x86_64 635 pgdg94 PostgreSQL 9.4 for RHEL/CentOS 8 - x86_64 346 pgdg95 PostgreSQL 9.5 for RHEL/CentOS 8 - x86_64 516 pgdg96 PostgreSQL 9.6 for RHEL/CentOS 8 - x86_64 761 When I run the command sudo yum list postgresql12* here is what I get: sudo yum list postgresql12* Updating Subscription Management repositories. This system is registered to Red Hat Subscription Management, but is not receiving updates. You can use subscription-manager to assign subscriptions. Last metadata expiration check: 0:06:49 ago on Mon 24 Feb 2020 05:44:02 PM EST. Modular dependency problems: Problem 1: conflicting requests - nothing provides module(perl:5.26) needed by module perl-DBD-SQLite:1.58:8010020190322125518:073fa5fe-0.x86_64 Problem 2: conflicting requests - nothing provides module(perl:5.26) needed by module perl-DBI:1.641:8010020190322130042:16b3ab4d-0.x86_64 Available Packages postgresql12-contrib-debuginfo.x86_64 12.2-2PGDG.rhel8 pgdg12 postgresql12-debuginfo.x86_64 12.2-2PGDG.rhel8 pgdg12 postgresql12-debugsource.x86_64 12.2-2PGDG.rhel8 pgdg12 postgresql12-devel.x86_64 12.2-2PGDG.rhel8 pgdg12 postgresql12-devel-debuginfo.x86_64 12.2-2PGDG.rhel8 pgdg12 postgresql12-libs.x86_64 12.2-2PGDG.rhel8 pgdg12 postgresql12-libs-debuginfo.x86_64 12.2-2PGDG.rhel8 pgdg12 postgresql12-llvmjit.x86_64 12.2-2PGDG.rhel8 pgdg12 postgresql12-llvmjit-debuginfo.x86_64 12.2-2PGDG.rhel8 pgdg12 postgresql12-odbc.x86_64 12.01.0000-1PGDG.rhel8 pgdg12 postgresql12-plperl-debuginfo.x86_64 12.2-2PGDG.rhel8 pgdg12 postgresql12-plpython.x86_64 12.2-2PGDG.rhel8 pgdg12 postgresql12-plpython-debuginfo.x86_64 12.2-2PGDG.rhel8 pgdg12 postgresql12-plpython3-debuginfo.x86_64 12.2-2PGDG.rhel8 pgdg12 postgresql12-pltcl-debuginfo.x86_64 12.2-2PGDG.rhel8 pgdg12 postgresql12-server-debuginfo.x86_64 12.2-2PGDG.rhel8 pgdg12 postgresql12-test-debuginfo.x86_64 12.2-2PGDG.rhel8 pgdg12 Nothing happens when I try to disable the postgresql module: sudo dnf -qy module disable postgresql Error: Problems in request: Modular dependency problems: Problem 1: conflicting requests - nothing provides module(perl:5.26) needed by module perl-DBD-SQLite:1.58:8010020190322125518:073fa5fe-0.x86_64 Problem 2: conflicting requests - nothing provides module(perl:5.26) needed by module perl-DBI:1.641:8010020190322130042:16b3ab4d-0.x86_64
|
postgresql, redhat, yum, dnf
| 2
| 11,098
| 2
|
https://stackoverflow.com/questions/60385105/i-am-unable-to-install-postgresql-12-on-red-hat-8-1-after-following-official-ins
|
59,329,759
|
Invalid SSL mode for remote PostgreSQL connection
|
For context: Using a Azure cloud instance of a PostGRES database. Using RHEL7 with openssl and openssl-dev installed. Using python2.7. I can import SSL in python2.7 shell without issue. I can connect to a locally hosted PostGRES database using psycopg2 without issue. When I try connecting to the remote database using sslmode='require' I receive an OperationalError that sslmode value "require" invalid when SSL support is not compiled in. Looking at the SSL settings for the PostGRES instance in Azure, I see that the SSL mode is "prefer", however if I try to use that for the psycopg2 connection, I'm told that a SSL connection is required. For the record, I have no issue connecting to this remote database using python3.7 from a Windows 10 machine. This leads me to believe that there isn't some configuration issue with the remote instance, and that the issue lies somewhere in RHEL7 and python2.7. Has anyone else ran into this issue? edit: Pyscopg2 was installed in a virtual environment using 'pip install psycopg2'
|
Invalid SSL mode for remote PostgreSQL connection For context: Using a Azure cloud instance of a PostGRES database. Using RHEL7 with openssl and openssl-dev installed. Using python2.7. I can import SSL in python2.7 shell without issue. I can connect to a locally hosted PostGRES database using psycopg2 without issue. When I try connecting to the remote database using sslmode='require' I receive an OperationalError that sslmode value "require" invalid when SSL support is not compiled in. Looking at the SSL settings for the PostGRES instance in Azure, I see that the SSL mode is "prefer", however if I try to use that for the psycopg2 connection, I'm told that a SSL connection is required. For the record, I have no issue connecting to this remote database using python3.7 from a Windows 10 machine. This leads me to believe that there isn't some configuration issue with the remote instance, and that the issue lies somewhere in RHEL7 and python2.7. Has anyone else ran into this issue? edit: Pyscopg2 was installed in a virtual environment using 'pip install psycopg2'
|
postgresql, python-2.7, redhat, psycopg2
| 2
| 2,702
| 1
|
https://stackoverflow.com/questions/59329759/invalid-ssl-mode-for-remote-postgresql-connection
|
57,082,540
|
mod_wsgi - Fatal Python error: initfsencoding: unable to load the file system codec
|
Using Red Hat, apache 2.4.6, worker mpm, mod_wsgi 4.6.5, and Python 3.7 When I start httpd I get the above error and: ModuleNotFoundError: No module named 'encodings' In the httpd error_log. I'm using a python virtual environment created from a python installed from source under my home directory. I installed mod_wsgi from source using --with-python= option pointing to the python binary in my virtual environment, then I copied the mod_wsgi.so file into my apache modules directory as mod_wsgi37.so I ran ldd on this file, and have a .conf file loading it into httpd like this: LoadFile /home/myUser/pythonbuild/lib/libpython3.7m.so.1.0 LoadModule wsgi_module modules/mod_wsgi37.so Then within my VirtualHost I have: WSGIDaemonProcess wsgi group=www threads=12 processes=2 python-path=/var/ www/wsgi-scripts python-home=/var/www/wsgi-scripts/wsgi_env3 WSGIProcessGroup wsgi WSGIScriptAlias /test /var/www/wsgi-scripts/test.py from my virtual environment: sys.prefix: '/var/www/wsgi-scripts/wsgi_env3' sys.real_prefix: '/home/myUser/pythonbuild' When I switch to the system-installed mod_wsgi/python combo (remove python-home line from WSGIDaemonProcess, and change the .conf file to load the original mod_wsgi.so) it works fine. It seems like some path variables aren't getting set properly. Is there another way to set variables like PYTHONHOME that I'm missing? How can I fix my install?
|
mod_wsgi - Fatal Python error: initfsencoding: unable to load the file system codec Using Red Hat, apache 2.4.6, worker mpm, mod_wsgi 4.6.5, and Python 3.7 When I start httpd I get the above error and: ModuleNotFoundError: No module named 'encodings' In the httpd error_log. I'm using a python virtual environment created from a python installed from source under my home directory. I installed mod_wsgi from source using --with-python= option pointing to the python binary in my virtual environment, then I copied the mod_wsgi.so file into my apache modules directory as mod_wsgi37.so I ran ldd on this file, and have a .conf file loading it into httpd like this: LoadFile /home/myUser/pythonbuild/lib/libpython3.7m.so.1.0 LoadModule wsgi_module modules/mod_wsgi37.so Then within my VirtualHost I have: WSGIDaemonProcess wsgi group=www threads=12 processes=2 python-path=/var/ www/wsgi-scripts python-home=/var/www/wsgi-scripts/wsgi_env3 WSGIProcessGroup wsgi WSGIScriptAlias /test /var/www/wsgi-scripts/test.py from my virtual environment: sys.prefix: '/var/www/wsgi-scripts/wsgi_env3' sys.real_prefix: '/home/myUser/pythonbuild' When I switch to the system-installed mod_wsgi/python combo (remove python-home line from WSGIDaemonProcess, and change the .conf file to load the original mod_wsgi.so) it works fine. It seems like some path variables aren't getting set properly. Is there another way to set variables like PYTHONHOME that I'm missing? How can I fix my install?
|
python, apache, redhat, wsgi
| 2
| 3,472
| 2
|
https://stackoverflow.com/questions/57082540/mod-wsgi-fatal-python-error-initfsencoding-unable-to-load-the-file-system-co
|
56,863,473
|
SELinux Policy Creation Error regarding Memory and Context Structure
|
I am working on RHEL 7. I have log files of another machine stored here. I used the following command to create policy : grep -inr "denied" audit.log* | audit2allow -M Policy_File_Name Using this command, I was able to create policy for many of log files. But in some cases I encountered this error : Traceback (most recent call last): File "/usr/bin/audit2allow", line 365, in <module> app.main() File "/usr/bin/audit2allow", line 352, in main self.__process_input() File "/usr/bin/audit2allow", line 180, in __process_input self.__avs = self.__parser.to_access() File "/usr/lib64/python2.7/site-packages/sepolgen/audit.py", line 591, in to_access avc.path = self.__restore_path(avc.name, avc.ino) File "/usr/lib64/python2.7/site-packages/sepolgen/audit.py", line 531, in __restore_path universal_newlines=True) File "/usr/lib64/python2.7/subprocess.py", line 568, in check_output process = Popen(stdout=PIPE, *popenargs, **kwargs) File "/usr/lib64/python2.7/subprocess.py", line 711, in __init__ errread, errwrite) File "/usr/lib64/python2.7/subprocess.py", line 1224, in _execute_child self.pid = os.fork() OSError: [Errno 12] Cannot allocate memory And for few I encountered this error : libsepol.context_from_record: type celery_t is not defined libsepol.context_from_record: could not create context structure libsepol.context_from_string: could not create context structure libsepol.sepol_context_to_sid: could not convert system_u:system_r:celery_t:s0 to sid Here 'celery_t' changes with respect to target context. System Condition : [root@selinux-policy-creation abhisheklog]# free -h total used free shared buff/cache available Mem: 31G 261M 27G 8.4M 3.1G 30GB Swap: 0B 0B 0B Please provide with Cause and Solution. Thanks.
|
SELinux Policy Creation Error regarding Memory and Context Structure I am working on RHEL 7. I have log files of another machine stored here. I used the following command to create policy : grep -inr "denied" audit.log* | audit2allow -M Policy_File_Name Using this command, I was able to create policy for many of log files. But in some cases I encountered this error : Traceback (most recent call last): File "/usr/bin/audit2allow", line 365, in <module> app.main() File "/usr/bin/audit2allow", line 352, in main self.__process_input() File "/usr/bin/audit2allow", line 180, in __process_input self.__avs = self.__parser.to_access() File "/usr/lib64/python2.7/site-packages/sepolgen/audit.py", line 591, in to_access avc.path = self.__restore_path(avc.name, avc.ino) File "/usr/lib64/python2.7/site-packages/sepolgen/audit.py", line 531, in __restore_path universal_newlines=True) File "/usr/lib64/python2.7/subprocess.py", line 568, in check_output process = Popen(stdout=PIPE, *popenargs, **kwargs) File "/usr/lib64/python2.7/subprocess.py", line 711, in __init__ errread, errwrite) File "/usr/lib64/python2.7/subprocess.py", line 1224, in _execute_child self.pid = os.fork() OSError: [Errno 12] Cannot allocate memory And for few I encountered this error : libsepol.context_from_record: type celery_t is not defined libsepol.context_from_record: could not create context structure libsepol.context_from_string: could not create context structure libsepol.sepol_context_to_sid: could not convert system_u:system_r:celery_t:s0 to sid Here 'celery_t' changes with respect to target context. System Condition : [root@selinux-policy-creation abhisheklog]# free -h total used free shared buff/cache available Mem: 31G 261M 27G 8.4M 3.1G 30GB Swap: 0B 0B 0B Please provide with Cause and Solution. Thanks.
|
memory, operating-system, redhat, policy, selinux
| 2
| 2,435
| 1
|
https://stackoverflow.com/questions/56863473/selinux-policy-creation-error-regarding-memory-and-context-structure
|
56,115,357
|
How can I make Python3.6, Red Hat Software Collection, persist after a reboot/logout/login?
|
I am trying to enable rh-python36 software collection after reboot So I can avoid calling "scl enable" all the time. After unzipping and installing the package: yum install -y tmp/rpms/* I created a new file "python36.sh" under /etc/profile.d with the following script: #!/bin/bash source /opt/rh/rh-python36/enable export X_SCLS="scl enable rh-python36 'echo $X_SCLS'" After restarting or rebooting the instance, I am getting : No such file or directoryenable I am using CentOS release 6.10 (Final)
|
How can I make Python3.6, Red Hat Software Collection, persist after a reboot/logout/login? I am trying to enable rh-python36 software collection after reboot So I can avoid calling "scl enable" all the time. After unzipping and installing the package: yum install -y tmp/rpms/* I created a new file "python36.sh" under /etc/profile.d with the following script: #!/bin/bash source /opt/rh/rh-python36/enable export X_SCLS="scl enable rh-python36 'echo $X_SCLS'" After restarting or rebooting the instance, I am getting : No such file or directoryenable I am using CentOS release 6.10 (Final)
|
python, linux, centos, redhat, software-collections
| 2
| 4,959
| 3
|
https://stackoverflow.com/questions/56115357/how-can-i-make-python3-6-red-hat-software-collection-persist-after-a-reboot-lo
|
55,024,627
|
How to move files where the first line contains a string?
|
I am currently using the following command: grep -l -Z -E '.*?FindMyRegex' /home/user/folder/*.csv | xargs -0 -I{} mv {} /home/destination/folder This works fine. The problem is it uses grep on the entire file. I would like to use the grep command on the FIRST line of the file only. I have tried to use head -1 file | at the beginning, but it did not work.
|
How to move files where the first line contains a string? I am currently using the following command: grep -l -Z -E '.*?FindMyRegex' /home/user/folder/*.csv | xargs -0 -I{} mv {} /home/destination/folder This works fine. The problem is it uses grep on the entire file. I would like to use the grep command on the FIRST line of the file only. I have tried to use head -1 file | at the beginning, but it did not work.
|
linux, bash, centos, redhat
| 2
| 915
| 6
|
https://stackoverflow.com/questions/55024627/how-to-move-files-where-the-first-line-contains-a-string
|
51,845,152
|
Enable redhats devtoolset in fish shell
|
Is there an appropriate way to enable devtoolset or any of the rh tools in the fish shell on startup? Normally in Zsh (~/.zshrc) or Bash (~/.bashrc) you would add lines similar to: source /opt/rh/devtoolset-7/enable or source scl_source enable devtoolset-7 Unfortunately neither of those work in the ~/.config/fish/config.fish since the syntax isn't supported by fish. The only way I know how to do it is manually add all the lines in the enable file to my fish paths.
|
Enable redhats devtoolset in fish shell Is there an appropriate way to enable devtoolset or any of the rh tools in the fish shell on startup? Normally in Zsh (~/.zshrc) or Bash (~/.bashrc) you would add lines similar to: source /opt/rh/devtoolset-7/enable or source scl_source enable devtoolset-7 Unfortunately neither of those work in the ~/.config/fish/config.fish since the syntax isn't supported by fish. The only way I know how to do it is manually add all the lines in the enable file to my fish paths.
|
linux, shell, redhat, fish, devtoolset
| 2
| 749
| 2
|
https://stackoverflow.com/questions/51845152/enable-redhats-devtoolset-in-fish-shell
|
51,105,209
|
NullPointerExceptions when using multiple concurrent KieSessions
|
We are facing NullPointerExceptions while StatelessKieSession while it internally dispose in concurrent execution environment. java.lang.NullPointerException at org.drools.core.impl.StatelessKnowledgeSessionImpl.dispose(StatelessKnowledgeSessionImpl.java:395) at org.drools.core.impl.StatelessKnowledgeSessionImpl.execute(StatelessKnowledgeSessionImpl.java:355) Sample code is public class ThreadExecutor { public static void main(String[] args){ ExecutorService submitAsyncPool = Executors.newCachedThreadPool(); Callable<Boolean> processor = new WorkerThread(); for (int i = 0; i < 50; i++) { submitAsyncPool.submit(processor); } } } public class WorkerThread implements Callable<Boolean> { @Autowired StatelessKieSession kieSession; @Override public Boolean call() { // some code snippet kieSession.execute(input); // some code snippet } } This is only happening in concurrent execution of rules. StatelessKieSessio n is shared across multiple threads and executed concurrently. Other option is to create StatelessKieSession every time which I think very expensive operation. Looks like this is defect in rules engine? Is there any workaround? Note: We are using Drools 6.x
|
NullPointerExceptions when using multiple concurrent KieSessions We are facing NullPointerExceptions while StatelessKieSession while it internally dispose in concurrent execution environment. java.lang.NullPointerException at org.drools.core.impl.StatelessKnowledgeSessionImpl.dispose(StatelessKnowledgeSessionImpl.java:395) at org.drools.core.impl.StatelessKnowledgeSessionImpl.execute(StatelessKnowledgeSessionImpl.java:355) Sample code is public class ThreadExecutor { public static void main(String[] args){ ExecutorService submitAsyncPool = Executors.newCachedThreadPool(); Callable<Boolean> processor = new WorkerThread(); for (int i = 0; i < 50; i++) { submitAsyncPool.submit(processor); } } } public class WorkerThread implements Callable<Boolean> { @Autowired StatelessKieSession kieSession; @Override public Boolean call() { // some code snippet kieSession.execute(input); // some code snippet } } This is only happening in concurrent execution of rules. StatelessKieSessio n is shared across multiple threads and executed concurrently. Other option is to create StatelessKieSession every time which I think very expensive operation. Looks like this is defect in rules engine? Is there any workaround? Note: We are using Drools 6.x
|
java, jboss, drools, redhat, jbpm
| 2
| 480
| 1
|
https://stackoverflow.com/questions/51105209/nullpointerexceptions-when-using-multiple-concurrent-kiesessions
|
51,076,789
|
Installing "en_US" in RHEL container
|
I'm testing an ansible role using molecule . The role install a corporate binary over which I've no insight, I'm just mean to ./binary --silent and that's it. Over RedHat. It work for a RedHat 6.9 VM. But it doesn't work over the docker container registry.access.redhat.com/rhel6:6.9. The error message says: "Operating system bad language (en_US not found)". What could be missing from the container that would be on the VM? Some localedef ...? I wasn't able to find a doc about this, but is there some RedHat description about the delta between their "minimal install from ISO" VMs and containers? Thanks for any help
|
Installing "en_US" in RHEL container I'm testing an ansible role using molecule . The role install a corporate binary over which I've no insight, I'm just mean to ./binary --silent and that's it. Over RedHat. It work for a RedHat 6.9 VM. But it doesn't work over the docker container registry.access.redhat.com/rhel6:6.9. The error message says: "Operating system bad language (en_US not found)". What could be missing from the container that would be on the VM? Some localedef ...? I wasn't able to find a doc about this, but is there some RedHat description about the delta between their "minimal install from ISO" VMs and containers? Thanks for any help
|
docker, redhat, redhat-containers
| 2
| 452
| 1
|
https://stackoverflow.com/questions/51076789/installing-en-us-in-rhel-container
|
50,375,782
|
npm run build redhat openshift deploy
|
I have a project built with node.js and react. Every time I build and and deploy, or every time the pod resets, I need to go in to the pod terminal and run 'npm run build' ("build": "react-scripts build"). Is there a way to automate this? (Maybe in my package.json scripts if redhat has specific scripts similar to "heroku-postbuild" or somewhere on the Openshift website?)
|
npm run build redhat openshift deploy I have a project built with node.js and react. Every time I build and and deploy, or every time the pod resets, I need to go in to the pod terminal and run 'npm run build' ("build": "react-scripts build"). Is there a way to automate this? (Maybe in my package.json scripts if redhat has specific scripts similar to "heroku-postbuild" or somewhere on the Openshift website?)
|
node.js, reactjs, openshift, redhat
| 2
| 1,274
| 1
|
https://stackoverflow.com/questions/50375782/npm-run-build-redhat-openshift-deploy
|
49,032,673
|
Apache 2.2/Redhat 2.6 with mod_wsgi
|
I'm having trouble configuring mod_wsgi with my current set up. Redhat 2.6.32 Installations setup as non-root user: Apache 2.2 (attempted to get 2.4, but without access to yum the dependencies were too much) Python 3.6 I seem to have successfully installed mod_wsgi into /apache/modules. Problems: The apache directory structure is not what most tutorials indicate, its DocumentRoot is in /apache/htdocs, not /var/www/ or /sites-enabled/ or /sites/available/ I tried putting: LoadModule wsgi_module modules/mod_wsgi.so in httpd.conf but I am returned: $HOME/apache/modules/mod_wsgi.so into server: libpython3.6m.so.1.0: cannot open shared object file: No such file or directory Can anyone explain how I can use mod_wsgi with my current setup?
|
Apache 2.2/Redhat 2.6 with mod_wsgi I'm having trouble configuring mod_wsgi with my current set up. Redhat 2.6.32 Installations setup as non-root user: Apache 2.2 (attempted to get 2.4, but without access to yum the dependencies were too much) Python 3.6 I seem to have successfully installed mod_wsgi into /apache/modules. Problems: The apache directory structure is not what most tutorials indicate, its DocumentRoot is in /apache/htdocs, not /var/www/ or /sites-enabled/ or /sites/available/ I tried putting: LoadModule wsgi_module modules/mod_wsgi.so in httpd.conf but I am returned: $HOME/apache/modules/mod_wsgi.so into server: libpython3.6m.so.1.0: cannot open shared object file: No such file or directory Can anyone explain how I can use mod_wsgi with my current setup?
|
mod-wsgi, redhat, python-3.6, apache2.2
| 2
| 694
| 2
|
https://stackoverflow.com/questions/49032673/apache-2-2-redhat-2-6-with-mod-wsgi
|
48,653,475
|
can't find /var/log/cloud-init-output.log on redhat ec2 instance
|
I usually work on amazon linux ec2 instance and i check /var/log/cloud-init-output.log to see if my cloudformation user data script is working or not. I can't find cloud-init-output.log on redhat ec2 instance and i am not sure where to check the logs and how to make sure that my user data script is working properly.
|
can't find /var/log/cloud-init-output.log on redhat ec2 instance I usually work on amazon linux ec2 instance and i check /var/log/cloud-init-output.log to see if my cloudformation user data script is working or not. I can't find cloud-init-output.log on redhat ec2 instance and i am not sure where to check the logs and how to make sure that my user data script is working properly.
|
amazon-web-services, redhat, aws-cloudformation, cloud-init
| 2
| 2,548
| 1
|
https://stackoverflow.com/questions/48653475/cant-find-var-log-cloud-init-output-log-on-redhat-ec2-instance
|
46,915,444
|
Can SonarQube on Redhat Enterprise Edition use Windows integrated security with a SQLServer database?
|
SonarQube documentation [URL] implies that a Nix server would not be able to use a SQLServer database since it specifies use of a JDBC security DLL. Is it possible to use Windows Integrated security for SQlServer on Windows Server with a Redhat Enterprise Linux application server running the SonarQube app server that is connected to the domain AD with SSSD (System Security Services Daemon)? Another way to put this - does SonarQube Server running on Redhat Enterprise Linux support integrated security with Kerberos for a SQL Server database?
|
Can SonarQube on Redhat Enterprise Edition use Windows integrated security with a SQLServer database? SonarQube documentation [URL] implies that a Nix server would not be able to use a SQLServer database since it specifies use of a JDBC security DLL. Is it possible to use Windows Integrated security for SQlServer on Windows Server with a Redhat Enterprise Linux application server running the SonarQube app server that is connected to the domain AD with SSSD (System Security Services Daemon)? Another way to put this - does SonarQube Server running on Redhat Enterprise Linux support integrated security with Kerberos for a SQL Server database?
|
sql-server, sonarqube, redhat
| 2
| 318
| 2
|
https://stackoverflow.com/questions/46915444/can-sonarqube-on-redhat-enterprise-edition-use-windows-integrated-security-with
|
46,898,854
|
openshift v3 online pro volume and memory limit issues
|
I am trying to run an sonatype/nexus3 on openshift online v3 pro. If I just use the web console to create a new app from image it assigns it only 512Mi and it dies with OOM. It did get created though and logged a lot of java output before it died of out of memory. When using the web console there doesnt appear a way to set the memory on the image. When I try to edited the yaml of the pod it doesn't let me edited the memory limit. Reading the docs about memory limits it suggests that I can run with this: oc run nexus333 --image=sonatype/nexus3 --limits=memory=750Mi Then it doesn't even start. It dies with: {kubelet ip-172-31-59-148.ec2.internal} Error: Error response from daemon: {"message":"create c30deb38b3c26252bf1218cc898fbf1c68d8fc14e840076710c211d58ed87a59: mkdir /var/lib/docker/volumes/c30deb38b3c26252bf1218cc898fbf1c68d8fc14e840076710c211d58ed87a59: permission denied"} More information from oc get events : FIRSTSEEN LASTSEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE 16m 16m 1 nexus333-1-deploy Pod Normal Scheduled {default-scheduler } Successfully assigned nexus333-1-deploy to ip-172-31-50-97.ec2.internal 16m 16m 1 nexus333-1-deploy Pod spec.containers{deployment} Normal Pulling {kubelet ip-172-31-50-97.ec2.internal} pulling image "registry.reg-aws.openshift.com:443/openshift3/ose-deployer:v3.6.173.0.21" 16m 16m 1 nexus333-1-deploy Pod spec.containers{deployment} Normal Pulled {kubelet ip-172-31-50-97.ec2.internal} Successfully pulled image "registry.reg-aws.openshift.com:443/openshift3/ose-deployer:v3.6.173.0.21" 15m 15m 1 nexus333-1-deploy Pod spec.containers{deployment} Normal Created {kubelet ip-172-31-50-97.ec2.internal} Created container 15m 15m 1 nexus333-1-deploy Pod spec.containers{deployment} Normal Started {kubelet ip-172-31-50-97.ec2.internal} Started container 15m 15m 1 nexus333-1-rftvd Pod Normal Scheduled {default-scheduler } Successfully assigned nexus333-1-rftvd to ip-172-31-59-148.ec2.internal 15m 14m 7 nexus333-1-rftvd Pod spec.containers{nexus333} Normal Pulling {kubelet ip-172-31-59-148.ec2.internal} pulling image "sonatype/nexus3" 15m 10m 19 nexus333-1-rftvd Pod spec.containers{nexus333} Normal Pulled {kubelet ip-172-31-59-148.ec2.internal} Successfully pulled image "sonatype/nexus3" 15m 15m 1 nexus333-1-rftvd Pod spec.containers{nexus333} Warning Failed {kubelet ip-172-31-59-148.ec2.internal} Error: Error response from daemon: {"message":"create 3aa35201bdf81d09ef4b09bba1fc843b97d0339acfef0c30cecaa1fbb6207321: mkdir /var/lib/docker/volumes/3aa35201bdf81d09ef4b09bba1fc843b97d0339acfef0c30cecaa1fbb6207321: permission denied"} I am not sure why if I use the web console I cannot assign more memory. I am not sure why running it with oc run dies with the mkdir error. Can anyone tell me how to run sonatype/nexus3 on openshift online pro?
|
openshift v3 online pro volume and memory limit issues I am trying to run an sonatype/nexus3 on openshift online v3 pro. If I just use the web console to create a new app from image it assigns it only 512Mi and it dies with OOM. It did get created though and logged a lot of java output before it died of out of memory. When using the web console there doesnt appear a way to set the memory on the image. When I try to edited the yaml of the pod it doesn't let me edited the memory limit. Reading the docs about memory limits it suggests that I can run with this: oc run nexus333 --image=sonatype/nexus3 --limits=memory=750Mi Then it doesn't even start. It dies with: {kubelet ip-172-31-59-148.ec2.internal} Error: Error response from daemon: {"message":"create c30deb38b3c26252bf1218cc898fbf1c68d8fc14e840076710c211d58ed87a59: mkdir /var/lib/docker/volumes/c30deb38b3c26252bf1218cc898fbf1c68d8fc14e840076710c211d58ed87a59: permission denied"} More information from oc get events : FIRSTSEEN LASTSEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE 16m 16m 1 nexus333-1-deploy Pod Normal Scheduled {default-scheduler } Successfully assigned nexus333-1-deploy to ip-172-31-50-97.ec2.internal 16m 16m 1 nexus333-1-deploy Pod spec.containers{deployment} Normal Pulling {kubelet ip-172-31-50-97.ec2.internal} pulling image "registry.reg-aws.openshift.com:443/openshift3/ose-deployer:v3.6.173.0.21" 16m 16m 1 nexus333-1-deploy Pod spec.containers{deployment} Normal Pulled {kubelet ip-172-31-50-97.ec2.internal} Successfully pulled image "registry.reg-aws.openshift.com:443/openshift3/ose-deployer:v3.6.173.0.21" 15m 15m 1 nexus333-1-deploy Pod spec.containers{deployment} Normal Created {kubelet ip-172-31-50-97.ec2.internal} Created container 15m 15m 1 nexus333-1-deploy Pod spec.containers{deployment} Normal Started {kubelet ip-172-31-50-97.ec2.internal} Started container 15m 15m 1 nexus333-1-rftvd Pod Normal Scheduled {default-scheduler } Successfully assigned nexus333-1-rftvd to ip-172-31-59-148.ec2.internal 15m 14m 7 nexus333-1-rftvd Pod spec.containers{nexus333} Normal Pulling {kubelet ip-172-31-59-148.ec2.internal} pulling image "sonatype/nexus3" 15m 10m 19 nexus333-1-rftvd Pod spec.containers{nexus333} Normal Pulled {kubelet ip-172-31-59-148.ec2.internal} Successfully pulled image "sonatype/nexus3" 15m 15m 1 nexus333-1-rftvd Pod spec.containers{nexus333} Warning Failed {kubelet ip-172-31-59-148.ec2.internal} Error: Error response from daemon: {"message":"create 3aa35201bdf81d09ef4b09bba1fc843b97d0339acfef0c30cecaa1fbb6207321: mkdir /var/lib/docker/volumes/3aa35201bdf81d09ef4b09bba1fc843b97d0339acfef0c30cecaa1fbb6207321: permission denied"} I am not sure why if I use the web console I cannot assign more memory. I am not sure why running it with oc run dies with the mkdir error. Can anyone tell me how to run sonatype/nexus3 on openshift online pro?
|
openshift, redhat, nexus, sonatype, openshift-online
| 2
| 477
| 3
|
https://stackoverflow.com/questions/46898854/openshift-v3-online-pro-volume-and-memory-limit-issues
|
46,442,839
|
Installation of OCI8 : how to correct "Use of undefined constant OCI_COMMIT_ON_SUCCESS" error?
|
I'm trying to install OCI8 on a RedHat Server (RHEL7) for my Apache Server. At this moment, when I try to connect to my server with Symphony, I get this error: Exception "ErrorException" : Use of undefined constant OCI_COMMIT_ON_SUCCESS - assumed 'OCI_COMMIT_ON_SUCCESS' Here is what I did to install OCI8. Installation of oracle-instantclient11.2 RPMs (devel and basic). Installation of the OCI8 package : For information, I already have an Oracle 12C on my server but I want to connect my PHP application to another server (Oracle 11GR2). tar zxvf oci8-2.1.7.tgz cd oci8-2.1.7 phpize ./configure --with-oci8=shared,instantclient,/usr/lib/oracle/11.2/client64/lib --with-php-config=/opt/rh/rh-php56/root/usr/bin/php-config make make install Edit php.ini in order to add extension=oci8.so . I found this thread and I tried to used oci_connect and I get this error: Fatal error: Call to undefined function oci_connect() How can I correct this problem? EDIT: I just found this error in php_error.log : [26-Sep-2017 16:14:12 Europe/Paris] PHP Warning: PHP Startup: Unable to load dynamic library '/opt/rh/rh-php56/root/usr/lib64/php/modules/oci8.so' - /opt/rh/rh-php56/root/usr/lib64/php/modules/oci8.so: undefined symbol: _emalloc_128 in Unknown on line 0
|
Installation of OCI8 : how to correct "Use of undefined constant OCI_COMMIT_ON_SUCCESS" error? I'm trying to install OCI8 on a RedHat Server (RHEL7) for my Apache Server. At this moment, when I try to connect to my server with Symphony, I get this error: Exception "ErrorException" : Use of undefined constant OCI_COMMIT_ON_SUCCESS - assumed 'OCI_COMMIT_ON_SUCCESS' Here is what I did to install OCI8. Installation of oracle-instantclient11.2 RPMs (devel and basic). Installation of the OCI8 package : For information, I already have an Oracle 12C on my server but I want to connect my PHP application to another server (Oracle 11GR2). tar zxvf oci8-2.1.7.tgz cd oci8-2.1.7 phpize ./configure --with-oci8=shared,instantclient,/usr/lib/oracle/11.2/client64/lib --with-php-config=/opt/rh/rh-php56/root/usr/bin/php-config make make install Edit php.ini in order to add extension=oci8.so . I found this thread and I tried to used oci_connect and I get this error: Fatal error: Call to undefined function oci_connect() How can I correct this problem? EDIT: I just found this error in php_error.log : [26-Sep-2017 16:14:12 Europe/Paris] PHP Warning: PHP Startup: Unable to load dynamic library '/opt/rh/rh-php56/root/usr/lib64/php/modules/oci8.so' - /opt/rh/rh-php56/root/usr/lib64/php/modules/oci8.so: undefined symbol: _emalloc_128 in Unknown on line 0
|
php, oracle-database, redhat, oci8
| 2
| 5,982
| 1
|
https://stackoverflow.com/questions/46442839/installation-of-oci8-how-to-correct-use-of-undefined-constant-oci-commit-on-s
|
45,943,999
|
Initialize postgres with data-checksums
|
I'm trying to enable PostgreSQL 9.6 og Redhat 7 with data-checksums enabled. According to the doc you can run initdb either with the -k flag or --data-checksums. But when I try to run /usr/pgsql-9.6/bin/postgresql96-setup initdb --data-checksums it does not work. Any ideas about how to achieve this?
|
Initialize postgres with data-checksums I'm trying to enable PostgreSQL 9.6 og Redhat 7 with data-checksums enabled. According to the doc you can run initdb either with the -k flag or --data-checksums. But when I try to run /usr/pgsql-9.6/bin/postgresql96-setup initdb --data-checksums it does not work. Any ideas about how to achieve this?
|
postgresql, redhat
| 2
| 2,862
| 3
|
https://stackoverflow.com/questions/45943999/initialize-postgres-with-data-checksums
|
45,456,681
|
Openshift Online 3 Pro - Collaboration Option
|
I just purchased on OpenShift Online 3 pro account yesterday. I want to share Web Console with another member of my team but it doesn't seem to be possible: all my teamates get a "You do not have access to Openshift Online" error message when trying to reach Web Console URL. FYI: I have granted them with the "admin" role in the Resources -> Membership page (try to use both their email address and pseudo) All of them use an OpenShift Online 3 Starter account I fear that they have to purchase an OpenShift Online v3 Pro account to be able to proceed. Am i right? If no, can you explain me how can I allow them to use the Web Console? Thank you.
|
Openshift Online 3 Pro - Collaboration Option I just purchased on OpenShift Online 3 pro account yesterday. I want to share Web Console with another member of my team but it doesn't seem to be possible: all my teamates get a "You do not have access to Openshift Online" error message when trying to reach Web Console URL. FYI: I have granted them with the "admin" role in the Resources -> Membership page (try to use both their email address and pseudo) All of them use an OpenShift Online 3 Starter account I fear that they have to purchase an OpenShift Online v3 Pro account to be able to proceed. Am i right? If no, can you explain me how can I allow them to use the Web Console? Thank you.
|
openshift, redhat
| 2
| 74
| 1
|
https://stackoverflow.com/questions/45456681/openshift-online-3-pro-collaboration-option
|
45,332,399
|
Connect Orion Context Broker running in RedHat to Raspberry Pi
|
I'm using a virtual machine with a RedHat operating system in which I implemented the Orion Context Broker from Fiware and trying to connect to a Raspberry-Pi in order to collect data from a microphone. Can anybody help?
|
Connect Orion Context Broker running in RedHat to Raspberry Pi I'm using a virtual machine with a RedHat operating system in which I implemented the Orion Context Broker from Fiware and trying to connect to a Raspberry-Pi in order to collect data from a microphone. Can anybody help?
|
audio, redhat, raspberry-pi3, fiware, fiware-orion
| 2
| 107
| 1
|
https://stackoverflow.com/questions/45332399/connect-orion-context-broker-running-in-redhat-to-raspberry-pi
|
43,842,405
|
Is there a way to stop beeline from generating CR character?
|
When I redirect the output of beeline to a file, I can see that the file generated has ^M (CR, carriage return, 0x0D hex) character inside which is placed at around column 144, presumably as a way to wrap around text output. Is there a way to turn this off in beeline? Or maybe inform beeline of a different column width. I have: Beeline version 1.2.1000.2.5.0.0-1245 by Apache Hive
|
Is there a way to stop beeline from generating CR character? When I redirect the output of beeline to a file, I can see that the file generated has ^M (CR, carriage return, 0x0D hex) character inside which is placed at around column 144, presumably as a way to wrap around text output. Is there a way to turn this off in beeline? Or maybe inform beeline of a different column width. I have: Beeline version 1.2.1000.2.5.0.0-1245 by Apache Hive
|
hive, redhat, hiveql, beeline, bigdata
| 2
| 772
| 2
|
https://stackoverflow.com/questions/43842405/is-there-a-way-to-stop-beeline-from-generating-cr-character
|
42,260,195
|
Pre-built Erlang/OTP for RHEL
|
I need to deploy Phoenix/Elixir app onto a Redhat 7 server, which needs Erlang OTP installed. on the Erlang site, I don't see pre-built binary package for Redhat Linux. Can I use the CentOS version for RHEL?
|
Pre-built Erlang/OTP for RHEL I need to deploy Phoenix/Elixir app onto a Redhat 7 server, which needs Erlang OTP installed. on the Erlang site, I don't see pre-built binary package for Redhat Linux. Can I use the CentOS version for RHEL?
|
erlang, elixir, redhat, phoenix-framework
| 2
| 760
| 3
|
https://stackoverflow.com/questions/42260195/pre-built-erlang-otp-for-rhel
|
39,103,853
|
Bash script to find filesystem usage
|
EDIT: Working script below I have used this site MANY times to get answers, but I am a little stumped with this. I am tasked with writing a script, in bash, to log into roughly 2000 Unix servers (Solaris, AIX, Linux) and check the size of OS filesystems, most notable /var /usr /opt. I have set some variables, which may be where I am going wrong right off the bat. 1.) First I am connecting to another server that has a list of all hosts in the infrastructure. Then I parse this data with some sed commands to get a list I can use properly 1.) Then I do a ping test, to see if the server is alive. If the server is decom. The idea behind this, is if the server is not pingable, I don't want it being reported on, or any attempt to be made to connect to it, as it is just wasting time. I feel I am doing this wrong, but don't know how to do it corectly (a re-occurring theme you will here in this post lol) If any FS is over 80% mark, then it should output to a text file with the servername, filesystem, size on one line <== very important for me If the FS is under 80% full, then I don't want it in my output, it can me omitted completely. I have created something that I will post below, and am hoping to get some help in figuring out where I am going wrong. I am very new to bash scripting, but have experience as a Unix admin (i have never been good at scripting). Can anyone provide some direction and teach me where I am going wrong? I will upload my script that i can confirm is working hopefully tomorrow. thanks everyone for your input in this!
|
Bash script to find filesystem usage EDIT: Working script below I have used this site MANY times to get answers, but I am a little stumped with this. I am tasked with writing a script, in bash, to log into roughly 2000 Unix servers (Solaris, AIX, Linux) and check the size of OS filesystems, most notable /var /usr /opt. I have set some variables, which may be where I am going wrong right off the bat. 1.) First I am connecting to another server that has a list of all hosts in the infrastructure. Then I parse this data with some sed commands to get a list I can use properly 1.) Then I do a ping test, to see if the server is alive. If the server is decom. The idea behind this, is if the server is not pingable, I don't want it being reported on, or any attempt to be made to connect to it, as it is just wasting time. I feel I am doing this wrong, but don't know how to do it corectly (a re-occurring theme you will here in this post lol) If any FS is over 80% mark, then it should output to a text file with the servername, filesystem, size on one line <== very important for me If the FS is under 80% full, then I don't want it in my output, it can me omitted completely. I have created something that I will post below, and am hoping to get some help in figuring out where I am going wrong. I am very new to bash scripting, but have experience as a Unix admin (i have never been good at scripting). Can anyone provide some direction and teach me where I am going wrong? I will upload my script that i can confirm is working hopefully tomorrow. thanks everyone for your input in this!
|
linux, bash, shell, solaris, redhat
| 2
| 4,330
| 3
|
https://stackoverflow.com/questions/39103853/bash-script-to-find-filesystem-usage
|
37,918,466
|
pyodbc not working on RedHat 5.4. Trying to connect to ms-sql database server using unixODBC and FreeTDS?
|
I am facing issue while trying to access ms-sql database using pyobdc. Here is the System config: Python 2.7.11 Pyodbc 3.0.7 RedHat 5.4 (Tikanga) 32 Bit system Microsoft SQL Server 2012 (Database server) unixODBC 2.3.0 $ tsql -C output : Compile-time settings (established with the "configure" script) Version: freetds v0.91 freetds.conf directory: /etc MS db-lib source compatibility: yes Sybase binary compatibility: no Thread safety: yes iconv library: yes TDS version: 5.0 iODBC: no unixodbc: yes SSPI "trusted" logins: no Kerberos: no $ odbcinst -j output : unixODBC 2.3.0 DRIVERS............: /usr/local/etc/odbcinst.ini SYSTEM DATA SOURCES: /usr/local/etc/odbc.ini FILE DATA SOURCES..: /usr/local/etc/ODBCDataSources USER DATA SOURCES..: /root/.odbc.ini SQLULEN Size.......: 4 SQLLEN Size........: 4 SQLSETPOSIROW Size.: 2 $ cat /usr/local/etc/odbcinst.ini output : [ms-sql] Description=TDS connection Driver=/usr/local/lib/libtdsodbc.so Setup=/usr/local/lib/libtdsodbc.so FileUsage=1 UsageCount=1 $ cat /usr/local/etc/odbc.ini output : [sqlserverdatasource] Driver = ms-sql Description = ODBC connection via ms-sql Trace = No Server = >IP Addresss To Database server< Port = >Port Number< Database = >Database name< $ cat /etc/freetds.conf output : [sql-server] host = >IP Addresss To Database server< port = >Port Number< tds version = 8.0 Command which is giving me error: connection = pyodbc.connect(r'DRIVER={FreeTDS};SERVER=>IP Addresss To Database server<; PORT=>Port Number<;DATABASE=Database name;UID=Database UID;PWD=DatabasePasswd;') Error: Traceback (most recent call last): File "<stdin>", line 1, in <module> pyodbc.Error: ('IM002', '[IM002] [unixODBC][Driver Manager]Data source name not found, and no default driver specified (0) (SQLDriverConnect)') I am trying to solve this problem for last 3 days. But no luck yet. So any help/suggestion would be very helpful. I have already gone through googling. Thanks in advance :)
|
pyodbc not working on RedHat 5.4. Trying to connect to ms-sql database server using unixODBC and FreeTDS? I am facing issue while trying to access ms-sql database using pyobdc. Here is the System config: Python 2.7.11 Pyodbc 3.0.7 RedHat 5.4 (Tikanga) 32 Bit system Microsoft SQL Server 2012 (Database server) unixODBC 2.3.0 $ tsql -C output : Compile-time settings (established with the "configure" script) Version: freetds v0.91 freetds.conf directory: /etc MS db-lib source compatibility: yes Sybase binary compatibility: no Thread safety: yes iconv library: yes TDS version: 5.0 iODBC: no unixodbc: yes SSPI "trusted" logins: no Kerberos: no $ odbcinst -j output : unixODBC 2.3.0 DRIVERS............: /usr/local/etc/odbcinst.ini SYSTEM DATA SOURCES: /usr/local/etc/odbc.ini FILE DATA SOURCES..: /usr/local/etc/ODBCDataSources USER DATA SOURCES..: /root/.odbc.ini SQLULEN Size.......: 4 SQLLEN Size........: 4 SQLSETPOSIROW Size.: 2 $ cat /usr/local/etc/odbcinst.ini output : [ms-sql] Description=TDS connection Driver=/usr/local/lib/libtdsodbc.so Setup=/usr/local/lib/libtdsodbc.so FileUsage=1 UsageCount=1 $ cat /usr/local/etc/odbc.ini output : [sqlserverdatasource] Driver = ms-sql Description = ODBC connection via ms-sql Trace = No Server = >IP Addresss To Database server< Port = >Port Number< Database = >Database name< $ cat /etc/freetds.conf output : [sql-server] host = >IP Addresss To Database server< port = >Port Number< tds version = 8.0 Command which is giving me error: connection = pyodbc.connect(r'DRIVER={FreeTDS};SERVER=>IP Addresss To Database server<; PORT=>Port Number<;DATABASE=Database name;UID=Database UID;PWD=DatabasePasswd;') Error: Traceback (most recent call last): File "<stdin>", line 1, in <module> pyodbc.Error: ('IM002', '[IM002] [unixODBC][Driver Manager]Data source name not found, and no default driver specified (0) (SQLDriverConnect)') I am trying to solve this problem for last 3 days. But no luck yet. So any help/suggestion would be very helpful. I have already gone through googling. Thanks in advance :)
|
sql-server, redhat, pyodbc, freetds, unixodbc
| 2
| 1,287
| 1
|
https://stackoverflow.com/questions/37918466/pyodbc-not-working-on-redhat-5-4-trying-to-connect-to-ms-sql-database-server-us
|
37,123,286
|
Why won't rpm/yum pick up the required packages when I list it specifically?
|
I'm having a problem where using rpm and yum won't pick up the packages required for an update. I'm performing an upgrade of main-package from 16.1 to 16.2. If I do yum upgrade , I get this: # yum upgrade ... ====================================================================================================== Package Arch Version Repository Size ====================================================================================================== Updating: sub-package x86_64 1.1-455015.el7 privaterepo 29 k main-package noarch 16.2-460032.el7 privaterepo 1.9 M ... If I run yum upgrade main-package I get this: # yum upgrade main-package ====================================================================================================== Package Arch Version Repository Size ====================================================================================================== Updating: main-package noarch 16.2-460032.el7 privaterepo 1.9 M Transaction Summary ====================================================================================================== It doesn't seem to think I need the new sub-package , even though the RPM suggests it does: # rpm -q --requires -p main-package-16.2-460032.el7.noarch.rpm | grep -i sub-package sub-package >= 1.1 # rpm -qa | grep sub-package sub-package-1.0-455013.el7.x86_64 Based on what I see, when I yum upgrade main-package , it should see that it needs sub-package >= 1.1 and get it as well. I should add that the install works fine. It's as if rpm and yum are completely ignoring the requirement that main-package needs version 1.1 of sub-package . EDIT: Here is what rpm shows about dependencies: # rpm -q --provides -p sub-package-1.1-455015.el7.x86_64.rpm sub-package sub-package = 1.1-455015.el7 sub-package(x86-64) = 1.1-455015.el7 # rpm -q --requires -p main-package-16.2-460032.el7.noarch.rpm | grep sub-package sub-package >= 1.1 Here is the older sub-package that's already installed: # rpm -q --provides sub-package sub-package sub-package = 1.0-455013.el7 sub-package(x86-64) = 1.0-455013.el7 Here is the relevant information in my spec file: $ grep sub-package main-package.spec Requires: sub-package >= 1.1 $ head -n4 sub-package.spec Summary: sub-package (...) Name: sub-package Version: 1.1 Release: %{BUILD_NUMBER}%{?dist} EDIT 2: I've been doing some more digging, One thing I noticed is that sub-package is listed twice if I rpm -q --whatprovides sub-package where the other dependencies that it picks up fine are only listed once.
|
Why won't rpm/yum pick up the required packages when I list it specifically? I'm having a problem where using rpm and yum won't pick up the packages required for an update. I'm performing an upgrade of main-package from 16.1 to 16.2. If I do yum upgrade , I get this: # yum upgrade ... ====================================================================================================== Package Arch Version Repository Size ====================================================================================================== Updating: sub-package x86_64 1.1-455015.el7 privaterepo 29 k main-package noarch 16.2-460032.el7 privaterepo 1.9 M ... If I run yum upgrade main-package I get this: # yum upgrade main-package ====================================================================================================== Package Arch Version Repository Size ====================================================================================================== Updating: main-package noarch 16.2-460032.el7 privaterepo 1.9 M Transaction Summary ====================================================================================================== It doesn't seem to think I need the new sub-package , even though the RPM suggests it does: # rpm -q --requires -p main-package-16.2-460032.el7.noarch.rpm | grep -i sub-package sub-package >= 1.1 # rpm -qa | grep sub-package sub-package-1.0-455013.el7.x86_64 Based on what I see, when I yum upgrade main-package , it should see that it needs sub-package >= 1.1 and get it as well. I should add that the install works fine. It's as if rpm and yum are completely ignoring the requirement that main-package needs version 1.1 of sub-package . EDIT: Here is what rpm shows about dependencies: # rpm -q --provides -p sub-package-1.1-455015.el7.x86_64.rpm sub-package sub-package = 1.1-455015.el7 sub-package(x86-64) = 1.1-455015.el7 # rpm -q --requires -p main-package-16.2-460032.el7.noarch.rpm | grep sub-package sub-package >= 1.1 Here is the older sub-package that's already installed: # rpm -q --provides sub-package sub-package sub-package = 1.0-455013.el7 sub-package(x86-64) = 1.0-455013.el7 Here is the relevant information in my spec file: $ grep sub-package main-package.spec Requires: sub-package >= 1.1 $ head -n4 sub-package.spec Summary: sub-package (...) Name: sub-package Version: 1.1 Release: %{BUILD_NUMBER}%{?dist} EDIT 2: I've been doing some more digging, One thing I noticed is that sub-package is listed twice if I rpm -q --whatprovides sub-package where the other dependencies that it picks up fine are only listed once.
|
redhat, rpm, yum
| 2
| 675
| 1
|
https://stackoverflow.com/questions/37123286/why-wont-rpm-yum-pick-up-the-required-packages-when-i-list-it-specifically
|
36,962,836
|
Install and Configure Ansible on AWS EC2 Redhat Instance
|
I have just started learning Ansible configuration management tool and I was going through Linux Academy tutorials to run implement ansible commands, everything was good and easy with the linux-academy servers but when I tried to replicate the same in AWS EC2 instance i was unable to locate the "cd /etc/ansible/hosts". I have installed ansible using pip command i.e., "$sudo pip install ansible". I have been tried to resolve the issue but unable to find any proper documentation. The links I tried to install and configure ansible are as follows: [URL] [URL] Guide me to configure the ansible hosts path to run the ansible commands and playbooks according to my requirements.
|
Install and Configure Ansible on AWS EC2 Redhat Instance I have just started learning Ansible configuration management tool and I was going through Linux Academy tutorials to run implement ansible commands, everything was good and easy with the linux-academy servers but when I tried to replicate the same in AWS EC2 instance i was unable to locate the "cd /etc/ansible/hosts". I have installed ansible using pip command i.e., "$sudo pip install ansible". I have been tried to resolve the issue but unable to find any proper documentation. The links I tried to install and configure ansible are as follows: [URL] [URL] Guide me to configure the ansible hosts path to run the ansible commands and playbooks according to my requirements.
|
amazon-ec2, configuration, installation, ansible, redhat
| 2
| 7,707
| 3
|
https://stackoverflow.com/questions/36962836/install-and-configure-ansible-on-aws-ec2-redhat-instance
|
35,704,418
|
Compiling Python3.5 on RedHat 6.4 - missing tkinter
|
Did any of you encountered an issue with missing tkInter when trying to compile the new Python from source on redhat 6? "The necessary bits to build these optional modules were not found: _tkinter To find the necessary bits, look in setup.py in detect_modules() for the module's name. Failed to build these modules: binascii zlib" It's a company interal machine. I've got an access to yum, but that's it. Yum is only finding tkInter version related to system Python which is 2.6.6. Is there any tkInter dependency that i might be missing here? The list was longer, but installing few libraries helped. I'm still stuck with that last one and running out of ideas. I appreciate your help.
|
Compiling Python3.5 on RedHat 6.4 - missing tkinter Did any of you encountered an issue with missing tkInter when trying to compile the new Python from source on redhat 6? "The necessary bits to build these optional modules were not found: _tkinter To find the necessary bits, look in setup.py in detect_modules() for the module's name. Failed to build these modules: binascii zlib" It's a company interal machine. I've got an access to yum, but that's it. Yum is only finding tkInter version related to system Python which is 2.6.6. Is there any tkInter dependency that i might be missing here? The list was longer, but installing few libraries helped. I'm still stuck with that last one and running out of ideas. I appreciate your help.
|
python, tkinter, redhat, zlib, binascii
| 2
| 1,940
| 1
|
https://stackoverflow.com/questions/35704418/compiling-python3-5-on-redhat-6-4-missing-tkinter
|
34,459,528
|
fail to upgrade docker on redhat7
|
Currently docker 1.7.1 was installed on my machine, I want to upgrade it to latest version by below steps. 1. service docker stop 2. wget [URL] -O /usr/bin/docker 3. service docker start But I met issues when I executed the third step. [root@xxx ~]# service docker start Redirecting to /bin/systemctl start docker.service Job for docker.service failed. See 'systemctl status docker.service' and 'journalctl -xn' for details. Then I execute the two command to get more info like below [root@xxxxx ~]# systemctl status docker.service docker.service - Docker Application Container Engine Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled) Active: failed (Result: exit-code) since Thu 2015-12-24 21:18:20 EST; 22s ago Docs: [URL] Process: 28160 ExecStart=/usr/bin/docker -d $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY (code=exited, status=2) Main PID: 28160 (code=exited, status=2) Dec 24 21:18:20 abc.host.com systemd[1]: Starting Docker Application Container Engine... Dec 24 21:18:20 abc.host.com docker[28160]: Warning: '-d' is deprecated, it will be removed soon. See usage. Dec 24 21:18:20 abc.host.com docker[28160]: flag provided but not defined: --add-registry Dec 24 21:18:20 abc.host.com docker[28160]: See '/usr/bin/docker --help'. Dec 24 21:18:20 abc.host.com systemd[1]: docker.service: main process exited, code=exited, status=2/INVALIDARGUMENT Dec 24 21:18:20 abc.host.com systemd[1]: Failed to start Docker Application Container Engine. Dec 24 21:18:20 abc.host.com systemd[1]: Unit docker.service entered failed state. [root@xxxxx ~]# journalctl -xn -- Logs begin at Mon 2015-12-07 06:26:15 EST, end at Thu 2015-12-24 21:18:20 EST. -- Dec 24 21:18:20 abc.host.com systemd[1]: docker-storage-setup.service: main process exited, code=exited, status=1/FAILURE Dec 24 21:18:20 abc.host.com systemd[1]: Failed to start Docker Storage Setup. -- Subject: Unit docker-storage-setup.service has failed -- Defined-By: systemd -- Support: [URL] -- -- Unit docker-storage-setup.service has failed. -- -- The result is failed. Dec 24 21:18:20 abc.host.com systemd[1]: Unit docker-storage-setup.service entered failed state. Dec 24 21:18:20 abc.host.com systemd[1]: Starting Docker Application Container Engine... -- Subject: Unit docker.service has begun with start-up -- Defined-By: systemd -- Support: [URL] -- -- Unit docker.service has begun starting up. Dec 24 21:18:20 abc.host.com docker[28160]: Warning: '-d' is deprecated, it will be removed soon. See usage. Dec 24 21:18:20 abc.host.com docker[28160]: flag provided but not defined: --add-registry Dec 24 21:18:20 abc.host.com docker[28160]: See '/usr/bin/docker --help'. Dec 24 21:18:20 abc.host.com systemd[1]: docker.service: main process exited, code=exited, status=2/INVALIDARGUMENT Dec 24 21:18:20 abc.host.com systemd[1]: Failed to start Docker Application Container Engine. -- Subject: Unit docker.service has failed -- Defined-By: systemd -- Support: [URL] -- -- Unit docker.service has failed. -- -- The result is failed. Dec 24 21:18:20 abc.host.com systemd[1]: Unit docker.service entered failed state.
|
fail to upgrade docker on redhat7 Currently docker 1.7.1 was installed on my machine, I want to upgrade it to latest version by below steps. 1. service docker stop 2. wget [URL] -O /usr/bin/docker 3. service docker start But I met issues when I executed the third step. [root@xxx ~]# service docker start Redirecting to /bin/systemctl start docker.service Job for docker.service failed. See 'systemctl status docker.service' and 'journalctl -xn' for details. Then I execute the two command to get more info like below [root@xxxxx ~]# systemctl status docker.service docker.service - Docker Application Container Engine Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled) Active: failed (Result: exit-code) since Thu 2015-12-24 21:18:20 EST; 22s ago Docs: [URL] Process: 28160 ExecStart=/usr/bin/docker -d $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY (code=exited, status=2) Main PID: 28160 (code=exited, status=2) Dec 24 21:18:20 abc.host.com systemd[1]: Starting Docker Application Container Engine... Dec 24 21:18:20 abc.host.com docker[28160]: Warning: '-d' is deprecated, it will be removed soon. See usage. Dec 24 21:18:20 abc.host.com docker[28160]: flag provided but not defined: --add-registry Dec 24 21:18:20 abc.host.com docker[28160]: See '/usr/bin/docker --help'. Dec 24 21:18:20 abc.host.com systemd[1]: docker.service: main process exited, code=exited, status=2/INVALIDARGUMENT Dec 24 21:18:20 abc.host.com systemd[1]: Failed to start Docker Application Container Engine. Dec 24 21:18:20 abc.host.com systemd[1]: Unit docker.service entered failed state. [root@xxxxx ~]# journalctl -xn -- Logs begin at Mon 2015-12-07 06:26:15 EST, end at Thu 2015-12-24 21:18:20 EST. -- Dec 24 21:18:20 abc.host.com systemd[1]: docker-storage-setup.service: main process exited, code=exited, status=1/FAILURE Dec 24 21:18:20 abc.host.com systemd[1]: Failed to start Docker Storage Setup. -- Subject: Unit docker-storage-setup.service has failed -- Defined-By: systemd -- Support: [URL] -- -- Unit docker-storage-setup.service has failed. -- -- The result is failed. Dec 24 21:18:20 abc.host.com systemd[1]: Unit docker-storage-setup.service entered failed state. Dec 24 21:18:20 abc.host.com systemd[1]: Starting Docker Application Container Engine... -- Subject: Unit docker.service has begun with start-up -- Defined-By: systemd -- Support: [URL] -- -- Unit docker.service has begun starting up. Dec 24 21:18:20 abc.host.com docker[28160]: Warning: '-d' is deprecated, it will be removed soon. See usage. Dec 24 21:18:20 abc.host.com docker[28160]: flag provided but not defined: --add-registry Dec 24 21:18:20 abc.host.com docker[28160]: See '/usr/bin/docker --help'. Dec 24 21:18:20 abc.host.com systemd[1]: docker.service: main process exited, code=exited, status=2/INVALIDARGUMENT Dec 24 21:18:20 abc.host.com systemd[1]: Failed to start Docker Application Container Engine. -- Subject: Unit docker.service has failed -- Defined-By: systemd -- Support: [URL] -- -- Unit docker.service has failed. -- -- The result is failed. Dec 24 21:18:20 abc.host.com systemd[1]: Unit docker.service entered failed state.
|
docker, redhat
| 2
| 913
| 2
|
https://stackoverflow.com/questions/34459528/fail-to-upgrade-docker-on-redhat7
|
34,316,513
|
nginx permission denied error on get
|
For some reason i'm getting permission denied error while reading the file using nginx and in rhel6, here is my output of the log file tail -f /var/log/nginx/ph-repo.error.log and the logs says "/opt/nginx/nginx-1.8.0-1.el6.ngx.x86_64.rpm" failed (13: Permission denied), client: 10.20.5.236, server: my-repo, request: "GET /nginx/nginx-1.8.0-1.el6.ngx.x86_64.rpm HTTP/1.1", host: "my-repo" When i check the permission of the file it's 777 [root@my-repo]# ls -l nginx/nginx-1.8.0-1.el6.ngx.x86_64.rpm -rwxrwxrwx. 1 root root 360628 Oct 23 02:59 nginx/nginx-1.8.0-1.el6.ngx.x86_64.rpm The nginx process is also running as root [root@ph-repo]# ps -elf | grep nginx 1 S root 1527 1 0 80 0 - 11195 rt_sig 09:48 ? 00:00:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf 5 S root 1528 1527 0 80 0 - 11378 ep_pol 09:48 ? 00:00:00 nginx: worker process 0 S root 3062 2258 0 80 0 - 25827 pipe_w 10:52 pts/1 00:00:00 grep nginx The ACL [root@my-repo]# getfacl nginx # file: nginx # owner: root # group: root user::rwx group::rwx other::rwx [root@my-repo]# getfacl nginx/nginx-1.8.0-1.el6.ngx.x86_64.rpm # file: nginx/nginx-1.8.0-1.el6.ngx.x86_64.rpm # owner: root # group: root user::rwx group::rwx other::rwx I'm not sure what's wrong is happening here can some one help me on this please
|
nginx permission denied error on get For some reason i'm getting permission denied error while reading the file using nginx and in rhel6, here is my output of the log file tail -f /var/log/nginx/ph-repo.error.log and the logs says "/opt/nginx/nginx-1.8.0-1.el6.ngx.x86_64.rpm" failed (13: Permission denied), client: 10.20.5.236, server: my-repo, request: "GET /nginx/nginx-1.8.0-1.el6.ngx.x86_64.rpm HTTP/1.1", host: "my-repo" When i check the permission of the file it's 777 [root@my-repo]# ls -l nginx/nginx-1.8.0-1.el6.ngx.x86_64.rpm -rwxrwxrwx. 1 root root 360628 Oct 23 02:59 nginx/nginx-1.8.0-1.el6.ngx.x86_64.rpm The nginx process is also running as root [root@ph-repo]# ps -elf | grep nginx 1 S root 1527 1 0 80 0 - 11195 rt_sig 09:48 ? 00:00:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf 5 S root 1528 1527 0 80 0 - 11378 ep_pol 09:48 ? 00:00:00 nginx: worker process 0 S root 3062 2258 0 80 0 - 25827 pipe_w 10:52 pts/1 00:00:00 grep nginx The ACL [root@my-repo]# getfacl nginx # file: nginx # owner: root # group: root user::rwx group::rwx other::rwx [root@my-repo]# getfacl nginx/nginx-1.8.0-1.el6.ngx.x86_64.rpm # file: nginx/nginx-1.8.0-1.el6.ngx.x86_64.rpm # owner: root # group: root user::rwx group::rwx other::rwx I'm not sure what's wrong is happening here can some one help me on this please
|
linux, nginx, redhat
| 2
| 2,385
| 2
|
https://stackoverflow.com/questions/34316513/nginx-permission-denied-error-on-get
|
34,290,315
|
PHP version in terminal differs from the one in the browser
|
I'm using RHEL 6.6 and Apache 2.2.15 . When I type php -v into terminal I get the right version: 5.6.11 Unfortunately in the web browser phpinfo() returns: 5.3.3 The server only has access to Intranet, so I can't use things like Yum. Despite the fact that there are clearly two different php versions installed, there is only one file libphp5.so and it is linked in the httpd.conf file. Additionally when I type php -i into the console I get the result: Loaded Configuration file: none Any idea how to force Apache to use the new version?
|
PHP version in terminal differs from the one in the browser I'm using RHEL 6.6 and Apache 2.2.15 . When I type php -v into terminal I get the right version: 5.6.11 Unfortunately in the web browser phpinfo() returns: 5.3.3 The server only has access to Intranet, so I can't use things like Yum. Despite the fact that there are clearly two different php versions installed, there is only one file libphp5.so and it is linked in the httpd.conf file. Additionally when I type php -i into the console I get the result: Loaded Configuration file: none Any idea how to force Apache to use the new version?
|
php, apache, redhat, rhel
| 2
| 193
| 1
|
https://stackoverflow.com/questions/34290315/php-version-in-terminal-differs-from-the-one-in-the-browser
|
34,161,416
|
How to run sudo access without password in unix
|
I have set up sudoers.d file unix as below:-- User_Alias OOZIEUSERS1 = user1, user2 Runas_Alias APP1 = oozie Cmnd_Alias SU_APP1 = /bin/su - oozie OOZIEUSERS1 ALL = (root) SU_APP1 OOZIEUSERS1 ALL = (APP1) ALL However by setting in the above way everytime whenever I login as say user1 and then do following:- sudo su - oozie Its asking for password of the user. How can I implement such that "oozie"(appln ID) doesn't ask for password at all for all users.
|
How to run sudo access without password in unix I have set up sudoers.d file unix as below:-- User_Alias OOZIEUSERS1 = user1, user2 Runas_Alias APP1 = oozie Cmnd_Alias SU_APP1 = /bin/su - oozie OOZIEUSERS1 ALL = (root) SU_APP1 OOZIEUSERS1 ALL = (APP1) ALL However by setting in the above way everytime whenever I login as say user1 and then do following:- sudo su - oozie Its asking for password of the user. How can I implement such that "oozie"(appln ID) doesn't ask for password at all for all users.
|
linux, unix, redhat, sudoers
| 2
| 439
| 2
|
https://stackoverflow.com/questions/34161416/how-to-run-sudo-access-without-password-in-unix
|
32,306,322
|
Unix scripting in bash to search logs and return a specific part of a specific log file
|
Dare I say it, I am mainly a Windows person (please don't shoot me down too soon), although I have played around in Linux in the past (mostly command line). I have a process I have to go through once in a while which is in essence searching all log files in a directory (and sub directories) for a certain filename and then getting something out of said log file. My first step is grep -Ril <filename or Partial filename you are looking for> log/*.log From that I have the log filename and I vi that to find where it occurs. To clarify: that grep is looking through all log files seeing if the filename after the -Ril occurs within them. vi log/<log filename> /<filename or Partial filename you are looking for> I do j a couple of times to find CDATA, and then I have a URL I need to extract, then in putty do a select, copy and paste it into a browser. Then I quit vi without saving. FRED1 triggered at Mon Aug 31 14:09:31 NZST 2015 with incoming file /u03/incoming/fred/Fred.2 Fred.2 start grep end grep Renamed to Fred.2.20150831140931 <?xml version="1.0" encoding="UTF-8"?> <runResponse><runReturn><item><name>runId</name><value>1703775</value></item><item><name>runHistoryId</name><value>1703775</value></item><item><name>runReportUrl</name><value>[URL] and path>b1a&sp=l0&sp=l1703775&sp=l1703775</value></item><item><name>displayRunReportUrl</name><value><![CDATA[[URL] and path2>&sp=l1703775&sp=l1703775]]></value></item><item><name>runStartTime</name><value>08/31/15 14:09</value></item><item><name>flowResponse</name><value></value></item><item><name>flowResult</name><value></value></item><item><name>flowReturnCode</name><value>Not a Return</value></item></runReturn></runResponse> filePath=/u03/incoming/fred&fileName=Fred.2.20150831140931&team=dps&direction=incoming&size=31108&time=Aug 31 14:09&fts=nzlssftsd01 ---------------------------------------------------------------------------------------- FRED1 triggered at Mon Aug 31 14:09:31 NZST 2015 with incoming file /u03/incoming/fred/Fred.3 Fred.3 start grep end grep Renamed to Fred.3.20150999999999 <?xml version="1.0" encoding="UTF-8"?> <runResponse><runReturn><item><name>runId</name><value>1703775</value></item><item><name>runHistoryId</name><value>1703775</value></item><item><name>runReportUrl</name><value>[URL] and path>b1a&sp=l0&sp=l999999&sp=l9999999</value></item><item><name>displayRunReportUrl</name><value><![CDATA[[URL] and path2>&sp=l999999&sp=l999999]]></value></item><item><name>runStartTime</name><value>08/31/15 14:09</value></item><item><name>flowResponse</name><value></value></item><item><name>flowResult</name><value></value></item><item><name>flowReturnCode</name><value>Not a Return</value></item></runReturn></runResponse> filePath=/u03/incoming/fred&fileName=Fred.3.20150999999999&team=dps&direction=incoming&size=31108&time=Aug 31 14:09&fts=nzlssftsd01 What I want to grab is the URL in CDATA[[URL] and path2>&sp=l999999&sp=l999999] for Fred.3.20150999999999 indicated by the line Renamed to Fred.3.20150999999999 . Is this possible? (And I do apologise by the XML formatting, but it is exactly as it is in the log file.) Thanks in advance, Tel
|
Unix scripting in bash to search logs and return a specific part of a specific log file Dare I say it, I am mainly a Windows person (please don't shoot me down too soon), although I have played around in Linux in the past (mostly command line). I have a process I have to go through once in a while which is in essence searching all log files in a directory (and sub directories) for a certain filename and then getting something out of said log file. My first step is grep -Ril <filename or Partial filename you are looking for> log/*.log From that I have the log filename and I vi that to find where it occurs. To clarify: that grep is looking through all log files seeing if the filename after the -Ril occurs within them. vi log/<log filename> /<filename or Partial filename you are looking for> I do j a couple of times to find CDATA, and then I have a URL I need to extract, then in putty do a select, copy and paste it into a browser. Then I quit vi without saving. FRED1 triggered at Mon Aug 31 14:09:31 NZST 2015 with incoming file /u03/incoming/fred/Fred.2 Fred.2 start grep end grep Renamed to Fred.2.20150831140931 <?xml version="1.0" encoding="UTF-8"?> <runResponse><runReturn><item><name>runId</name><value>1703775</value></item><item><name>runHistoryId</name><value>1703775</value></item><item><name>runReportUrl</name><value>[URL] and path>b1a&sp=l0&sp=l1703775&sp=l1703775</value></item><item><name>displayRunReportUrl</name><value><![CDATA[[URL] and path2>&sp=l1703775&sp=l1703775]]></value></item><item><name>runStartTime</name><value>08/31/15 14:09</value></item><item><name>flowResponse</name><value></value></item><item><name>flowResult</name><value></value></item><item><name>flowReturnCode</name><value>Not a Return</value></item></runReturn></runResponse> filePath=/u03/incoming/fred&fileName=Fred.2.20150831140931&team=dps&direction=incoming&size=31108&time=Aug 31 14:09&fts=nzlssftsd01 ---------------------------------------------------------------------------------------- FRED1 triggered at Mon Aug 31 14:09:31 NZST 2015 with incoming file /u03/incoming/fred/Fred.3 Fred.3 start grep end grep Renamed to Fred.3.20150999999999 <?xml version="1.0" encoding="UTF-8"?> <runResponse><runReturn><item><name>runId</name><value>1703775</value></item><item><name>runHistoryId</name><value>1703775</value></item><item><name>runReportUrl</name><value>[URL] and path>b1a&sp=l0&sp=l999999&sp=l9999999</value></item><item><name>displayRunReportUrl</name><value><![CDATA[[URL] and path2>&sp=l999999&sp=l999999]]></value></item><item><name>runStartTime</name><value>08/31/15 14:09</value></item><item><name>flowResponse</name><value></value></item><item><name>flowResult</name><value></value></item><item><name>flowReturnCode</name><value>Not a Return</value></item></runReturn></runResponse> filePath=/u03/incoming/fred&fileName=Fred.3.20150999999999&team=dps&direction=incoming&size=31108&time=Aug 31 14:09&fts=nzlssftsd01 What I want to grab is the URL in CDATA[[URL] and path2>&sp=l999999&sp=l999999] for Fred.3.20150999999999 indicated by the line Renamed to Fred.3.20150999999999 . Is this possible? (And I do apologise by the XML formatting, but it is exactly as it is in the log file.) Thanks in advance, Tel
|
bash, scripting, redhat
| 2
| 791
| 2
|
https://stackoverflow.com/questions/32306322/unix-scripting-in-bash-to-search-logs-and-return-a-specific-part-of-a-specific-l
|
30,904,631
|
How to install WebSphere MQ resource adapter (wmq.jmsra.rar) in JBoss 6.2 EAP?
|
Design: I have a queue manager (EXAMPLE.QM) with Server-connection channel (EXAMPLE.CHANNEL), request queue (EXAMPLE.TEST.QUEUE), and reply queue (EXAMPLE.TEST.REPLY). My application will be using a message driven bean (MDB) to listen on EXAMPLE.TEST.QUEUE. When message arrives an instance of MDB is created and business logic is done which includes quering databases and then the reply is put on the EXAMPLE.TEST.REPLY queue. This is one transaction. In the event of crashes or any failure the exception will be caught and everything will be rolled back. I wanted to do the connection pooling for both MQ and Databases on the server side. Setup: WebSphere MQ 7.0.1, JBoss 6.2 EAP, Java 1.7.0_21, IBM DB2 9.7 I obtained the wmq.jmsra.rar from the MQ_INSTALLATION_PATH\java\lib\jca and I also got the com.ibm.mqetclient.jar As per Redhat installation guide in order to support XATransactions I repackaged the wmq.jmsra.rar to include com.ibm.mqetclient.jar using command jar -uf wmq.jmsra.rar com.ibm.mqetclient.jar You can skip the next paragraph and look at the xml snippet provided below for same information. After doing so instead manually dropping the wmq.jmsra.rar into JBoss deployment directory I used the management console. I then went ahead and added in profile view under Resource adapters. I set Archive to wmq.jmsra.rar and TX to XATransaction. I then set the properties to the following: logWriterEnabled - true, maxConnections - 10, reconnectionRetryCount - 5, traceLevel - 6, traceEnabled - true, reconnectionRetryInterval - 300000, and connectionConcurrency - 5. After doing so I added a connection definition. I named it WMQ_ConnectionFactory, JNDI - java:jboss/WMQ_ConnectionFactory, and Connection Class - com.ibm.mq.connector.outbound.ManagedConnectionFactoryImpl. I set the properties as follow: port - 1414, hostName - localhost, channel - EXAMPLE.CHANNEL, transportType - BINDINGS_THEN_CLIENT, failIfQuiesce - true, and queueManager - EXAMPLE.QM. I then went on to add 2 Admin Objects. 1st I named EXAMPLE_REQ_Queue, JNDI - java:jboss/EXAMPLE_REQ_Queue, and Class name - com.ibm.mq.connector.outbound.MQQueueProxy. I have it the following properties: useJNDI - true, readAheadClosePolicy - ALL, startTimeout - 10000, destination - EXAMPLE.TEST.REQUEST, and destinationType - javax.jms.Queue. The 2nd admin object I named EXAMPLE_REP_Queue, JNDI - java:jboss/EXAMPLE_REP-Queue, and class name - com.ibm.mq.connector.outbound.MQQueueProxy. I gave it the following properties: failifQuiesce - true, baseQueueManagerName - EXAMPLE.QM, persistence - HIGH, encoding - NNN, baseQueueName - EXAMPLE.TEST.REPLY, targetClient - MQ, and expiry 300000. Here is a snippet from the standalone.xml file <subsystem xmlns="urn:jboss:domain:resource-adapters:1.1"> <resource-adapters> <resource-adapter id="wmq.jmsra.rar"> <archive> wmq.jmsra.rar </archive> <transaction-support>XATransaction</transaction-support> <config-property name="logWriterEnabled"> true </config-property> <config-property name="maxConnections"> 10 </config-property> <config-property name="traceEnabled"> true </config-property> <config-property name="traceLevel"> 6 </config-property> <config-property name="reconnectionRetryCount"> 5 </config-property> <config-property name="reconnectionRetryInterval"> 300000 </config-property> <config-property name="connectionConcurrency"> 5 </config-property> <connection-definitions> <connection-definition class-name="com.ibm.mq.connector.outbound.ManagedConnectionFactoryImpl" jndi-name="java:jboss/WMQ_ConnectionFacotry" enabled="true" pool-name="WMQ_ConnectionFactory"> <config-property name="port"> 1414 </config-property> <config-property name="hostName"> localhost </config-property> <config-property name="channel"> EXAMPLE.CHANNEL </config-property> <config-property name="failIfQuiesce"> true </config-property> <config-property name="transportType"> BINDINGS_THEN_CLIENT </config-property> <config-property name="queueManager"> EXAMPLE.QM </config-property> <security> <application/> </security> <validation> <background-validation>false</background-validation> </validation> </connection-definition> </connection-definitions> <admin-objects> <admin-object class-name="com.ibm.mq.connector.outbound.MQQueueProxy" jndi-name="java:jboss/EXAMPLE_REQ_Queue" enabled="true" use-java-context="false" pool-name="EXAMPLE_REQ_Queue"> <config-property name="useJNDI"> true </config-property> <config-property name="startTimeout"> 10000 </config-property> <config-property name="destination"> EXAMPLE.TEST.REQUEST </config-property> <config-property name="readAheadClosePolicy"> ALL </config-property> </admin-object> <admin-object class-name="com.ibm.mq.connector.outbound.MQQueueProxy" jndi-name="java:jboss/EXAMPLE_REP_Queue" enabled="true" use-java-context="false" pool-name="EXAMPLE_REP_Queue"> <config-property name="failIfQuiesce"> true </config-property> <config-property name="baseQueueManagerName"> EXAMPLE.QM </config-property> <config-property name="persistence"> HIGH </config-property> <config-property name="encoding"> NNN </config-property> <config-property name="baseQueueName"> EXAMPLE.TEST.REPLY </config-property> <config-property name="targetClient"> MQ </config-property> <config-property name="expiry"> 300000 </config-property> </admin-object> </admin-objects> </resource-adapter> </resource-adapters> </subsystem> The problem: I get the following exception: 15:54:53,325 ERROR [org.jboss.msc.service.fail] (ResourceAdapterDeploymentService Thread Pool -- 1) MSC000001: Failed to start service jboss.ra.deployment."wmq.jmsra.rar": org.jboss.msc.service.StartException in service jboss.ra.deployment."wmq.jmsra.rar": JBAS010446: Failed to start RA deployment [wmq.jmsra] at org.jboss.as.connector.services.resourceadapters.deployment.AbstractResourceAdapterDeploymentService$1.run(AbstractResourceAdapterDeploymentService.java:279) [jboss-as-connector-7.3.0.Final-redhat-14.jar:7.3.0.Final-redhat-14] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_21] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_21] at java.lang.Thread.run(Thread.java:722) [rt.jar:1.7.0_21] at org.jboss.threads.JBossThread.run(JBossThread.java:122) [jboss-threads-2.1.1.Final-redhat-1.jar:2.1.1.Final-redhat-1] Caused by: org.jboss.jca.deployers.common.DeployException: IJ020060: Unable to inject: com.ibm.mq.connector.outbound.MQQueueProxy property: destination value: EXAMPLE.TEST.REQUEST at org.jboss.jca.deployers.common.AbstractResourceAdapterDeployer.initAdminObject(AbstractResourceAdapterDeployer.java:907) [ironjacamar-deployers-common-1.0.23.Final-redhat-1.jar:1.0.23.Final-redhat-1] at org.jboss.jca.deployers.common.AbstractResourceAdapterDeployer.createObjectsAndInjectValue(AbstractResourceAdapterDeployer.java:2382) [ironjacamar-deployers-common-1.0.23.Final-redhat-1.jar:1.0.23.Final-redhat-1] at org.jboss.as.connector.services.resourceadapters.deployment.ResourceAdapterXmlDeploymentService$AS7RaXmlDeployer.doDeploy(ResourceAdapterXmlDeploymentService.java:185) [jboss-as-connector-7.3.0.Final-redhat-14.jar:7.3.0.Final-redhat-14] at org.jboss.as.connector.services.resourceadapters.deployment.ResourceAdapterXmlDeploymentService.start(ResourceAdapterXmlDeploymentService.java:106) [jboss-as-connector-7.3.0.Final-redhat-14.jar:7.3.0.Final-redhat-14] at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1811) [jboss-msc-1.0.4.GA-redhat-1.jar:1.0.4.GA-redhat-1] at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1746) [jboss-msc-1.0.4.GA-redhat-1.jar:1.0.4.GA-redhat-1] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_21] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_21] at java.lang.Thread.run(Thread.java:722) [rt.jar:1.7.0_21] 15:54:53,343 INFO [org.jboss.as.server] (Controller Boot Thread) JBAS018559: Deployed "wmq.jmsra.rar" (runtime-name : "wmq.jmsra.rar") 15:54:53,344 INFO [org.jboss.as.controller] (Controller Boot Thread) JBAS014774: Service status report JBAS014777: Services which failed to start: service jboss.ra.deployment."wmq.jmsra.rar": org.jboss.msc.service.StartException in service jboss.ra.deployment."wmq.jmsra.rar": JBAS010446: Failed to start RA deployment [wmq.jmsra] I guess the main part is Caused by: org.jboss.jca.deployers.common.DeployException: IJ020060: Unable to inject: com.ibm.mq.connector.outbound.MQQueueProxy property: destination value: EXAMPLE.TEST.REQUEST Prior to this I had the same error and instead it said destinationType value: javax.jms.Queue. I then went ahead and removed that property and tried again and now I got this error. I am not certain what to do next. Tutorials I have been following: IBM - The WebSphere MQ resource adapter , Redhat Jboss Documentation - JCA Architecture Chapter, and Oracle - Message Driven Beans Java EE6 tutorial My rep only allows me to post 2 links so the last two tutorials are not linked. Any help will be greatly appreciated.
|
How to install WebSphere MQ resource adapter (wmq.jmsra.rar) in JBoss 6.2 EAP? Design: I have a queue manager (EXAMPLE.QM) with Server-connection channel (EXAMPLE.CHANNEL), request queue (EXAMPLE.TEST.QUEUE), and reply queue (EXAMPLE.TEST.REPLY). My application will be using a message driven bean (MDB) to listen on EXAMPLE.TEST.QUEUE. When message arrives an instance of MDB is created and business logic is done which includes quering databases and then the reply is put on the EXAMPLE.TEST.REPLY queue. This is one transaction. In the event of crashes or any failure the exception will be caught and everything will be rolled back. I wanted to do the connection pooling for both MQ and Databases on the server side. Setup: WebSphere MQ 7.0.1, JBoss 6.2 EAP, Java 1.7.0_21, IBM DB2 9.7 I obtained the wmq.jmsra.rar from the MQ_INSTALLATION_PATH\java\lib\jca and I also got the com.ibm.mqetclient.jar As per Redhat installation guide in order to support XATransactions I repackaged the wmq.jmsra.rar to include com.ibm.mqetclient.jar using command jar -uf wmq.jmsra.rar com.ibm.mqetclient.jar You can skip the next paragraph and look at the xml snippet provided below for same information. After doing so instead manually dropping the wmq.jmsra.rar into JBoss deployment directory I used the management console. I then went ahead and added in profile view under Resource adapters. I set Archive to wmq.jmsra.rar and TX to XATransaction. I then set the properties to the following: logWriterEnabled - true, maxConnections - 10, reconnectionRetryCount - 5, traceLevel - 6, traceEnabled - true, reconnectionRetryInterval - 300000, and connectionConcurrency - 5. After doing so I added a connection definition. I named it WMQ_ConnectionFactory, JNDI - java:jboss/WMQ_ConnectionFactory, and Connection Class - com.ibm.mq.connector.outbound.ManagedConnectionFactoryImpl. I set the properties as follow: port - 1414, hostName - localhost, channel - EXAMPLE.CHANNEL, transportType - BINDINGS_THEN_CLIENT, failIfQuiesce - true, and queueManager - EXAMPLE.QM. I then went on to add 2 Admin Objects. 1st I named EXAMPLE_REQ_Queue, JNDI - java:jboss/EXAMPLE_REQ_Queue, and Class name - com.ibm.mq.connector.outbound.MQQueueProxy. I have it the following properties: useJNDI - true, readAheadClosePolicy - ALL, startTimeout - 10000, destination - EXAMPLE.TEST.REQUEST, and destinationType - javax.jms.Queue. The 2nd admin object I named EXAMPLE_REP_Queue, JNDI - java:jboss/EXAMPLE_REP-Queue, and class name - com.ibm.mq.connector.outbound.MQQueueProxy. I gave it the following properties: failifQuiesce - true, baseQueueManagerName - EXAMPLE.QM, persistence - HIGH, encoding - NNN, baseQueueName - EXAMPLE.TEST.REPLY, targetClient - MQ, and expiry 300000. Here is a snippet from the standalone.xml file <subsystem xmlns="urn:jboss:domain:resource-adapters:1.1"> <resource-adapters> <resource-adapter id="wmq.jmsra.rar"> <archive> wmq.jmsra.rar </archive> <transaction-support>XATransaction</transaction-support> <config-property name="logWriterEnabled"> true </config-property> <config-property name="maxConnections"> 10 </config-property> <config-property name="traceEnabled"> true </config-property> <config-property name="traceLevel"> 6 </config-property> <config-property name="reconnectionRetryCount"> 5 </config-property> <config-property name="reconnectionRetryInterval"> 300000 </config-property> <config-property name="connectionConcurrency"> 5 </config-property> <connection-definitions> <connection-definition class-name="com.ibm.mq.connector.outbound.ManagedConnectionFactoryImpl" jndi-name="java:jboss/WMQ_ConnectionFacotry" enabled="true" pool-name="WMQ_ConnectionFactory"> <config-property name="port"> 1414 </config-property> <config-property name="hostName"> localhost </config-property> <config-property name="channel"> EXAMPLE.CHANNEL </config-property> <config-property name="failIfQuiesce"> true </config-property> <config-property name="transportType"> BINDINGS_THEN_CLIENT </config-property> <config-property name="queueManager"> EXAMPLE.QM </config-property> <security> <application/> </security> <validation> <background-validation>false</background-validation> </validation> </connection-definition> </connection-definitions> <admin-objects> <admin-object class-name="com.ibm.mq.connector.outbound.MQQueueProxy" jndi-name="java:jboss/EXAMPLE_REQ_Queue" enabled="true" use-java-context="false" pool-name="EXAMPLE_REQ_Queue"> <config-property name="useJNDI"> true </config-property> <config-property name="startTimeout"> 10000 </config-property> <config-property name="destination"> EXAMPLE.TEST.REQUEST </config-property> <config-property name="readAheadClosePolicy"> ALL </config-property> </admin-object> <admin-object class-name="com.ibm.mq.connector.outbound.MQQueueProxy" jndi-name="java:jboss/EXAMPLE_REP_Queue" enabled="true" use-java-context="false" pool-name="EXAMPLE_REP_Queue"> <config-property name="failIfQuiesce"> true </config-property> <config-property name="baseQueueManagerName"> EXAMPLE.QM </config-property> <config-property name="persistence"> HIGH </config-property> <config-property name="encoding"> NNN </config-property> <config-property name="baseQueueName"> EXAMPLE.TEST.REPLY </config-property> <config-property name="targetClient"> MQ </config-property> <config-property name="expiry"> 300000 </config-property> </admin-object> </admin-objects> </resource-adapter> </resource-adapters> </subsystem> The problem: I get the following exception: 15:54:53,325 ERROR [org.jboss.msc.service.fail] (ResourceAdapterDeploymentService Thread Pool -- 1) MSC000001: Failed to start service jboss.ra.deployment."wmq.jmsra.rar": org.jboss.msc.service.StartException in service jboss.ra.deployment."wmq.jmsra.rar": JBAS010446: Failed to start RA deployment [wmq.jmsra] at org.jboss.as.connector.services.resourceadapters.deployment.AbstractResourceAdapterDeploymentService$1.run(AbstractResourceAdapterDeploymentService.java:279) [jboss-as-connector-7.3.0.Final-redhat-14.jar:7.3.0.Final-redhat-14] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_21] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_21] at java.lang.Thread.run(Thread.java:722) [rt.jar:1.7.0_21] at org.jboss.threads.JBossThread.run(JBossThread.java:122) [jboss-threads-2.1.1.Final-redhat-1.jar:2.1.1.Final-redhat-1] Caused by: org.jboss.jca.deployers.common.DeployException: IJ020060: Unable to inject: com.ibm.mq.connector.outbound.MQQueueProxy property: destination value: EXAMPLE.TEST.REQUEST at org.jboss.jca.deployers.common.AbstractResourceAdapterDeployer.initAdminObject(AbstractResourceAdapterDeployer.java:907) [ironjacamar-deployers-common-1.0.23.Final-redhat-1.jar:1.0.23.Final-redhat-1] at org.jboss.jca.deployers.common.AbstractResourceAdapterDeployer.createObjectsAndInjectValue(AbstractResourceAdapterDeployer.java:2382) [ironjacamar-deployers-common-1.0.23.Final-redhat-1.jar:1.0.23.Final-redhat-1] at org.jboss.as.connector.services.resourceadapters.deployment.ResourceAdapterXmlDeploymentService$AS7RaXmlDeployer.doDeploy(ResourceAdapterXmlDeploymentService.java:185) [jboss-as-connector-7.3.0.Final-redhat-14.jar:7.3.0.Final-redhat-14] at org.jboss.as.connector.services.resourceadapters.deployment.ResourceAdapterXmlDeploymentService.start(ResourceAdapterXmlDeploymentService.java:106) [jboss-as-connector-7.3.0.Final-redhat-14.jar:7.3.0.Final-redhat-14] at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1811) [jboss-msc-1.0.4.GA-redhat-1.jar:1.0.4.GA-redhat-1] at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1746) [jboss-msc-1.0.4.GA-redhat-1.jar:1.0.4.GA-redhat-1] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_21] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_21] at java.lang.Thread.run(Thread.java:722) [rt.jar:1.7.0_21] 15:54:53,343 INFO [org.jboss.as.server] (Controller Boot Thread) JBAS018559: Deployed "wmq.jmsra.rar" (runtime-name : "wmq.jmsra.rar") 15:54:53,344 INFO [org.jboss.as.controller] (Controller Boot Thread) JBAS014774: Service status report JBAS014777: Services which failed to start: service jboss.ra.deployment."wmq.jmsra.rar": org.jboss.msc.service.StartException in service jboss.ra.deployment."wmq.jmsra.rar": JBAS010446: Failed to start RA deployment [wmq.jmsra] I guess the main part is Caused by: org.jboss.jca.deployers.common.DeployException: IJ020060: Unable to inject: com.ibm.mq.connector.outbound.MQQueueProxy property: destination value: EXAMPLE.TEST.REQUEST Prior to this I had the same error and instead it said destinationType value: javax.jms.Queue. I then went ahead and removed that property and tried again and now I got this error. I am not certain what to do next. Tutorials I have been following: IBM - The WebSphere MQ resource adapter , Redhat Jboss Documentation - JCA Architecture Chapter, and Oracle - Message Driven Beans Java EE6 tutorial My rep only allows me to post 2 links so the last two tutorials are not linked. Any help will be greatly appreciated.
|
jboss, connection-pooling, ibm-mq, redhat, message-driven-bean
| 2
| 10,503
| 2
|
https://stackoverflow.com/questions/30904631/how-to-install-websphere-mq-resource-adapter-wmq-jmsra-rar-in-jboss-6-2-eap
|
30,510,482
|
How to filter Remote Syslog messages on Red Hat?
|
I'm using a unified log on a server running Red Hat 6 , receiving directed log messages from others servers and managing them with RSyslog . Until now, the /etc/rsyslog.conf have this rule: if $fromhost-ip startswith '172.20.' then /var/log/mylog.log But I don't want to log messages that contains "kernel" and "dnat", so I want to filter all messages, enhancing the rule. How can I do that?
|
How to filter Remote Syslog messages on Red Hat? I'm using a unified log on a server running Red Hat 6 , receiving directed log messages from others servers and managing them with RSyslog . Until now, the /etc/rsyslog.conf have this rule: if $fromhost-ip startswith '172.20.' then /var/log/mylog.log But I don't want to log messages that contains "kernel" and "dnat", so I want to filter all messages, enhancing the rule. How can I do that?
|
linux, filter, redhat, rsyslog
| 2
| 1,647
| 1
|
https://stackoverflow.com/questions/30510482/how-to-filter-remote-syslog-messages-on-red-hat
|
30,206,034
|
OpenShift Hub Creation vs Create New Application
|
I just created an OpenShift free account. When I logged in, I see there's a link called "Create new application". Also, there's an OpenShift Hub where you deploy. What's the difference between "Create new application" vs. creating it via OpenShift Hub? When do you use one way vs. the other? Thanks
|
OpenShift Hub Creation vs Create New Application I just created an OpenShift free account. When I logged in, I see there's a link called "Create new application". Also, there's an OpenShift Hub where you deploy. What's the difference between "Create new application" vs. creating it via OpenShift Hub? When do you use one way vs. the other? Thanks
|
openshift, redhat
| 2
| 46
| 1
|
https://stackoverflow.com/questions/30206034/openshift-hub-creation-vs-create-new-application
|
29,183,519
|
Kerberos master-slave setup : Database propagation, and KDC & KADMIN switching
|
I am trying to setup Kerberos on Redhat with slaves and database propagation (not incremental). I am going through MIT's documentation for KDC installation and configuration . Currently, I have three doubts/issues: Do we need kpropd running on slave KDC, even if we do not have incremental propagation ? I started xinetd service, and tried propagating database (without starting kpropd, as I have not configured incremental propagation), and it gave me an error: kprop: Connection refused while connecting to server However, when I started kpropd in the same setup without any configuration change, I was able to successfully propagate the database. As per the document , it says [Re]start inetd daemon. Alternatively, start kpropd as a stand-alone daemon. This is required when incremental propagation is enabled. I went through MIT's Troubleshooting page as well, and it said the same, i.e. inetd can run kprop. My inetd.conf: krb5_prop stream tcp nowait root /usr/sbin/kpropd kpropd Do we need to add Kerberos Administration Server (admin_server) for slave KDC in krb5.conf? OR In other words, can we have more than one admin_server properties configured in krb5.conf? Since we are configuring a master-slave setup and can switch to a slave KDC creating it a new master at any point of time. We would need to start a Kerberos Administration Server (kadmind) on the new master, as well. Do we need to have hosts for both the admin servers listed in the krb5.conf file? I tried adding both the hosts, but it turns out that this property only picks the last configured one. My krb5.conf looks like: [libdefaults] default_realm = KRB.MY.DOMAIN dns_lookup_realm = false dns_lookup_kdc = false ticket_lifetime = 1h renew_lifetime = 2h forwardable = true [realms] KRB.MY.DOMAIN = { kdc = old-master-host.my.domain kdc = new-master-host.my.domain admin_server = old-master-host.my.domain admin_server = new-master-host.my.domain } [domain_realm] .my.domain = KRB.MY.DOMAIN In such a case, admin server would be looked only at new-master-host.my.domain , even if it is running on old-master-host.my.domain . Can we start Kerberos Administration Server on a slave KDC machine, as specified in MIT documentation ? I tried starting Kerberos Administration Server (kadmind) on my new master and I got an error: Error. This appears to be a slave server, found kpropd.acl Is it not advisable to start the Administration server on the slave machine or do we have to [re]move the kpropd.acl file before we can start Administration server? I would really appreciate any pointers or help.
|
Kerberos master-slave setup : Database propagation, and KDC & KADMIN switching I am trying to setup Kerberos on Redhat with slaves and database propagation (not incremental). I am going through MIT's documentation for KDC installation and configuration . Currently, I have three doubts/issues: Do we need kpropd running on slave KDC, even if we do not have incremental propagation ? I started xinetd service, and tried propagating database (without starting kpropd, as I have not configured incremental propagation), and it gave me an error: kprop: Connection refused while connecting to server However, when I started kpropd in the same setup without any configuration change, I was able to successfully propagate the database. As per the document , it says [Re]start inetd daemon. Alternatively, start kpropd as a stand-alone daemon. This is required when incremental propagation is enabled. I went through MIT's Troubleshooting page as well, and it said the same, i.e. inetd can run kprop. My inetd.conf: krb5_prop stream tcp nowait root /usr/sbin/kpropd kpropd Do we need to add Kerberos Administration Server (admin_server) for slave KDC in krb5.conf? OR In other words, can we have more than one admin_server properties configured in krb5.conf? Since we are configuring a master-slave setup and can switch to a slave KDC creating it a new master at any point of time. We would need to start a Kerberos Administration Server (kadmind) on the new master, as well. Do we need to have hosts for both the admin servers listed in the krb5.conf file? I tried adding both the hosts, but it turns out that this property only picks the last configured one. My krb5.conf looks like: [libdefaults] default_realm = KRB.MY.DOMAIN dns_lookup_realm = false dns_lookup_kdc = false ticket_lifetime = 1h renew_lifetime = 2h forwardable = true [realms] KRB.MY.DOMAIN = { kdc = old-master-host.my.domain kdc = new-master-host.my.domain admin_server = old-master-host.my.domain admin_server = new-master-host.my.domain } [domain_realm] .my.domain = KRB.MY.DOMAIN In such a case, admin server would be looked only at new-master-host.my.domain , even if it is running on old-master-host.my.domain . Can we start Kerberos Administration Server on a slave KDC machine, as specified in MIT documentation ? I tried starting Kerberos Administration Server (kadmind) on my new master and I got an error: Error. This appears to be a slave server, found kpropd.acl Is it not advisable to start the Administration server on the slave machine or do we have to [re]move the kpropd.acl file before we can start Administration server? I would really appreciate any pointers or help.
|
linux, centos, redhat, kerberos, mit-kerberos
| 2
| 3,656
| 1
|
https://stackoverflow.com/questions/29183519/kerberos-master-slave-setup-database-propagation-and-kdc-kadmin-switching
|
27,434,435
|
Using Fault Injection on redhat 6.5
|
I simulate the fault disk using Fauly Injection ( [URL] ). But the /sys/kernel/debug/fail_make_request/ path don't exist. So how can I enable or install it?
|
Using Fault Injection on redhat 6.5 I simulate the fault disk using Fauly Injection ( [URL] ). But the /sys/kernel/debug/fail_make_request/ path don't exist. So how can I enable or install it?
|
linux-kernel, redhat
| 2
| 311
| 1
|
https://stackoverflow.com/questions/27434435/using-fault-injection-on-redhat-6-5
|
27,368,046
|
CMake install can't find library and points to wrong compiler version
|
linux noob here- will appreciate any kind of help. Some background: I'm trying to build a program from source on RHEL 6.5, the dependencies for this program are specifically: GCC 4.7 and above (for C++ 11 support) CMake 2.8.9+ we already had GCC 4.4.7 installed in /usr/libexec/gcc, so our linux person built and installed the new version in /usr/local/libexec/gcc (version 4.9) We didn't have CMake so I installed in from scratch by unizipping the source in /usr/local and following the directions from here: [URL] ./bootstrap make make install so far so good and in the CMakeOutput.log of the CMake it is correctly pointing to the new GCC's path, [COMPILER PATH=/usr/local/libexec/gcc/.../4.9.2/ and I did have to copy a .so file from /usr/lib64 to /usr/local/lib64 in order to successfully bootstrap/make it but I don't think that's the source of my problem. The Problem: Now here's what i'm having trouble with: so when I finally try to build this program using "cmake ." I get the following issues: -- The C compiler identification is GNU 4.4.7 -- Performing Test COMPILER_SUPPORTS_CXX11 - Failed The compiler identification should be version 4.9 and the Test should've succeeded but it did not... -- Could NOT find ZLIB (missing: ZLIB_LIBRARY ZLIB_INCLUDE_DIR) -- Could NOT find PNG (missing: PNG_LIBRARY PNG_PNG INCLUDE_ DIR) Cmake has the FindPNG cmake module file in /usr/local/cmake-3.0.2/Modules but it doesn't seem to know where it is, I tried copying just the FindPNG.cmake file into the local cmake directory of the program and I just kept getting missing module files one after another... Now- I think all these errors could just be a result of something not pointing to something correctly, maybe not setting environment variables for something, missing or wrong CMake commands / variables in the CMakeList file or whatever but I have spent a quite amount of time trying to fix it trying different approaches but just couldn't figure it out...any help will be greatly appreciated!!! Here's the top level CMakeLists.txt of the program I'm trying to build: cmake_minimum_required(VERSION 2.8) project(COLLADA2GLTF) if (NOT WIN32) #[URL] include(CheckCXXCompilerFlag) CHECK_CXX_COMPILER_FLAG("-std=c++11" COMPILER_SUPPORTS_CXX11) CHECK_CXX_COMPILER_FLAG("-std=c++0x" COMPILER_SUPPORTS_CXX0X) if(COMPILER_SUPPORTS_CXX11) set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11") message("-- C++11 Enabled") elseif(COMPILER_SUPPORTS_CXX0X) set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++0x") message("-- C++0x Enabled") else() message(STATUS "The compiler ${CMAKE_CXX_COMPILER} has no C++11 support. Please use a different C++ compiler.") endif() endif() set(USE_OPEN3DGC "ON") set(WITH_IN_SOURCE_BUILD "ON") set(COLLADA2GLTF_BINARY_DIR, COLLADA2GLTF_SOURCE_DIR) set(BUILD_SHARED_LIBS "OFF") list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/cmake") include(GetGitRevisionDescription) get_git_head_revision(GIT_REFSPEC GIT_SHA1) configure_file("${CMAKE_CURRENT_SOURCE_DIR}/GitSHA1.cpp.in" "${CMAKE_CURRENT_BINARY_DIR}/GitSHA1.cpp" @ONLY) set(TARGET_LIBS GeneratedSaxParser_static OpenCOLLADABaseUtils_static UTF_static ftoa_static MathMLSolver_static OpenCOLLADASaxFrameworkLoader_static OpenCOLLADAFramework_static buffer_static) if (NOT WIN32) set(CMAKE_FIND_LIBRARY_SUFFIXES .so .a .dylib) endif() # Lets libxml2 work in a shared library add_definitions(-DLIBXML_STATIC_FOR_DLL) IF(CMAKE_SYSTEM_PROCESSOR MATCHES "x86_64") ADD_DEFINITIONS(-fPIC) ENDIF(CMAKE_SYSTEM_PROCESSOR MATCHES "x86_64") include_directories(${COLLADA2GLTF_SOURCE_DIR}) ....... include_directories(${COLLADA2GLTF_SOURCE_DIR}/dependencies/OpenCOLLADA/COLLADABaseUtils/include) include_directories(${COLLADA2GLTF_SOURCE_DIR}/dependencies/OpenCOLLADA/COLLADASaxFrameworkLoader/include) include_directories(${COLLADA2GLTF_SOURCE_DIR}/dependencies/OpenCOLLADA/GeneratedSaxParser/include) if (WIN32) include_directories(${COLLADA2GLTF_SOURCE_DIR}/dependencies/misc) endif() if (USE_OPEN3DGC) add_definitions( -DUSE_OPEN3DGC ) include_directories(${COLLADA2GLTF_SOURCE_DIR}/extensions/o3dgc-compression) include_directories(${COLLADA2GLTF_SOURCE_DIR}/dependencies/o3dgc/src) include_directories(${COLLADA2GLTF_SOURCE_DIR}/dependencies/o3dgc/src/o3dgc_common_lib/inc) include_directories(${COLLADA2GLTF_SOURCE_DIR}/dependencies/o3dgc/src/o3dgc_encode_lib/inc) include_directories(${COLLADA2GLTF_SOURCE_DIR}/dependencies/o3dgc/src/o3dgc_decode_lib/inc) endif() find_package(PNG) if (PNG_FOUND) include_directories(${PNG_INCLUDE_DIR}) include_directories(${ZLIB_INCLUDE_DIR}) add_definitions(-DUSE_LIBPNG) else() message(WARNING "libpng or one of its dependencies couldn't be found. Transparency may not be correctly detected.") endif() link_directories(${COLLADA2GLTF_BINARY_DIR}/lib) if (WIN32) add_definitions(-D_CRT_SECURE_NO_WARNINGS) add_definitions(-DWIN32) add_definitions(-EHsc) endif() add_subdirectory(dependencies/OpenCOLLADA) if (USE_OPEN3DGC) add_subdirectory(dependencies/o3dgc/src) endif() set(GLTF_SOURCES COLLADA2GLTFWriter.h COLLADA2GLTFWriter.cpp ...... assetModifiers/GLTFFlipUVModifier.cpp ${CMAKE_CURRENT_BINARY_DIR}/GitSHA1.cpp GitSHA1.h) if (USE_OPEN3DGC) LIST(APPEND GLTF_SOURCES extensions/o3dgc-compression/GLTF-Open3DGC.cpp extensions/o3dgc-compression/GLTF-Open3DGC.h) endif() option(CONVERT_SHARED "CONVERT_SHARED" OFF) if (CONVERT_SHARED) add_library(collada2gltfConvert SHARED ${GLTF_SOURCES}) #Make sure the dll is in the same directory as the executable if (WIN32) set_target_properties(collada2gltfConvert PROPERTIES RUNTIME_OUTPUT_DIRECTORY "bin") endif() else() add_library(collada2gltfConvert STATIC ${GLTF_SOURCES}) add_definitions(-DSTATIC_COLLADA2GLTF) endif() if (PNG_FOUND) LIST(APPEND TARGET_LIBS ${PNG_LIBRARY} ${ZLIB_LIBRARY}) endif() if (USE_OPEN3DGC) LIST(APPEND TARGET_LIBS o3dgc_common_lib o3dgc_enc_lib o3dgc_dec_lib) endif() IF("${CMAKE_SYSTEM}" MATCHES "Linux") LIST(APPEND TARGET_LIBS rt) endif("${CMAKE_SYSTEM}" MATCHES "Linux") target_link_libraries (collada2gltfConvert ${TARGET_LIBS}) set(GLTF_EXE_SOURCES main.cpp ${CMAKE_CURRENT_BINARY_DIR}/GitSHA1.cpp GitSHA1.h) if (WIN32) LIST(APPEND GLTF_EXE_SOURCES ${COLLADA2GLTF_SOURCE_DIR}/dependencies/misc/getopt_long.c ${COLLADA2GLTF_SOURCE_DIR}/dependencies/misc/getopt.c ${COLLADA2GLTF_SOURCE_DIR}/dependencies/misc/getopt.h) endif() add_executable(collada2gltf ${GLTF_EXE_SOURCES}) target_link_libraries (collada2gltf collada2gltfConvert)
|
CMake install can't find library and points to wrong compiler version linux noob here- will appreciate any kind of help. Some background: I'm trying to build a program from source on RHEL 6.5, the dependencies for this program are specifically: GCC 4.7 and above (for C++ 11 support) CMake 2.8.9+ we already had GCC 4.4.7 installed in /usr/libexec/gcc, so our linux person built and installed the new version in /usr/local/libexec/gcc (version 4.9) We didn't have CMake so I installed in from scratch by unizipping the source in /usr/local and following the directions from here: [URL] ./bootstrap make make install so far so good and in the CMakeOutput.log of the CMake it is correctly pointing to the new GCC's path, [COMPILER PATH=/usr/local/libexec/gcc/.../4.9.2/ and I did have to copy a .so file from /usr/lib64 to /usr/local/lib64 in order to successfully bootstrap/make it but I don't think that's the source of my problem. The Problem: Now here's what i'm having trouble with: so when I finally try to build this program using "cmake ." I get the following issues: -- The C compiler identification is GNU 4.4.7 -- Performing Test COMPILER_SUPPORTS_CXX11 - Failed The compiler identification should be version 4.9 and the Test should've succeeded but it did not... -- Could NOT find ZLIB (missing: ZLIB_LIBRARY ZLIB_INCLUDE_DIR) -- Could NOT find PNG (missing: PNG_LIBRARY PNG_PNG INCLUDE_ DIR) Cmake has the FindPNG cmake module file in /usr/local/cmake-3.0.2/Modules but it doesn't seem to know where it is, I tried copying just the FindPNG.cmake file into the local cmake directory of the program and I just kept getting missing module files one after another... Now- I think all these errors could just be a result of something not pointing to something correctly, maybe not setting environment variables for something, missing or wrong CMake commands / variables in the CMakeList file or whatever but I have spent a quite amount of time trying to fix it trying different approaches but just couldn't figure it out...any help will be greatly appreciated!!! Here's the top level CMakeLists.txt of the program I'm trying to build: cmake_minimum_required(VERSION 2.8) project(COLLADA2GLTF) if (NOT WIN32) #[URL] include(CheckCXXCompilerFlag) CHECK_CXX_COMPILER_FLAG("-std=c++11" COMPILER_SUPPORTS_CXX11) CHECK_CXX_COMPILER_FLAG("-std=c++0x" COMPILER_SUPPORTS_CXX0X) if(COMPILER_SUPPORTS_CXX11) set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11") message("-- C++11 Enabled") elseif(COMPILER_SUPPORTS_CXX0X) set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++0x") message("-- C++0x Enabled") else() message(STATUS "The compiler ${CMAKE_CXX_COMPILER} has no C++11 support. Please use a different C++ compiler.") endif() endif() set(USE_OPEN3DGC "ON") set(WITH_IN_SOURCE_BUILD "ON") set(COLLADA2GLTF_BINARY_DIR, COLLADA2GLTF_SOURCE_DIR) set(BUILD_SHARED_LIBS "OFF") list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/cmake") include(GetGitRevisionDescription) get_git_head_revision(GIT_REFSPEC GIT_SHA1) configure_file("${CMAKE_CURRENT_SOURCE_DIR}/GitSHA1.cpp.in" "${CMAKE_CURRENT_BINARY_DIR}/GitSHA1.cpp" @ONLY) set(TARGET_LIBS GeneratedSaxParser_static OpenCOLLADABaseUtils_static UTF_static ftoa_static MathMLSolver_static OpenCOLLADASaxFrameworkLoader_static OpenCOLLADAFramework_static buffer_static) if (NOT WIN32) set(CMAKE_FIND_LIBRARY_SUFFIXES .so .a .dylib) endif() # Lets libxml2 work in a shared library add_definitions(-DLIBXML_STATIC_FOR_DLL) IF(CMAKE_SYSTEM_PROCESSOR MATCHES "x86_64") ADD_DEFINITIONS(-fPIC) ENDIF(CMAKE_SYSTEM_PROCESSOR MATCHES "x86_64") include_directories(${COLLADA2GLTF_SOURCE_DIR}) ....... include_directories(${COLLADA2GLTF_SOURCE_DIR}/dependencies/OpenCOLLADA/COLLADABaseUtils/include) include_directories(${COLLADA2GLTF_SOURCE_DIR}/dependencies/OpenCOLLADA/COLLADASaxFrameworkLoader/include) include_directories(${COLLADA2GLTF_SOURCE_DIR}/dependencies/OpenCOLLADA/GeneratedSaxParser/include) if (WIN32) include_directories(${COLLADA2GLTF_SOURCE_DIR}/dependencies/misc) endif() if (USE_OPEN3DGC) add_definitions( -DUSE_OPEN3DGC ) include_directories(${COLLADA2GLTF_SOURCE_DIR}/extensions/o3dgc-compression) include_directories(${COLLADA2GLTF_SOURCE_DIR}/dependencies/o3dgc/src) include_directories(${COLLADA2GLTF_SOURCE_DIR}/dependencies/o3dgc/src/o3dgc_common_lib/inc) include_directories(${COLLADA2GLTF_SOURCE_DIR}/dependencies/o3dgc/src/o3dgc_encode_lib/inc) include_directories(${COLLADA2GLTF_SOURCE_DIR}/dependencies/o3dgc/src/o3dgc_decode_lib/inc) endif() find_package(PNG) if (PNG_FOUND) include_directories(${PNG_INCLUDE_DIR}) include_directories(${ZLIB_INCLUDE_DIR}) add_definitions(-DUSE_LIBPNG) else() message(WARNING "libpng or one of its dependencies couldn't be found. Transparency may not be correctly detected.") endif() link_directories(${COLLADA2GLTF_BINARY_DIR}/lib) if (WIN32) add_definitions(-D_CRT_SECURE_NO_WARNINGS) add_definitions(-DWIN32) add_definitions(-EHsc) endif() add_subdirectory(dependencies/OpenCOLLADA) if (USE_OPEN3DGC) add_subdirectory(dependencies/o3dgc/src) endif() set(GLTF_SOURCES COLLADA2GLTFWriter.h COLLADA2GLTFWriter.cpp ...... assetModifiers/GLTFFlipUVModifier.cpp ${CMAKE_CURRENT_BINARY_DIR}/GitSHA1.cpp GitSHA1.h) if (USE_OPEN3DGC) LIST(APPEND GLTF_SOURCES extensions/o3dgc-compression/GLTF-Open3DGC.cpp extensions/o3dgc-compression/GLTF-Open3DGC.h) endif() option(CONVERT_SHARED "CONVERT_SHARED" OFF) if (CONVERT_SHARED) add_library(collada2gltfConvert SHARED ${GLTF_SOURCES}) #Make sure the dll is in the same directory as the executable if (WIN32) set_target_properties(collada2gltfConvert PROPERTIES RUNTIME_OUTPUT_DIRECTORY "bin") endif() else() add_library(collada2gltfConvert STATIC ${GLTF_SOURCES}) add_definitions(-DSTATIC_COLLADA2GLTF) endif() if (PNG_FOUND) LIST(APPEND TARGET_LIBS ${PNG_LIBRARY} ${ZLIB_LIBRARY}) endif() if (USE_OPEN3DGC) LIST(APPEND TARGET_LIBS o3dgc_common_lib o3dgc_enc_lib o3dgc_dec_lib) endif() IF("${CMAKE_SYSTEM}" MATCHES "Linux") LIST(APPEND TARGET_LIBS rt) endif("${CMAKE_SYSTEM}" MATCHES "Linux") target_link_libraries (collada2gltfConvert ${TARGET_LIBS}) set(GLTF_EXE_SOURCES main.cpp ${CMAKE_CURRENT_BINARY_DIR}/GitSHA1.cpp GitSHA1.h) if (WIN32) LIST(APPEND GLTF_EXE_SOURCES ${COLLADA2GLTF_SOURCE_DIR}/dependencies/misc/getopt_long.c ${COLLADA2GLTF_SOURCE_DIR}/dependencies/misc/getopt.c ${COLLADA2GLTF_SOURCE_DIR}/dependencies/misc/getopt.h) endif() add_executable(collada2gltf ${GLTF_EXE_SOURCES}) target_link_libraries (collada2gltf collada2gltfConvert)
|
linux, gcc, cmake, redhat
| 2
| 4,611
| 1
|
https://stackoverflow.com/questions/27368046/cmake-install-cant-find-library-and-points-to-wrong-compiler-version
|
27,132,432
|
How to debug %post with rpmbuild
|
I'm building an RPM that needs to run a number of scripts to configure it after it's been installed to complete the installation. I have to run the scripts in the %post section because the configuration is dependent upon the type of host. All this is fairly easy and well, but every time I run into a bug with the %post section, I have to rebuild the entire package which takes about 20 minutes. Is there a way to skip recompiling everything and just build a new package with just the changes from %post?
|
How to debug %post with rpmbuild I'm building an RPM that needs to run a number of scripts to configure it after it's been installed to complete the installation. I have to run the scripts in the %post section because the configuration is dependent upon the type of host. All this is fairly easy and well, but every time I run into a bug with the %post section, I have to rebuild the entire package which takes about 20 minutes. Is there a way to skip recompiling everything and just build a new package with just the changes from %post?
|
redhat, rpmbuild
| 2
| 1,916
| 2
|
https://stackoverflow.com/questions/27132432/how-to-debug-post-with-rpmbuild
|
25,975,529
|
Setup and Debugging of applications run under mod-mono-server4
|
I have a c# application (servicebus) which runs on a private web server. Its basic job is to accept some web requests and create other processes to handle processing the data packages described in the requests. The processing is often ongoing and can take weeks. The servicebus will, occasionally, start consuming great amounts of CPU. That is, it is normally idle, getting 1 or 2 seconds of CPU time per day. When it gets into this strange mode, its consuming 100+% CPU all the time. At this point, a new instance of the servicebus gets spawned by apache if a new request comes in. At this point I will have two copies of the servicebus running (and possibly both handling processing requests -- i don't know). This is the normal process (via ps -aef ) : UID PID PPID C STIME TTY TIME CMD apache 8978 1 0 11:51 ? 00:00:01 /opt/mono/bin/mono /opt/mono/lib/mono/4.0/mod-mono-server4.exe --filename /tmp/mod_mono_server_default --applications /:/opt/ov/vespa/servicebus --nonstop As you can see, the application is a C# program (compiled with VS 2010 for .NET 4) running via mod-mono-server4 under mono. This is a redhat linux enterprise 6.5 system. After running for a while that process 'went crazy' and started consuming lots of CPU and mod-mono-server created a new instance. As you can see, I didn't find it until Monday morning after it had used over 2 days of CPU time. Here is the new ps -aef output : UID PID PPID C STIME TTY TIME CMD apache 8978 1 83 Sep19 ? 2-08:26:25 /opt/mono/bin/mono /opt/mono/lib/mono/4.0/mod-mono-server4.exe --filename /tmp/mod_mono_server_default --applications /:/opt/ov/vespa/servicebus --nonstop apache 32538 1 0 Sep21 ? 00:00:00 /opt/mono/bin/mono /opt/mono/lib/mono/4.0/mod-mono-server4.exe --filename /tmp/mod_mono_server_default --applications /:/opt/ov/vespa/servicebus --nonstop In case you need to see how the application is configured, I have the snippet from the conf.d file for the application : # The user and group need to be set before mod_mono.conf is loaded. User apache Group apache # Service Bus setup Include /etc/httpd/conf/mod_mono.conf Listen 8081 <VirtualHost *:8081> DocumentRoot /opt/ov/vespa/servicebus MonoServerPath default /opt/mono/bin/mod-mono-server4 MonoApplications "/:/opt/ov/vespa/servicebus" <Location "/"> SetHandler mono Allow from all </Location> </VirtualHost> The basic question is... how do I go about debugging this and finding what is wrong with my application? That, however is a bit vague. Normally, I would want to put mono into debug mod and then when it gets into this strange mode I would use kill -ABRT to get a core dump out of it. I assume I could then find a for loop/while loop/etc which is stuck and fix my bug. So, the real question is how do do that? Is that process PID=8978 actually my application being interpreted by mono or is it mono running mod-mono-server4.exe? Or is it mono interpreting mod-mono-server4.exe which in turn is interpreting servicebus? Where in the apache configuration files do I put in the arguments to mono so I can get the --debug I desire. Normally to debug I would need a process like : /opt/mono/bin/mono --debug /opt/test/testapp.exe So, I need to get a --debug into the command line and sort out which PID to actually kill. Then I can use techniques from [URL] to debug the core file. NOTE: I have tried putting in MonoMaxCPUTime and MonoAutoRestartTime directives into the apache conf files to cure this. The problem is, when everything is nominal, they work fine. Once it gets into this bad state(consuming a ton of CPU), the restart fails. Or rather it succeeds in creating a new process but fails to delete the old one (basically the state I am already in). Debugging so far : I see my log files for PID=8979 stops on 9/21 at 03:27. Given that it often generates a 200% or 300% CPU or more that could easily be the time of the 'crash'. Looking in the apache logs I found an unusual event at that time. A dump of the log is below : ... [Sun Sep 21 03:28:01 2014] [notice] SIGHUP received. Attempting to restart mod-mono-server received a shutdown message httpd: Could not reliably determine the server's fully qualified domain name, using localhost.localdomain for ServerName Stacktrace: Native stacktrace: /opt/mono/bin/mono() [0x48cc26] /lib64/libpthread.so.0() [0x32fca0f710] /lib64/libpthread.so.0(pthread_cond_wait+0xcc) [0x32fca0b5bc] /opt/mono/bin/mono() [0x5a6a9c] /opt/mono/bin/mono() [0x5ad4e9] /opt/mono/bin/mono() [0x5116d8] /opt/mono/bin/mono(mono_thread_manage+0x1ad) [0x5161cd] /opt/mono/bin/mono(mono_main+0x1401) [0x46a671] /lib64/libc.so.6(__libc_start_main+0xfd) [0x32fc21ed1d] /opt/mono/bin/mono() [0x4123a9] Debug info from gdb: warning: File "/opt/mono/bin/mono-gdb.py" auto-loading has been declined by your `auto-load safe-path' set to "/usr/share/gdb/auto-load:/usr/lib/debug:/usr/bin/mono-gdb.py". To enable execution of this file add add-auto-load-safe-path /opt/mono/bin/mono-gdb.py line to your configuration file "$HOME/.gdbinit". To completely disable this security protection add set auto-load safe-path / line to your configuration file "$HOME/.gdbinit". For more information about this security protection see the "Auto-loading safe path" section in the GDB manual. E.g., run from the shell: info "(gdb)Auto-loading safe path" [New LWP 9148] [New LWP 9135] [New LWP 9000] [New LWP 8991] [New LWP 8990] [New LWP 8988] [New LWP 8987] [New LWP 8986] [New LWP 8985] [New LWP 8984] [Thread debugging using libthread_db enabled] 0x00000032fca0e75d in read () from /lib64/libpthread.so.0 11 Thread 0x7f0d8bcaf700 (LWP 8984) 0x00000032fca0b5bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 10 Thread 0x7f0d8b2ae700 (LWP 8985) 0x00000032fca0b5bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 9 Thread 0x7f0d8a8ad700 (LWP 8986) 0x00000032fca0b5bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 8 Thread 0x7f0d89eac700 (LWP 8987) 0x00000032fca0b5bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 7 Thread 0x7f0d894ab700 (LWP 8988) 0x00000032fca0b5bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 6 Thread 0x7f0d88aaa700 (LWP 8990) 0x00000032fca0b5bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 5 Thread 0x7f0d880a9700 (LWP 8991) 0x00000032fca0b5bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 4 Thread 0x7f0d8713c700 (LWP 9000) 0x00000032fca0d930 in sem_wait () from /lib64/libpthread.so.0 3 Thread 0x7f0d86157700 (LWP 9135) 0x00000032fc27a983 in malloc () from /lib64/libc.so.6 2 Thread 0x7f0d8568b700 (LWP 9148) 0x00000032fc2792f0 in _int_malloc () from /lib64/libc.so.6 * 1 Thread 0x7f0d8bcb0740 (LWP 8978) 0x00000032fca0e75d in read () from /lib64/libpthread.so.0 Thread 11 (Thread 0x7f0d8bcaf700 (LWP 8984)): #0 0x00000032fca0b5bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #1 0x00000000005d59f7 in GC_wait_marker () #2 0x00000000005dbabd in GC_help_marker () #3 0x00000000005d4778 in GC_mark_thread () #4 0x00000032fca079d1 in start_thread () from /lib64/libpthread.so.0 #5 0x00000032fc2e8b5d in clone () from /lib64/libc.so.6 Thread 10 (Thread 0x7f0d8b2ae700 (LWP 8985)): #0 0x00000032fca0b5bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #1 0x00000000005d59f7 in GC_wait_marker () #2 0x00000000005dbabd in GC_help_marker () #3 0x00000000005d4778 in GC_mark_thread () #4 0x00000032fca079d1 in start_thread () from /lib64/libpthread.so.0 #5 0x00000032fc2e8b5d in clone () from /lib64/libc.so.6 Thread 9 (Thread 0x7f0d8a8ad700 (LWP 8986)): #0 0x00000032fca0b5bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #1 0x00000000005d59f7 in GC_wait_marker () #2 0x00000000005dbabd in GC_help_marker () #3 0x00000000005d4778 in GC_mark_thread () #4 0x00000032fca079d1 in start_thread () from /lib64/libpthread.so.0 #5 0x00000032fc2e8b5d in clone () from /lib64/libc.so.6 Thread 8 (Thread 0x7f0d89eac700 (LWP 8987)): #0 0x00000032fca0b5bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #1 0x00000000005d59f7 in GC_wait_marker () #2 0x00000000005dbabd in GC_help_marker () #3 0x00000000005d4778 in GC_mark_thread () #4 0x00000032fca079d1 in start_thread () from /lib64/libpthread.so.0 #5 0x00000032fc2e8b5d in clone () from /lib64/libc.so.6 Thread 7 (Thread 0x7f0d894ab700 (LWP 8988)): #0 0x00000032fca0b5bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #1 0x00000000005d59f7 in GC_wait_marker () #2 0x00000000005dbabd in GC_help_marker () #3 0x00000000005d4778 in GC_mark_thread () #4 0x00000032fca079d1 in start_thread () from /lib64/libpthread.so.0 #5 0x00000032fc2e8b5d in clone () from /lib64/libc.so.6 Thread 6 (Thread 0x7f0d88aaa700 (LWP 8990)): #0 0x00000032fca0b5bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #1 0x00000000005d59f7 in GC_wait_marker () #2 0x00000000005dbabd in GC_help_marker () #3 0x00000000005d4778 in GC_mark_thread () #4 0x00000032fca079d1 in start_thread () from /lib64/libpthread.so.0 #5 0x00000032fc2e8b5d in clone () from /lib64/libc.so.6 Thread 5 (Thread 0x7f0d880a9700 (LWP 8991)): #0 0x00000032fca0b5bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #1 0x00000000005d59f7 in GC_wait_marker () #2 0x00000000005dbabd in GC_help_marker () #3 0x00000000005d4778 in GC_mark_thread () #4 0x00000032fca079d1 in start_thread () from /lib64/libpthread.so.0 #5 0x00000032fc2e8b5d in clone () from /lib64/libc.so.6 Thread 4 (Thread 0x7f0d8713c700 (LWP 9000)): #0 0x00000032fca0d930 in sem_wait () from /lib64/libpthread.so.0 #1 0x00000000005bea28 in mono_sem_wait () #2 0x000000000053b2bb in finalizer_thread () #3 0x000000000051375b in start_wrapper () #4 0x00000000005a8214 in thread_start_routine () #5 0x00000000005d565a in GC_start_routine () #6 0x00000032fca079d1 in start_thread () from /lib64/libpthread.so.0 #7 0x00000032fc2e8b5d in clone () from /lib64/libc.so.6 Thread 3 (Thread 0x7f0d86157700 (LWP 9135)): #0 0x00000032fc27a983 in malloc () from /lib64/libc.so.6 #1 0x00000000005cd0e6 in monoeg_malloc () #2 0x00000000005cbef1 in monoeg_g_hash_table_insert_replace () #3 0x00000000005acff5 in WaitForMultipleObjectsEx () #4 0x0000000000512694 in ves_icall_System_Threading_WaitHandle_WaitAny_internal () #5 0x00000000417b0270 in ?? () #6 0x00007f0d68000c21 in ?? () #7 0x00007f0d847c4b40 in ?? () #8 0x00007f0d68003e00 in ?? () #9 0x000000004023e890 in ?? () #10 0x00007f0d68003e00 in ?? () #11 0x00007f0d86156940 in ?? () #12 0x00007f0d861568a0 in ?? () #13 0x00007f0d8767d000 in ?? () #14 0xffffffffffffffff in ?? () #15 0x00007f0d86156cc0 in ?? () #16 0x00007f0d847c4b40 in ?? () #17 0x000000004023e268 in ?? () #18 0x0000000000000000 in ?? () Thread 2 (Thread 0x7f0d8568b700 (LWP 9148)): #0 0x00000032fc2792f0 in _int_malloc () from /lib64/libc.so.6 #1 0x00000032fc27a636 in calloc () from /lib64/libc.so.6 #2 0x00000000005cd148 in monoeg_malloc0 () #3 0x00000000005cbb94 in monoeg_g_hash_table_new () #4 0x00000000005acf94 in WaitForMultipleObjectsEx () #5 0x0000000000512694 in ves_icall_System_Threading_WaitHandle_WaitAny_internal () #6 0x00000000417b0270 in ?? () #7 0x00007f0d60000c21 in ?? () #8 0x00007f0d8767d000 in ?? () #9 0xffffffffffffffff in ?? () #10 0x000000004023e890 in ?? () #11 0x00007f0d68003e00 in ?? () #12 0x00007f0d8568a940 in ?? () #13 0x00007f0d8568a8a0 in ?? () #14 0x00007f0d8767d000 in ?? () #15 0xffffffffffffffff in ?? () #16 0x00007f0d8568acc0 in ?? () #17 0x00007f0d864e2990 in ?? () #18 0x000000004023e268 in ?? () #19 0x0000000000000000 in ?? () Thread 1 (Thread 0x7f0d8bcb0740 (LWP 8978)): #0 0x00000032fca0e75d in read () from /lib64/libpthread.so.0 #1 0x000000000048cdb6 in mono_handle_native_sigsegv () #2 <signal handler called> #3 0x00000032fca0b5bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #4 0x00000000005a6a9c in _wapi_handle_timedwait_signal_handle () #5 0x00000000005ad4e9 in WaitForMultipleObjectsEx () #6 0x00000000005116d8 in wait_for_tids () #7 0x00000000005161cd in mono_thread_manage () #8 0x000000000046a671 in mono_main () #9 0x00000032fc21ed1d in __libc_start_main () from /lib64/libc.so.6 #10 0x00000000004123a9 in _start () ================================================================= Got a SIGABRT while executing native code. This usually indicates a fatal error in the mono runtime or one of the native libraries used by your application. ================================================================= Which I think means the process had a seg fault and was trying to dump core or something and stuck trying to do that? Or did it get a sig ABRT while processing a sig SEGV? In either case, that's a dump of mono, right? I did a find of the full file system and no core was generated so I'm not sure how apache/gdb managed this. In case it matters I have RedHat 6.5, mono 2.10.8, gcc 4.4.7, mod-mono-server4.exe 2.10.0.0 Basically this boils down to these questions. How do I get --debug into the mono commands that apache issues? How do I get apache to save the core files it encounters instead of automatically running gdb on them (as I need to issue more complex commands to get at the underlying c# code)? What does the command line for my servicebus mean? That is why/how come the mod-mono-server4 isn't a completely separate process from my servicebus? How does the MMS fit into the mono interpreting servicebus processing chain Or am I totally wrong and will the answers to those questions not help me?
|
Setup and Debugging of applications run under mod-mono-server4 I have a c# application (servicebus) which runs on a private web server. Its basic job is to accept some web requests and create other processes to handle processing the data packages described in the requests. The processing is often ongoing and can take weeks. The servicebus will, occasionally, start consuming great amounts of CPU. That is, it is normally idle, getting 1 or 2 seconds of CPU time per day. When it gets into this strange mode, its consuming 100+% CPU all the time. At this point, a new instance of the servicebus gets spawned by apache if a new request comes in. At this point I will have two copies of the servicebus running (and possibly both handling processing requests -- i don't know). This is the normal process (via ps -aef ) : UID PID PPID C STIME TTY TIME CMD apache 8978 1 0 11:51 ? 00:00:01 /opt/mono/bin/mono /opt/mono/lib/mono/4.0/mod-mono-server4.exe --filename /tmp/mod_mono_server_default --applications /:/opt/ov/vespa/servicebus --nonstop As you can see, the application is a C# program (compiled with VS 2010 for .NET 4) running via mod-mono-server4 under mono. This is a redhat linux enterprise 6.5 system. After running for a while that process 'went crazy' and started consuming lots of CPU and mod-mono-server created a new instance. As you can see, I didn't find it until Monday morning after it had used over 2 days of CPU time. Here is the new ps -aef output : UID PID PPID C STIME TTY TIME CMD apache 8978 1 83 Sep19 ? 2-08:26:25 /opt/mono/bin/mono /opt/mono/lib/mono/4.0/mod-mono-server4.exe --filename /tmp/mod_mono_server_default --applications /:/opt/ov/vespa/servicebus --nonstop apache 32538 1 0 Sep21 ? 00:00:00 /opt/mono/bin/mono /opt/mono/lib/mono/4.0/mod-mono-server4.exe --filename /tmp/mod_mono_server_default --applications /:/opt/ov/vespa/servicebus --nonstop In case you need to see how the application is configured, I have the snippet from the conf.d file for the application : # The user and group need to be set before mod_mono.conf is loaded. User apache Group apache # Service Bus setup Include /etc/httpd/conf/mod_mono.conf Listen 8081 <VirtualHost *:8081> DocumentRoot /opt/ov/vespa/servicebus MonoServerPath default /opt/mono/bin/mod-mono-server4 MonoApplications "/:/opt/ov/vespa/servicebus" <Location "/"> SetHandler mono Allow from all </Location> </VirtualHost> The basic question is... how do I go about debugging this and finding what is wrong with my application? That, however is a bit vague. Normally, I would want to put mono into debug mod and then when it gets into this strange mode I would use kill -ABRT to get a core dump out of it. I assume I could then find a for loop/while loop/etc which is stuck and fix my bug. So, the real question is how do do that? Is that process PID=8978 actually my application being interpreted by mono or is it mono running mod-mono-server4.exe? Or is it mono interpreting mod-mono-server4.exe which in turn is interpreting servicebus? Where in the apache configuration files do I put in the arguments to mono so I can get the --debug I desire. Normally to debug I would need a process like : /opt/mono/bin/mono --debug /opt/test/testapp.exe So, I need to get a --debug into the command line and sort out which PID to actually kill. Then I can use techniques from [URL] to debug the core file. NOTE: I have tried putting in MonoMaxCPUTime and MonoAutoRestartTime directives into the apache conf files to cure this. The problem is, when everything is nominal, they work fine. Once it gets into this bad state(consuming a ton of CPU), the restart fails. Or rather it succeeds in creating a new process but fails to delete the old one (basically the state I am already in). Debugging so far : I see my log files for PID=8979 stops on 9/21 at 03:27. Given that it often generates a 200% or 300% CPU or more that could easily be the time of the 'crash'. Looking in the apache logs I found an unusual event at that time. A dump of the log is below : ... [Sun Sep 21 03:28:01 2014] [notice] SIGHUP received. Attempting to restart mod-mono-server received a shutdown message httpd: Could not reliably determine the server's fully qualified domain name, using localhost.localdomain for ServerName Stacktrace: Native stacktrace: /opt/mono/bin/mono() [0x48cc26] /lib64/libpthread.so.0() [0x32fca0f710] /lib64/libpthread.so.0(pthread_cond_wait+0xcc) [0x32fca0b5bc] /opt/mono/bin/mono() [0x5a6a9c] /opt/mono/bin/mono() [0x5ad4e9] /opt/mono/bin/mono() [0x5116d8] /opt/mono/bin/mono(mono_thread_manage+0x1ad) [0x5161cd] /opt/mono/bin/mono(mono_main+0x1401) [0x46a671] /lib64/libc.so.6(__libc_start_main+0xfd) [0x32fc21ed1d] /opt/mono/bin/mono() [0x4123a9] Debug info from gdb: warning: File "/opt/mono/bin/mono-gdb.py" auto-loading has been declined by your `auto-load safe-path' set to "/usr/share/gdb/auto-load:/usr/lib/debug:/usr/bin/mono-gdb.py". To enable execution of this file add add-auto-load-safe-path /opt/mono/bin/mono-gdb.py line to your configuration file "$HOME/.gdbinit". To completely disable this security protection add set auto-load safe-path / line to your configuration file "$HOME/.gdbinit". For more information about this security protection see the "Auto-loading safe path" section in the GDB manual. E.g., run from the shell: info "(gdb)Auto-loading safe path" [New LWP 9148] [New LWP 9135] [New LWP 9000] [New LWP 8991] [New LWP 8990] [New LWP 8988] [New LWP 8987] [New LWP 8986] [New LWP 8985] [New LWP 8984] [Thread debugging using libthread_db enabled] 0x00000032fca0e75d in read () from /lib64/libpthread.so.0 11 Thread 0x7f0d8bcaf700 (LWP 8984) 0x00000032fca0b5bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 10 Thread 0x7f0d8b2ae700 (LWP 8985) 0x00000032fca0b5bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 9 Thread 0x7f0d8a8ad700 (LWP 8986) 0x00000032fca0b5bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 8 Thread 0x7f0d89eac700 (LWP 8987) 0x00000032fca0b5bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 7 Thread 0x7f0d894ab700 (LWP 8988) 0x00000032fca0b5bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 6 Thread 0x7f0d88aaa700 (LWP 8990) 0x00000032fca0b5bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 5 Thread 0x7f0d880a9700 (LWP 8991) 0x00000032fca0b5bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 4 Thread 0x7f0d8713c700 (LWP 9000) 0x00000032fca0d930 in sem_wait () from /lib64/libpthread.so.0 3 Thread 0x7f0d86157700 (LWP 9135) 0x00000032fc27a983 in malloc () from /lib64/libc.so.6 2 Thread 0x7f0d8568b700 (LWP 9148) 0x00000032fc2792f0 in _int_malloc () from /lib64/libc.so.6 * 1 Thread 0x7f0d8bcb0740 (LWP 8978) 0x00000032fca0e75d in read () from /lib64/libpthread.so.0 Thread 11 (Thread 0x7f0d8bcaf700 (LWP 8984)): #0 0x00000032fca0b5bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #1 0x00000000005d59f7 in GC_wait_marker () #2 0x00000000005dbabd in GC_help_marker () #3 0x00000000005d4778 in GC_mark_thread () #4 0x00000032fca079d1 in start_thread () from /lib64/libpthread.so.0 #5 0x00000032fc2e8b5d in clone () from /lib64/libc.so.6 Thread 10 (Thread 0x7f0d8b2ae700 (LWP 8985)): #0 0x00000032fca0b5bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #1 0x00000000005d59f7 in GC_wait_marker () #2 0x00000000005dbabd in GC_help_marker () #3 0x00000000005d4778 in GC_mark_thread () #4 0x00000032fca079d1 in start_thread () from /lib64/libpthread.so.0 #5 0x00000032fc2e8b5d in clone () from /lib64/libc.so.6 Thread 9 (Thread 0x7f0d8a8ad700 (LWP 8986)): #0 0x00000032fca0b5bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #1 0x00000000005d59f7 in GC_wait_marker () #2 0x00000000005dbabd in GC_help_marker () #3 0x00000000005d4778 in GC_mark_thread () #4 0x00000032fca079d1 in start_thread () from /lib64/libpthread.so.0 #5 0x00000032fc2e8b5d in clone () from /lib64/libc.so.6 Thread 8 (Thread 0x7f0d89eac700 (LWP 8987)): #0 0x00000032fca0b5bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #1 0x00000000005d59f7 in GC_wait_marker () #2 0x00000000005dbabd in GC_help_marker () #3 0x00000000005d4778 in GC_mark_thread () #4 0x00000032fca079d1 in start_thread () from /lib64/libpthread.so.0 #5 0x00000032fc2e8b5d in clone () from /lib64/libc.so.6 Thread 7 (Thread 0x7f0d894ab700 (LWP 8988)): #0 0x00000032fca0b5bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #1 0x00000000005d59f7 in GC_wait_marker () #2 0x00000000005dbabd in GC_help_marker () #3 0x00000000005d4778 in GC_mark_thread () #4 0x00000032fca079d1 in start_thread () from /lib64/libpthread.so.0 #5 0x00000032fc2e8b5d in clone () from /lib64/libc.so.6 Thread 6 (Thread 0x7f0d88aaa700 (LWP 8990)): #0 0x00000032fca0b5bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #1 0x00000000005d59f7 in GC_wait_marker () #2 0x00000000005dbabd in GC_help_marker () #3 0x00000000005d4778 in GC_mark_thread () #4 0x00000032fca079d1 in start_thread () from /lib64/libpthread.so.0 #5 0x00000032fc2e8b5d in clone () from /lib64/libc.so.6 Thread 5 (Thread 0x7f0d880a9700 (LWP 8991)): #0 0x00000032fca0b5bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #1 0x00000000005d59f7 in GC_wait_marker () #2 0x00000000005dbabd in GC_help_marker () #3 0x00000000005d4778 in GC_mark_thread () #4 0x00000032fca079d1 in start_thread () from /lib64/libpthread.so.0 #5 0x00000032fc2e8b5d in clone () from /lib64/libc.so.6 Thread 4 (Thread 0x7f0d8713c700 (LWP 9000)): #0 0x00000032fca0d930 in sem_wait () from /lib64/libpthread.so.0 #1 0x00000000005bea28 in mono_sem_wait () #2 0x000000000053b2bb in finalizer_thread () #3 0x000000000051375b in start_wrapper () #4 0x00000000005a8214 in thread_start_routine () #5 0x00000000005d565a in GC_start_routine () #6 0x00000032fca079d1 in start_thread () from /lib64/libpthread.so.0 #7 0x00000032fc2e8b5d in clone () from /lib64/libc.so.6 Thread 3 (Thread 0x7f0d86157700 (LWP 9135)): #0 0x00000032fc27a983 in malloc () from /lib64/libc.so.6 #1 0x00000000005cd0e6 in monoeg_malloc () #2 0x00000000005cbef1 in monoeg_g_hash_table_insert_replace () #3 0x00000000005acff5 in WaitForMultipleObjectsEx () #4 0x0000000000512694 in ves_icall_System_Threading_WaitHandle_WaitAny_internal () #5 0x00000000417b0270 in ?? () #6 0x00007f0d68000c21 in ?? () #7 0x00007f0d847c4b40 in ?? () #8 0x00007f0d68003e00 in ?? () #9 0x000000004023e890 in ?? () #10 0x00007f0d68003e00 in ?? () #11 0x00007f0d86156940 in ?? () #12 0x00007f0d861568a0 in ?? () #13 0x00007f0d8767d000 in ?? () #14 0xffffffffffffffff in ?? () #15 0x00007f0d86156cc0 in ?? () #16 0x00007f0d847c4b40 in ?? () #17 0x000000004023e268 in ?? () #18 0x0000000000000000 in ?? () Thread 2 (Thread 0x7f0d8568b700 (LWP 9148)): #0 0x00000032fc2792f0 in _int_malloc () from /lib64/libc.so.6 #1 0x00000032fc27a636 in calloc () from /lib64/libc.so.6 #2 0x00000000005cd148 in monoeg_malloc0 () #3 0x00000000005cbb94 in monoeg_g_hash_table_new () #4 0x00000000005acf94 in WaitForMultipleObjectsEx () #5 0x0000000000512694 in ves_icall_System_Threading_WaitHandle_WaitAny_internal () #6 0x00000000417b0270 in ?? () #7 0x00007f0d60000c21 in ?? () #8 0x00007f0d8767d000 in ?? () #9 0xffffffffffffffff in ?? () #10 0x000000004023e890 in ?? () #11 0x00007f0d68003e00 in ?? () #12 0x00007f0d8568a940 in ?? () #13 0x00007f0d8568a8a0 in ?? () #14 0x00007f0d8767d000 in ?? () #15 0xffffffffffffffff in ?? () #16 0x00007f0d8568acc0 in ?? () #17 0x00007f0d864e2990 in ?? () #18 0x000000004023e268 in ?? () #19 0x0000000000000000 in ?? () Thread 1 (Thread 0x7f0d8bcb0740 (LWP 8978)): #0 0x00000032fca0e75d in read () from /lib64/libpthread.so.0 #1 0x000000000048cdb6 in mono_handle_native_sigsegv () #2 <signal handler called> #3 0x00000032fca0b5bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #4 0x00000000005a6a9c in _wapi_handle_timedwait_signal_handle () #5 0x00000000005ad4e9 in WaitForMultipleObjectsEx () #6 0x00000000005116d8 in wait_for_tids () #7 0x00000000005161cd in mono_thread_manage () #8 0x000000000046a671 in mono_main () #9 0x00000032fc21ed1d in __libc_start_main () from /lib64/libc.so.6 #10 0x00000000004123a9 in _start () ================================================================= Got a SIGABRT while executing native code. This usually indicates a fatal error in the mono runtime or one of the native libraries used by your application. ================================================================= Which I think means the process had a seg fault and was trying to dump core or something and stuck trying to do that? Or did it get a sig ABRT while processing a sig SEGV? In either case, that's a dump of mono, right? I did a find of the full file system and no core was generated so I'm not sure how apache/gdb managed this. In case it matters I have RedHat 6.5, mono 2.10.8, gcc 4.4.7, mod-mono-server4.exe 2.10.0.0 Basically this boils down to these questions. How do I get --debug into the mono commands that apache issues? How do I get apache to save the core files it encounters instead of automatically running gdb on them (as I need to issue more complex commands to get at the underlying c# code)? What does the command line for my servicebus mean? That is why/how come the mod-mono-server4 isn't a completely separate process from my servicebus? How does the MMS fit into the mono interpreting servicebus processing chain Or am I totally wrong and will the answers to those questions not help me?
|
linux, apache, mono, redhat, mod-mono
| 2
| 1,161
| 1
|
https://stackoverflow.com/questions/25975529/setup-and-debugging-of-applications-run-under-mod-mono-server4
|
25,497,543
|
RHEL Virtual Machine - sslv3 alert certificate expired + 403 errors on yum update
|
I've got two RHEL VMs. VM1 and VM2. VM2 started life as VM1 and has since been personalized. VM1 has been left as it is for months. I found that when trying a > yum update on VM1, I would get a message about "sslv3 alert certificate expired" followed by every single repo failing with a 403 forbidden error. At the same time, VM2 is able to yum update just fine. Following another post I found, I run subscription-manager list --consumed to get this: VM2 (working) $ subscription-manager list --consumed +-------------------------------------------+ Consumed Subscriptions +-------------------------------------------+ Subscription Name: Red Hat Enterprise Linux Server, Standard (1-2 sockets) (Unlimited guests) Provides: Oracle Java (for RHEL Server) Red Hat Software Collections Beta (for RHEL Server) Red Hat Enterprise Linux Server Red Hat Beta SKU: RH0192098F3 Contract: 10003384 Account: 1259084 Serial: 3566340574775756298 Pool ID: 8a85f9813a1de8b9013a2f9bfb876377 Active: True Quantity Used: 1 Service Level: STANDARD Service Type: L1-L3 Status Details: Starts: 08/10/12 Ends: 08/10/15 System Type: Virtual VM1 (Not working) $ subscription-manager list --consumed +-------------------------------------------+ Consumed Subscriptions +-------------------------------------------+ Subscription Name: Red Hat Enterprise Linux Server, Standard (1-2 sockets) (Unlimited guests) Provides: Red Hat Software Collections Beta (for RHEL Server) Red Hat Enterprise Linux Server Red Hat Beta SKU: RH0192098F3 Contract: 10003384 Account: 1259084 Serial Number: 7042773144247234664 Active: True Quantity Used: 1 Service Level: Standard Service Type: L1-L3 Starts: 08/10/12 Ends: 08/10/15 I have no idea how these subscriptions are even managed. I've certainly not had to do anything to VM2 in the year or so since I've used it. Can anyone tell from that output what the original VM is missing exactly?
|
RHEL Virtual Machine - sslv3 alert certificate expired + 403 errors on yum update I've got two RHEL VMs. VM1 and VM2. VM2 started life as VM1 and has since been personalized. VM1 has been left as it is for months. I found that when trying a > yum update on VM1, I would get a message about "sslv3 alert certificate expired" followed by every single repo failing with a 403 forbidden error. At the same time, VM2 is able to yum update just fine. Following another post I found, I run subscription-manager list --consumed to get this: VM2 (working) $ subscription-manager list --consumed +-------------------------------------------+ Consumed Subscriptions +-------------------------------------------+ Subscription Name: Red Hat Enterprise Linux Server, Standard (1-2 sockets) (Unlimited guests) Provides: Oracle Java (for RHEL Server) Red Hat Software Collections Beta (for RHEL Server) Red Hat Enterprise Linux Server Red Hat Beta SKU: RH0192098F3 Contract: 10003384 Account: 1259084 Serial: 3566340574775756298 Pool ID: 8a85f9813a1de8b9013a2f9bfb876377 Active: True Quantity Used: 1 Service Level: STANDARD Service Type: L1-L3 Status Details: Starts: 08/10/12 Ends: 08/10/15 System Type: Virtual VM1 (Not working) $ subscription-manager list --consumed +-------------------------------------------+ Consumed Subscriptions +-------------------------------------------+ Subscription Name: Red Hat Enterprise Linux Server, Standard (1-2 sockets) (Unlimited guests) Provides: Red Hat Software Collections Beta (for RHEL Server) Red Hat Enterprise Linux Server Red Hat Beta SKU: RH0192098F3 Contract: 10003384 Account: 1259084 Serial Number: 7042773144247234664 Active: True Quantity Used: 1 Service Level: Standard Service Type: L1-L3 Starts: 08/10/12 Ends: 08/10/15 I have no idea how these subscriptions are even managed. I've certainly not had to do anything to VM2 in the year or so since I've used it. Can anyone tell from that output what the original VM is missing exactly?
|
linux, redhat, rhel
| 2
| 2,298
| 1
|
https://stackoverflow.com/questions/25497543/rhel-virtual-machine-sslv3-alert-certificate-expired-403-errors-on-yum-updat
|
25,450,026
|
Global name 'mapnik' not defined when using Tilestache
|
I set up a TileStache server on Redhat, installing Mapnik 2.2 from source. However, Tilestache is giving me the following error: Traceback (most recent call last): File "/usr/lib64/python2.6/site-packages/gevent/pywsgi.py", line 508, in handle_one_response self.run_application() File "/usr/lib64/python2.6/site-packages/gevent/pywsgi.py", line 494, in run_application self.result = self.application(self.environ, self.start_response) File "/usr/lib/python2.6/site-packages/TileStache/__init__.py", line 381, in __call__ status_code, headers, content = requestHandler2(self.config, path_info, query_string, script_name) File "/usr/lib/python2.6/site-packages/TileStache/__init__.py", line 254, in requestHandler2 status_code, headers, content = layer.getTileResponse(coord, extension) File "/usr/lib/python2.6/site-packages/TileStache/Core.py", line 414, in getTileResponse tile = self.render(coord, format) File "/usr/lib/python2.6/site-packages/TileStache/Core.py", line 500, in render tile = provider.renderTile(width, height, srs, coord) File "/usr/lib/python2.6/site-packages/TileStache/Goodies/Providers/MapnikGrid.py", line 72, in renderTile self.mapnik = mapnik.Map(0, 0) NameError: global name 'mapnik' is not defined Relevant Information: Other posts have suggested changing 'import mapnik' to 'import mapnik2 as mapnik'. But I got the same error message. In other posts originates from TileStace/Mapnik.py, but mine comes from TileStache/Goodies/Providers/MapnikGrid.py. Related Post: Gunicorn fails when using WSGI Question: Does anyone know what could be causing this? Thanks in advance!
|
Global name 'mapnik' not defined when using Tilestache I set up a TileStache server on Redhat, installing Mapnik 2.2 from source. However, Tilestache is giving me the following error: Traceback (most recent call last): File "/usr/lib64/python2.6/site-packages/gevent/pywsgi.py", line 508, in handle_one_response self.run_application() File "/usr/lib64/python2.6/site-packages/gevent/pywsgi.py", line 494, in run_application self.result = self.application(self.environ, self.start_response) File "/usr/lib/python2.6/site-packages/TileStache/__init__.py", line 381, in __call__ status_code, headers, content = requestHandler2(self.config, path_info, query_string, script_name) File "/usr/lib/python2.6/site-packages/TileStache/__init__.py", line 254, in requestHandler2 status_code, headers, content = layer.getTileResponse(coord, extension) File "/usr/lib/python2.6/site-packages/TileStache/Core.py", line 414, in getTileResponse tile = self.render(coord, format) File "/usr/lib/python2.6/site-packages/TileStache/Core.py", line 500, in render tile = provider.renderTile(width, height, srs, coord) File "/usr/lib/python2.6/site-packages/TileStache/Goodies/Providers/MapnikGrid.py", line 72, in renderTile self.mapnik = mapnik.Map(0, 0) NameError: global name 'mapnik' is not defined Relevant Information: Other posts have suggested changing 'import mapnik' to 'import mapnik2 as mapnik'. But I got the same error message. In other posts originates from TileStace/Mapnik.py, but mine comes from TileStache/Goodies/Providers/MapnikGrid.py. Related Post: Gunicorn fails when using WSGI Question: Does anyone know what could be causing this? Thanks in advance!
|
python, redhat, mapnik, tilestache
| 2
| 1,548
| 2
|
https://stackoverflow.com/questions/25450026/global-name-mapnik-not-defined-when-using-tilestache
|
25,385,586
|
netstat returns strange names where I expect to see a port number (such as mcreport (8003 mulberry reporting service) and pago-services2(30002))
|
I'm doing some troubleshooting on a multi-server redhat 6 system that should be communicating over ports 8003 and 30002. However, when I run netstat -ap I see 'mcreport' and 'pago-services2' where I expect to see 8003 and 30002, respectively. Below is an example tcp 0 0 localhost:55821 localhost:mcreport ESTABLISHED 5501/Program1 tcp 0 0 localhost:55816 localhost:mcreport ESTABLISHED 5673/Program2 tcp 0 0 localhost:mcreport localhost:55782 ESTABLISHED 4938/Program3 tcp 0 0 localhost:55796 localhost:mcreport ESTABLISHED 5651/Program4 udp 0 0 localhost:40956 localhost:pago-services2 ESTABLISHED 5501/Program5 udp 0 0 localhost:60156 localhost:pago-services2 ESTABLISHED 5673/Program6 udp 0 0 localhost:56702 localhost:pago-services2 ESTABLISHED 5360/Program7 udp 0 0 localhost:34691 localhost:pago-services2 ESTABLISHED 4935/Program8 udp 0 0 localhost:50566 localhost:pago-services2 ESTABLISHED 5115/Program9 I've tried to figure out what these services are, but all I've been able to determine is that mcreport is "Mulberry Connect Reporting Service" and that the services in question commonly use the ports that they're hogging. Has anyone run into these before? Do you know where I could find more information about them?
|
netstat returns strange names where I expect to see a port number (such as mcreport (8003 mulberry reporting service) and pago-services2(30002)) I'm doing some troubleshooting on a multi-server redhat 6 system that should be communicating over ports 8003 and 30002. However, when I run netstat -ap I see 'mcreport' and 'pago-services2' where I expect to see 8003 and 30002, respectively. Below is an example tcp 0 0 localhost:55821 localhost:mcreport ESTABLISHED 5501/Program1 tcp 0 0 localhost:55816 localhost:mcreport ESTABLISHED 5673/Program2 tcp 0 0 localhost:mcreport localhost:55782 ESTABLISHED 4938/Program3 tcp 0 0 localhost:55796 localhost:mcreport ESTABLISHED 5651/Program4 udp 0 0 localhost:40956 localhost:pago-services2 ESTABLISHED 5501/Program5 udp 0 0 localhost:60156 localhost:pago-services2 ESTABLISHED 5673/Program6 udp 0 0 localhost:56702 localhost:pago-services2 ESTABLISHED 5360/Program7 udp 0 0 localhost:34691 localhost:pago-services2 ESTABLISHED 4935/Program8 udp 0 0 localhost:50566 localhost:pago-services2 ESTABLISHED 5115/Program9 I've tried to figure out what these services are, but all I've been able to determine is that mcreport is "Mulberry Connect Reporting Service" and that the services in question commonly use the ports that they're hogging. Has anyone run into these before? Do you know where I could find more information about them?
|
linux, service, port, redhat, netstat
| 2
| 2,835
| 1
|
https://stackoverflow.com/questions/25385586/netstat-returns-strange-names-where-i-expect-to-see-a-port-number-such-as-mcrep
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.