
SUSE-SU-2018:1417-1 -- SLES ceph

ID: oval:org.secpod.oval:def:89002231
Date: (C) 2021-02-26   (M) 2022-10-10
Class: PATCH
Family: unix

This update for ceph fixes the following issues:

Security issues fixed:
- CVE-2018-7262: rgw: malformed HTTP headers can crash rgw.
- CVE-2017-16818: User-reachable asserts allow for DoS.

Bug fixes:
- bsc#1061461: OSDs keep generating coredumps after adding a new OSD node to the cluster.
- bsc#1079076: RGW openssl fixes.
- bsc#1067088: Upgrade to SES5 restarted all nodes; the majority of OSDs abort during start.
- bsc#1056125: Some OSDs are down when doing performance testing on an rbd image in an EC pool.
- bsc#1087269: allow_ec_overwrites option not in the command options list.
- bsc#1051598: Fix mountpoint check for systemctl enable --runtime.
- bsc#1070357: Zabbix mgr module doesn't recover from HEALTH_ERR.
- bsc#1066502: After upgrading a single OSD from SES 4 to SES 5, the OSDs do not rejoin the cluster.
- bsc#1067119: Crushtool decompile creates wrong device entries for non-existing / deleted OSDs.
- bsc#1060904: Misleading log level during keystone authentication.
- bsc#1056967: Monitors go down after pool creation on a cluster with 120 OSDs.
- bsc#1067705: Issues with RGW Multi-Site Federation between SES5 and RH Ceph Storage 2.
- bsc#1059458: Stopping / restarting the rados gateway as part of DeepSea stage.4 executions causes a core dump of radosgw.
- bsc#1087493: Commvault cannot reconnect to storage after restarting haproxy.
- bsc#1066182: Container synchronization between two Ceph clusters failed.
- bsc#1081600: Crash in civetweb/RGW.
- bsc#1054061: NFS-GANESHA service failing while trying to list a mountpoint on the client.
- bsc#1074301: OSDs keep aborting: SnapMapper failed asserts.
- bsc#1086340: XFS metadata corruption on an rbd-nbd mapped image with the journaling feature enabled.
- bsc#1080788: fsid mismatch when creating additional OSDs.
- bsc#1071386: Metadata spill onto block.slow.
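Because this is a PATCH-class OVAL definition, the check it encodes essentially amounts to asking whether the installed ceph package on the target system is older than the version shipped with SUSE-SU-2018:1417-1. The sketch below illustrates that idea only: the FIXED_VERSION placeholder and the rpm query are assumptions for illustration and are not taken from the advisory, and a real evaluator must use proper RPM version ordering rather than plain string comparison.

    import subprocess

    # Placeholder only; the real fixed version is defined by the
    # SUSE-SU-2018:1417-1 advisory / OVAL definition, not by this sketch.
    FIXED_VERSION = "FIXED-VERSION-FROM-ADVISORY"

    def installed_ceph_version():
        """Return the installed ceph VERSION-RELEASE, or None if ceph is absent."""
        result = subprocess.run(
            ["rpm", "-q", "--qf", "%{VERSION}-%{RELEASE}", "ceph"],
            capture_output=True, text=True,
        )
        if result.returncode != 0:
            return None  # ceph package not installed
        return result.stdout.strip()

    def looks_vulnerable(installed, fixed=FIXED_VERSION):
        """Naive string comparison; real checks use full RPM version ordering."""
        return installed is not None and installed < fixed

    if __name__ == "__main__":
        version = installed_ceph_version()
        print("installed ceph:", version)
        print("older than patched version:", looks_vulnerable(version))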

Platform: SUSE Linux Enterprise Server 12 SP3
Product: ceph
Reference: SUSE-SU-2018:1417-1
CVE:
CVE-2017-16818
CVE-2018-7262
CPE:
cpe:/o:suse:suse_linux_enterprise_server:12:sp3
cpe:/a:ceph:ceph
