This site records day-to-day work notes: virtualization and cloud computing, systems operations, database administration (DBA), and networking and security.
ESXi vCenter: Failed to power on VM "Admission check failed for memory resource"
Posted by Mohammed Raffic on July 23, 2012 in Troubleshooting | 2977 Views

This error can appear when you try to power on a virtual machine on an ESXi host configured with a small amount of RAM, which is typical of lab environments. I got the error below when trying to power on a virtual machine yesterday:

"Failed to power on VM: Could not power on VM: Admission check failed for memory resource. See the VMware ESX Resource Management Guide for information on resource management settings. Group vm.80740: Cannot admit VM: Memory admission check failed. Requested reservation: 25046 pages"

This looks like a memory resource allocation problem. My first thought was to check the resource pool settings, but I am not using resource pools in my lab environment. After that I verified the memory information of the ESXi host under the Configuration tab -> Memory. Please refer to the...
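As a quick sanity check, the requested reservation from the error message can be converted into megabytes, assuming the standard 4 KiB page size (the page size is my assumption, not stated in the excerpt):

```shell
# Convert the requested reservation (25046 pages) to KiB and MiB,
# assuming 4 KiB pages. This tells us roughly how much host memory
# the admission check was asking for.
pages=25046
kib=$((pages * 4))                                   # 100184 KiB
echo "$kib KiB"
awk -v k="$kib" 'BEGIN { printf "%.1f MiB\n", k/1024 }'
```

So the host needed roughly 98 MiB of unreserved memory to admit the VM; comparing that against the free memory shown under Configuration -> Memory shows immediately whether the host can satisfy it.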
Using ionice to limit Xen virtual machine disk IO

As a VPS provider, we need to make sure every VPS gets a fair share of the host's resources, so that one VPS cannot drag down the others because of an infinite loop, a hang, abuse, and so on. When that happens, how do we temporarily throttle that VPS's disk IO? One option is to limit IO indirectly and imprecisely by adjusting each VM's CPU weight. The usual way to limit resources (CPU, memory, IO, etc.) on Linux is cgroups, but the ionice approach introduced here is simpler. First, find out which VM (VPS) is doing heavy IO (say vps0001). Then use xm list to look up its ID, and match that ID against blkback (blkback.24) to find which disks the VM is attached to through Xen's blkback driver (blkback.24.xvda and blkback.24.xvdb), along with the corresponding process IDs (25089 and 25090):

# xm list vps0001
Name      ID  Mem(MiB)  VCPUs  State   Time(s)
vps0001   24  1024      2      -b----  70030.7
# ps aux | grep blkback.24
root   7434  0.0  0.1  61172  768  pts/16  D+  02:48  0:00  grep blkback.24
root  25089  0.0  0.0      0    0  ?       ...
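With the blkback PIDs identified, the throttling step the excerpt leads up to can be sketched as follows (the PIDs are the illustrative ones from the listing above; the idle class is the gentlest choice and assumes the host uses the CFQ IO scheduler):

```shell
# Sketch: put the offending VM's blkback kernel threads into the
# "idle" IO scheduling class (-c 3), so their disk IO only proceeds
# when the disk would otherwise be idle. Run as root on the Dom0.
for pid in 25089 25090; do
    ionice -c 3 -p "$pid"
done

# Verify the change took effect:
ionice -p 25089
```

To restore normal behavior later, the same threads can be moved back to the best-effort class with `ionice -c 2 -n 4 -p <pid>`.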
Case share: Dell R710, Windows Server 2012 with the Hyper-V role installed, shows Event ID 60.

Issue: On a computer running Windows Server 2012 or Windows 8 with the Hyper-V role installed, or after upgrading from a previous version of Windows with the Hyper-V role already installed, you may receive the following error message while starting a virtual machine:

Virtual machine 'VM_Name' could not be started because the hypervisor is not running (Virtual machine ID ). The following actions may help you resolve the problem: Verify that the processor of the physical computer has a supported version of hardware-assisted virtualization. Verify that hardware-assisted virtualization and hardware-assisted data execution protection are enabled in the BIOS of the physical computer. (If you edit the BIOS to enable either setting, you must turn off the power to the physical computer and then turn it back on. Resetting the physical computer is not sufficient.) If you have made changes to the Boot Configuration ...
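The truncated final step concerns the Boot Configuration Data store. A check worth doing here (my addition, not part of the original excerpt) is whether the hypervisor is configured to launch at boot at all:

```shell
# Run in an elevated Command Prompt on the Hyper-V host.
# Look for the "hypervisorlaunchtype" entry in the output:
bcdedit /enum {current}

# If it shows "Off", re-enable automatic hypervisor launch,
# then reboot the host:
bcdedit /set {current} hypervisorlaunchtype auto
```

If `hypervisorlaunchtype` was turned off (some debugging tools and manual tweaks do this), VMs fail with exactly the "hypervisor is not running" message even though BIOS virtualization settings are correct.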
Installing the Ceph distributed storage system on CentOS 7.1

There are plenty of Ceph introductions online, so I won't repeat them here. Sage Weil developed this impressive distributed storage system during his PhD. It originally aimed to be a high-performance distributed file system, but when the cloud computing wave arrived, Ceph shifted its focus to distributed block storage (Block Storage) and distributed object storage (Object Storage); the distributed file system CephFS is still in beta. Ceph is now the hottest open-source storage solution for cloud computing and VM deployments; reportedly about 20% of OpenStack deployments use Ceph block storage. Ceph provides three kinds of storage: object storage, block storage, and a file system. The diagram below gives a good picture of the Ceph storage cluster architecture. We mainly care about block storage, and over the second half of the year we will gradually migrate our VM backend storage from SAN to Ceph. Although it is still at version 0.94, Ceph is fairly mature by now: a colleague has been running Ceph in production for over two years; he ran into many problems but solved them all, which suggests Ceph is quite stable and reliable.

Preparing the hardware environment: six machines are prepared, of which three physical servers serve as monitor nodes (mon: ceph-mon1, ceph-mon2, ceph-mon3), two physical servers as storage nodes (osd: ceph-osd1, ceph-osd2), and one virtual machine as the admin node (adm: ceph-adm). Ceph requires an odd number of monitor nodes, and at least three (if you're just playing around, one is also...
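With the node layout above in place, a 0.94-era bootstrap is typically driven from the admin node with ceph-deploy. A minimal sketch, assuming passwordless SSH from ceph-adm to every node, the ceph-deploy package installed, and /dev/sdb as the data disk on the OSD hosts (the disk name is my assumption):

```shell
# Run on ceph-adm. Hostnames are taken from the layout above.
ceph-deploy new ceph-mon1 ceph-mon2 ceph-mon3    # write the initial ceph.conf
ceph-deploy install ceph-mon1 ceph-mon2 ceph-mon3 ceph-osd1 ceph-osd2
ceph-deploy mon create-initial                   # form the monitor quorum
ceph-deploy osd create ceph-osd1:sdb ceph-osd2:sdb   # illustrative disks
ceph-deploy admin ceph-adm                       # push keys for running "ceph" here
ceph -s                                          # check cluster health
```

With only two OSD hosts and the default replica count of 3, `ceph -s` will report a degraded state until either a third OSD host is added or the pool size is lowered.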
Understand QoS in Open vSwitch danny http://dannykim.me/danny/57771 2014.02.11 14:34:58 (*.193.128.184) 592

>>> Purpose This document helps us understand how QoS works in Open vSwitch. It can be used as a warm-up before a good tutorial (URL shown below under References) on creating a Floodlight module to handle QoS.

>>> References - How to create a QoS Service Module - man ovs-vsctl (Quality of Service (QoS) section)

>>> Outline QoS is a service that differentiates packet delivery: some packets are delivered at high speed, while others are delivered at lower speed. As I understand it, the essence of QoS in Open vSwitch is to create queues with different speeds and put packets into different queues according to a QoS policy. This document does not cover all of that; it is just a taste of how to configure Open vSwitch for QoS. In detail, we just create a low-speed queue and have QoS use that queue.
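The "one low-speed queue" setup described above can be sketched with ovs-vsctl's record-creation syntax (the port name, bridge name, and rates are illustrative, not from the original post):

```shell
# Attach a linux-htb QoS record with one slow queue to port eth1.
# max-rate is in bits/s: 100 Mbit/s on the port, 1 Mbit/s for queue 0.
ovs-vsctl set port eth1 qos=@newqos -- \
  --id=@newqos create qos type=linux-htb \
      other-config:max-rate=100000000 queues:0=@slowq -- \
  --id=@slowq create queue other-config:max-rate=1000000

# Steer selected traffic into queue 0 with an OpenFlow rule, e.g.:
ovs-ofctl add-flow br0 "in_port=2,actions=set_queue:0,normal"

# To undo, clear the port's qos column and destroy the records:
#   ovs-vsctl clear port eth1 qos
#   ovs-vsctl -- --all destroy qos -- --all destroy queue
```

The `--id=@name create` clauses create the QoS and Queue rows in the same transaction that references them, which is why the whole setup fits in a single ovs-vsctl invocation.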
Connecting to the local VNC console of XenServer VMs without XenCenter

Below is the script we use in production; all you need locally is a VNC client.

Get the list of local VMs:

# ./getvnc
Usage: ./getvnc vm_name
VMs found:
idc2-server1
idc2-server2

Get the VNC port number of a given VM:

# ./getvnc idc2-server2
run this on laptop and connect via vnc to localhost:5903
--> ssh -L 5903:localhost:5903 root@10.100.1.30

Next, run "ssh -L 5903:localhost:5903 root@10.100.1.30" on your local machine, then point your VNC client at "localhost:5903".

The script:

#!/bin/bash
vm=$1
if [ -z ${vm} ]; then
    echo "Usage: $0 vm_name"
    echo "VMs found:"
    xl list-vm | awk '{print $3}' | grep -vw name
    exit 1
fi
xe vm-list params=name-label name-label=${vm} | grep ${vm} > /dev/null
if [ $? -gt 0 ]; then
    echo "Error: invalid VM name"
    exit 1
fi
h...
Xen4 Libvirt for CentOS 6

Contents
- Xen4 Libvirt for CentOS 6
- Libvirt and Xen Basics
- Prerequisites
- Installing libvirt (on the Dom0 machine)
- Remote LibVirt Access
- Controlling libvirt Access via PolicyKit
- Configuration of libvirt for Group Access
- Connection to a Remote Dom0
- Connecting to the Dom0 Machine via virt-manager from a Remote Machine
- Connecting to the Dom0 Machine via virsh from a Remote Machine
- Creating DomU Virtual Machines
- Using virt-manager to install a DomU
- Using virt-install to install a DomU via SSH
- Controlling DomU Virtual Machines
- Example Bridge Setup

Libvirt and Xen Basics
The host OS install in Xen is known as Dom0. Virtual machines (VMs) running via Xen are known as DomUs. By default, libvirt creates a Network Address Translated (NATed) 192.168.122.0/24 network behind the default network card (normally eth0). The DomU VMs running on this NATed network can connect to each other and connect outbound from the Dom0 network, but your only connection to them is via libvirt (or xen) on th...
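The remote-access sections listed in the contents boil down to connecting with a xen+ssh libvirt URI. A minimal sketch, with a placeholder hostname (not from the original article):

```shell
# List all DomUs on a remote Dom0 over SSH. Assumes libvirtd is
# running on the Dom0 and the connecting user has access rights.
virsh -c xen+ssh://root@dom0.example.com/ list --all

# Open an interactive virsh shell against the same remote Dom0:
virsh -c xen+ssh://root@dom0.example.com/
```

The same URI scheme works in virt-manager's "Add Connection" dialog by choosing the Xen hypervisor with an SSH connection to the Dom0 host.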
Installing Xen 4.5 from source on CentOS 7.0

CentOS 7.0 was officially released last week, and the Xen4 CentOS project has not yet caught up to support it, so for now the only way to run Xen on CentOS 7.0 is to compile it from source. It seems CentOS did not ship a Minimal install image this time, so the build steps below were done on the LiveCD install image (CentOS-7.0-1406-x86_64-livecd.iso).

Installing the required packages

The first thing after installing CentOS 7.0 is to start SSH so the machine can be reached from another box. The new CentOS introduces the controversial systemd process manager, replacing the traditional Unix init system; services are started with the systemctl command:

# systemctl start sshd
# systemctl enable sshd

Update the system and install the compilers, tools, and libraries needed to build Xen:

# yum update
# yum groupinstall "Development Tools"
# yum install -y gcc gcc-c++ git patch texinfo
# yum install -y python-devel acpica-tools libuuid-devel ncurses-devel glib2 glib2-devel libaio-devel openssl-devel yajl-devel glibc-devel glibc-devel.i686 pixman-devel
# wget http://mirror.centos.org/centos/6/os/x86_64/Packages/dev86-0.16.17-15.1.el6.x86_64.rpm
# rpm -ivh dev86-0.16.17-15.1...
43 pages total; currently page 10.