This site records day-to-day work notes on virtualization and cloud computing, system operations, database administration (DBA), and networking and security.
 Patching ESXi 5 from the console command line  Upload the patch bundle to the ESXi server, either over SSH or via "Browse Datastore", then run:
esxcli software vib update -d="/vmfs/volumes/system/ESXI500-201111001.zip"
Note: esxcli software vib install does not preserve custom VIB packages, while esxcli software vib update does, so update is the recommended command.
 VMware Workstation 8: enabling nested ESXi support  Add the following to the .vmx file of the ESXi virtual machine:
virtualHW.version = "9"
monitor.virtual_mmu = "hardware"
monitor.virtual_exec = "hardware"
vhv.enable = "TRUE"
monitor_control.restrict_backdoor = "true"
 Running ESX, KVM and Xen as guests on ESXi 5.0  Add the parameter vhv.allow = "TRUE" to /etc/vmware/config on the host and reboot. Then add the following to the .vmx file of the nested-hypervisor virtual machine:
virtualHW.version = "9"
monitor.virtual_mmu = "hardware"
monitor.virtual_exec = "hardware"
vhv.enable = "TRUE"
monitor_control.restrict_backdoor = "true"
Finally, upgrade the virtual machine's hardware version.
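For reference, the host-side switch and the per-VM settings described above can be collected into a single annotated fragment. This is a sketch based on the settings in the text; verify the exact keys against your ESXi build:

```ini
; Host side: append to /etc/vmware/config on the ESXi 5.0 host, then reboot
vhv.allow = "TRUE"

; Guest side: the nested hypervisor's .vmx file
virtualHW.version = "9"                       ; virtual hardware version 9
monitor.virtual_mmu = "hardware"              ; hardware MMU virtualization (EPT/NPT)
monitor.virtual_exec = "hardware"             ; hardware execution mode (VT-x/AMD-V)
vhv.enable = "TRUE"                           ; expose VT-x/AMD-V to the guest
monitor_control.restrict_backdoor = "true"    ; needed when the guest is itself a hypervisor
```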
 RemoteFX vGPU installation and configuration guide for Windows Server 2012: see the attachment.
 iSCSI multipathing for KVM high availability  In this article we use two Linux hosts: one runs the iSCSI target, and the other runs the iSCSI initiator together with a KVM virtual machine. Multiple network connections between the two hosts are used to improve the availability of the KVM guest.
Configuring iSCSI (target and initiator)
iSCSI (Internet Small Computer System Interface) is a TCP/IP-based protocol. It carries SCSI commands and data over an IP network, making it possible to build and manage connections between networked storage devices, hosts and clients, and to create storage area networks (SANs). The iSCSI protocol defines the rules and methods for sending and receiving block-level storage data over a TCP/IP network. The sender encapsulates SCSI commands and data into TCP/IP packets and forwards them over the network; the receiver unpacks them, executes the SCSI commands, and then encapsulates the returned SCSI status and data into TCP/IP packets for the trip back. To the user, accessing the remote storage device is as simple as accessing a local SCSI device. The flow is shown in the figure:
Figure 0. iSCSI flow
An iSCSI target is the server-side component of an iSCSI network, usually a storage device that holds the required data and answers requests from iSCSI initiators. ...
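When the initiator logs in to the same LUN once per network path, the resulting duplicate block devices can be coalesced by dm-multipath. A minimal /etc/multipath.conf sketch (the WWID and alias below are placeholders, not values from the article):

```conf
defaults {
    user_friendly_names yes
    path_grouping_policy multibus   # spread I/O across all available paths
}
multipaths {
    multipath {
        wwid  360000000000000000e00000000010001   # placeholder; take the real WWID from `multipath -ll`
        alias kvmstore                            # device appears as /dev/mapper/kvmstore
    }
}
```

The KVM guest's disk can then be pointed at /dev/mapper/kvmstore, so the failure of either network path is transparent to the virtual machine.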
 An introduction to BIOS use in QEMU  A BIOS provides firmware information for the mainboard or graphics card, along with basic input/output services. QEMU relies on several open-source projects for this, such as Bochs and OpenBIOS. Some of the BIOS and firmware images QEMU uses are kept as binary files in the pc-bios directory of the source tree; others are tracked as git submodules in the QEMU repository, and their binaries are built alongside QEMU itself. QEMU supports several boot methods, such as EFI and PXE, each of which requires support from a specific BIOS; the corresponding images are all found in this directory.
Listing 1. BIOS files in the QEMU source tree
$ ls pc-bios/
acpi-dsdt.aml     efi-rtl8139.rom      openbios-ppc      pxe-e1000.rom     qemu_logo_no_text.svg  slof.bin
bamboo.dtb        efi-virtio.rom       openbios-sparc32  pxe-eepro100.rom  qemu-nsis.bmp          spapr-rtas
bamboo.dts        keymaps              openbios-sparc64  pxe-ne2k_pci.rom  qemu-nsis.ico          spapr-rtas.bin
bios.bin          kvmvapic.bin         optionrom         pxe-pcnet.rom     vgabios.bin
efi-e1000.rom     linuxboot.bin        palcode-clipper   pxe-rtl8139.rom   s390-ccw
vgabios-cirrus.bin  efi-eepro100.rom   petalogix-ml605.dtb  pxe-virtio.rom  s390-ccw.img  ...
 Quick Start Guide  Contents
1 Introduction
 1.1 Installation requirements
  1.1.1 oVirt Engine requirements
  1.1.2 Hosts/nodes (oVirt Node, Fedora hosts, CentOS hosts)
  1.1.3 Storage and networking
  1.1.4 Supported guest operating systems
2 Installing oVirt
 2.1 Installing oVirt Engine (Fedora / Red Hat Enterprise Linux / CentOS)
 2.2 Installing hosts/nodes
  2.2.1 Installing oVirt Node
  2.2.2 Installing on a Fedora or CentOS host
3 Accessing oVirt Engine
 3.1 Logging in to the administration portal
4 Configuring oVirt
 4.1 Configuring a data center
 4.2 Configuring a cluster
 4.3 Configuring networking
 4.4 Configuring hosts
  4.4.1 Configuring an oVirt Node host
  4.4.2 Adding a Fedora or CentOS host
 4.5 Configuring storage
  4.5.1 Creating an NFS data domain
  4.5.2 Creating an iSCSI data domain
  4.5.3 Creating an FCP data domain
  4.5.4 Attaching an ISO domain
  4.5.5 Uploading ISO images
5 Creating virtual machines
 5.1 Creating a Fedora virtual machine
  5.1.1 Creating a Windows 7 virtual machine
   5.1.1.1 Installing with a VirtIO disk
   5.1.1.2 Installing with an IDE disk
  5.1.2 Post-install additions
   5.1.2.1 Drivers
 5.2 Using templates
  5.2.1 Creating a Fedora-based template
  5.2.2 Cloning a Red Hat Enterprise Linux virtual machine
  5.2.3 Creating a Windows-based template
 5.2...
The oVirt Project is pleased to announce the availability of its sixth formal release, oVirt 3.5. oVirt is an open source alternative to VMware vSphere, and provides an excellent KVM management interface for multi-node virtualization. To find out more about features which were added in previous oVirt releases, check out the oVirt 3.4, 3.3, 3.2 and 3.1 release notes. For a general overview of oVirt, read the oVirt 3.0 feature guide and the About oVirt page.
Contents
1 oVirt 3.5 Release Notes
 1.1 Live Merge
 1.2 Import Storage Domain
 1.3 Advanced Foreman Integration
 1.4 Enhanced Authentication, Authorization and Accounting Support
 1.5 New PatternFly Interface
 1.6 Advanced Scheduling with Optaplanner
 1.7 Other Enhancements
  1.7.1 Infra
  1.7.2 Networking
  1.7.3 Integration
 1.8 Install / Upgrade from Previous Versions
  1.8.1 Fedora / CentOS / RHEL
 1.9 oVirt Live
 1.10 oVirt Node
 1.11 Known Issues
 1.12 ...
Example using SPICE and QXL for an improved graphics experience in the guest  This example is based on qemu-kvm (0.15.0) as installed in Fedora 15. We will first show how to do this manually, and then how to do it using the virt-manager tool. For extensive details on SPICE, visit http://spice-space.org
Manually, using the qemu-kvm command line directly
The simplest part is adding the qxl graphics device to the guest, by adding -vga qxl:
# /usr/bin/qemu-kvm -m 1024 -name f15 -drive file=/images/f15.img,if=virtio -vga qxl
You'll see a normal SDL window pop up, and if you poke inside the guest you'll find that X is using the qxl device to display:
$ lspci
...
00:02.0 VGA compatible controller: Red Hat, Inc. Device 0100 (rev 03)
$ grep QXL /var/log/Xorg.0.log
[ 15.878] (II) qxl: Driver for QXL virtual graphics: QXL 1
However, this is not enough to use SPICE. We need to enable the spice server in qemu-kvm. We also need a client to view the guest. So first be sure the client is installed...
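When the same guest is managed through libvirt/virt-manager instead of a raw qemu-kvm command line, the equivalent setup is a <graphics> and <video> pair in the domain XML. A sketch under that assumption (the port number and VRAM size are example values, not from the original post):

```xml
<!-- SPICE server for the guest; listens on localhost, port 5930 (example) -->
<graphics type='spice' port='5930' autoport='no' listen='127.0.0.1'/>
<!-- QXL display adapter, matching -vga qxl on the command line -->
<video>
  <model type='qxl' vram='65536' heads='1'/>
</video>
```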
We are pleased to announce the first release of the KVMGT project. KVMGT is the implementation of Intel GVT-g technology, a full GPU virtualization solution. Under Intel GVT-g, a virtual GPU instance is maintained for each VM, with part of the performance-critical resources directly assigned. The capability of running a native graphics driver inside a VM, without hypervisor intervention on performance-critical paths, achieves a good balance of performance, features, and sharing capability. KVMGT is still in the early stages:
- Basic functions of full GPU virtualization work; guests can see a full-featured vGPU.
- We ran several 3D workloads such as Lightsmark, Nexuiz, UrbanTerror and Warsow.
- Only Linux guests are supported so far, and PPGTT must be disabled in the guest through a kernel parameter (see README.kvmgt in QEMU).
- This drop also includes some Xen-specific changes, which will be cleaned up later.
Our end goal is to upstream both XenGT and KVMGT, which share ~90% of the logic for vGPU devi...