[{"content":"These articles include guides and reference materials.\n","date":"2019-12-23","externalUrl":null,"permalink":"/articles/","section":"Articles","summary":"","title":"Articles","type":"articles"},{"content":"","date":"2019-12-23","externalUrl":null,"permalink":"/tags/fedora/","section":"Tags","summary":"","title":"Fedora","type":"tags"},{"content":" Last updated for Fedora 31.\nThis document covers some of the patterns I follow when installing Fedora systems. They may not be appropriate for all situations, and there are many considerations that are not covered here. After installing a system, I usually connect it to a configuration management system that implements a more secure and robust configuration.\nVirtual Machines on QEMU/KVM If using virt-manager, hit the checkbox to customize before installation.\nUse the Q35 chipset Use UEFI firmware -- Secure Boot is recommended but optional Add a VirtIO SCSI controller, and attach disks to the SCSI bus (as opposed to VirtIO) Network devices should be VirtIO For the display, Spice is generally more responsive, but VNC is more compatible (and directly usable within Cockpit) Video device should be QXL When storing disk images as files:\nUse the qcow2 storage format Cache mode: none IO mode: native Discard mode: unmap Detect zeroes: unmap Firmware Ensure that UEFI (as opposed to legacy BIOS) is the configured boot method, where supported.\nIf legacy BIOS is the only option available, then the storage configuration must be somewhat different. A few extra precautions in the storage setup will make it possible to migrate the installed system to UEFI later.\nInstallation These instructions assume that a Fedora Server installation image is being used.\nNetwork Configure network settings to allow for Internet access following installation.\nAccounts The root account should be disabled. This is now the default. A separate administrator account should be created.\nPackages The default package set can be used if the system will be connected to a configuration management system.\nStorage The basic objective is to use LVM for as much of the storage layout as possible. The EFI System Partition (ESP) cannot be placed on LVM. The boot partition may be able to live on LVM, but then the volume group cannot be encrypted, and this would have a negative impact on survivability and troubleshooting.\nSingle Disk For an installation on a single disk:\nChoose the \"Custom\" storage configuration option. Click on the \"Click here to create them automatically\" option. Select the \"/\" partition, then hit \"Modify...\" under the Volume Group configuration. Enter a reasonable volume group name. I often use \"servername_vg00\" when the server name is likely to be stable, or just \"vg00\" otherwise. Change \"Size policy\" to \"As large as possible\" so that the volume group will fill out the disk. Hit \"Save\". See the \"Additional Volumes\" section, below.\nMake any other needed customizations.\nSoftware RAID Bugs Ahead Over the years, I have run into a variety of bugs in Fedora's installer when setting up customized storage layouts. In some cases, I have had to manually create all of the volumes before starting the installer to get the desired layout. In other cases, Fedora has failed to correctly set up the boot environment, requiring additional steps after installation to get the system booting properly. 
In most cases, though, the bugs simply cause the installer to crash.\nFor an installation on multiple disks in a software RAID configuration:\nI previously recommended layering LVM on top of mdraid devices. This is no longer the case. LVM RAID is now up to the task and offers a more flexible option.\nRed Hat's documentation for RAID logical volumes is here:\nhttps://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_and_managing_logical_volumes/assembly_configure-mange-raid-configuring-and-managing-logical-volumes Choose the \"Advanced Custom (Blivet-GUI)\" storage configuration option.\nOn the first disk in the array, create a new partition:\n1 GiB partition Filesystem: EFI System Partition Label: ESP Mountpoint: /boot/efi For each subsequent disk in the array, create a new partition, replacing \"X\" with the number of the disk in the sequence (sdb = 1, sdc = 2, ...):\n1 GiB partition Filesystem: EFI System Partition Label: ESPX Mountpoint: /boot/efiX These additional ESPs are placeholders. The contents of the first ESP will need to be synchronized to them so that they can serve as backups if the first disk is lost. Unfortunately, there is no support for placing ESPs on any form of software RAID.\nCreate a software RAID device for /boot:\nSelect the free space on one of the disks and click the \"+\" button. Select the \"Software RAID\" device type. Check the box next to every disk in the array. Select RAID level \"raid1\". Size the partition to 1 GiB. Filesystem: xfs Label: boot Name: boot Mountpoint: /boot Create the LVM volume group:\nSelect the free space on one of the disks and click the \"+\" button. Select the \"LVM2 Volume Group\" device type. Check the box next to every disk in the array. Give the volume group a name, such as \"servername_vg00\" or \"vg00\". If desired and appropriate, check the \"Encrypt\" option. Hit \"OK\". Select the new volume group in the left pane, then create the logical volumes in it. Do not allow the total size of the volumes to exceed the capacity of one disk. Be conservative in sizing, as you can enlarge the volumes later if needed.\nFor \"/\":\nFilesystem: xfs Label: sysroot Name: sysroot Mountpoint: / For swap:\nFilesystem: swap Label: swap Name: swap There are many opinions on what the right amount of swap space is. In reality, many factors come into play. Red Hat's guidance is here:\nhttps://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/managing_storage_devices/getting-started-with-swap_managing-storage-devices See the \"Additional Volumes\" section, below.\nMake any other needed customizations.\nRAID: Further Setup Required At this point, the volumes are not protected by RAID. They are linear volumes. RAID will be configured after installation, as sketched below.\nWhen saving this configuration, the installer will issue a warning as a result of the ESP (the \"stage1 device\") not being on an array. Ignore this warning for now -- synchronizing the ESP to the other disks will have to be handled differently.\n
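Once the installed system boots, the linear volumes can be converted to RAID1 in place. A minimal sketch, assuming the volume group and logical volume names used above (verify the actual names with lvs first, and deactivate a swap volume with swapoff before converting it):\nlvconvert --type raid1 -m 1 vg00/sysroot\nlvconvert --type raid1 -m 1 vg00/swap\nThe placeholder ESPs also need to be refreshed whenever the bootloader changes; one simple, illustrative approach is an rsync from the primary ESP:\nrsync -a --delete /boot/efi/ /boot/efi1/\n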
Additional Volumes In many cases, additional volumes will be appropriate. The following paths are good candidates for separate volumes, and this may even be required by some policies:\n/home /tmp /var /var/log /var/log/audit Depending upon how the system will be utilized, some other paths to consider placing on separate volumes may include:\n/opt /var/lib/libvirt Post-Install If the system does not support UEFI and is configured for legacy BIOS booting, run \"grub2-install\" against each system disk.\ngrub2-install /dev/sdb Modify the /etc/hosts file to reflect the correct hostname and domain.\nModify the /etc/aliases file to provide an email alias for the root user.\nConnect the system to the correct configuration management solution.\nVirtual Machine Hosts The following bash snippet can be used to set up a bridge. Replace \"enp8s0\" with the correct physical interface. If you need a static IP configured on the interface, then uncomment and update the corresponding lines. This script will temporarily disrupt connections on the interface, so be careful if running it remotely.\ndnf -y install bridge-utils\nexport MAIN_CONN=enp8s0\nbash -x \u0026lt;\u0026lt;EOS\nnmcli c delete \"$MAIN_CONN\"\nnmcli c delete \"Wired connection 1\"\nnmcli c add type bridge ifname br0 autoconnect yes con-name br0 stp off\n#nmcli c modify br0 ipv4.addresses 192.168.1.100/24 ipv4.method manual\n#nmcli c modify br0 ipv4.gateway 192.168.1.1\n#nmcli c modify br0 ipv4.dns 192.168.1.1\nnmcli c add type bridge-slave autoconnect yes con-name \"$MAIN_CONN\" ifname \"$MAIN_CONN\" master br0\nEOS\nnmcli nmcli supports shortened versions of many of its parameters, such as \"c\" for \"connection\" above and \"mod\" for \"modify\" below. See the \"NOTES\" section of man nmcli for more.\nIf you need to put the new bridge interface in a different firewall zone (\"work\", in this example):\nnmcli c mod br0 connection.zone work ","date":"2019-12-23","externalUrl":null,"permalink":"/articles/fedora-setup/","section":"Articles","summary":"","title":"Fedora Setup","type":"articles"},{"content":"","date":"2019-12-23","externalUrl":null,"permalink":"/tags/","section":"Tags","summary":"","title":"Tags","type":"tags"},{"content":" I appreciate you visiting my little corner of the Internet. I am an Information Technology and Information Security expert in San Antonio, Texas. I currently specialize in Product and Engineering Management, DevOps, and cybersecurity research. Please understand that I rarely post here or anywhere else publicly as an individual. ","date":"2019-12-23","externalUrl":null,"permalink":"/","section":"Welcome!","summary":"","title":"Welcome!","type":"page"},{"content":"","date":"2019-09-19","externalUrl":null,"permalink":"/tags/powershell/","section":"Tags","summary":"","title":"PowerShell","type":"tags"},{"content":"If you are building software on Windows, you may need to script execution of the build. PowerShell is natively available on modern Windows systems and provides a powerful and flexible scripting solution.\nThis script snippet can be used to locate a Visual Studio 2017 installation with MSBuild. It will take care of loading the Visual Studio environment variables, after which you can easily invoke MSBuild or another tool that will use MSBuild (like CMake).\nYou may need to add additional \"-requires\" arguments to restrict the instances of Visual Studio to those that have the required components installed. 
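A minimal sketch of such a snippet, built around the vswhere.exe utility that ships with the Visual Studio Installer (the paths and arguments here are illustrative, not the original script):\n# Find a VS2017 instance that includes MSBuild.\n$vswhere = \"${env:ProgramFiles(x86)}\\Microsoft Visual Studio\\Installer\\vswhere.exe\"\n$vsPath = \u0026amp; $vswhere -latest -version '[15.0,16.0)' -requires Microsoft.Component.MSBuild -property installationPath\n# Import the Visual Studio environment variables into this session.\ncmd /c \"`\"$vsPath\\Common7\\Tools\\VsDevCmd.bat`\" \u0026amp;\u0026amp; set\" | ForEach-Object { if ($_ -match '^([^=]+)=(.*)$') { Set-Item -Path \"Env:$($matches[1])\" -Value $matches[2] } }\n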
You can update the \"-version\" argument if you need to locate a different version of Visual Studio.\nAs a bonus, this snippet can be used to find CMake and add it to the PATH environment variable:\n","date":"2019-09-19","externalUrl":null,"permalink":"/articles/using-msbuild-with-powershell/","section":"Articles","summary":"","title":"Using MSBuild with PowerShell","type":"articles"},{"content":"","date":"2019-09-19","externalUrl":null,"permalink":"/tags/windows/","section":"Tags","summary":"","title":"Windows","type":"tags"},{"content":"Purpose When setting up virtual machines on a single host that need to be interconnected, it can be extremely useful to have your own virtual network where the machines can have static IP addresses that won't change, even if the host machine is connected to different networks. To accomplish this on Hyper-V, you can create a new virtual switch that uses NAT.\nSteps Launch a new PowerShell session as an administrator.\nCreate the new virtual switch for an \u0026quot;Internal\u0026quot; network:\nNew-VMSwitch -SwitchName \"NATSwitch\" -SwitchType Internal Assign an IP address to the new switch device:\nNew-NetIPAddress -IPAddress 192.168.122.1 -PrefixLength 24 -InterfaceAlias \"vEthernet (NATSwitch)\" Set up NAT:\nNew-NetNAT -Name \"NATNetwork\" -InternalIPInterfaceAddressPrefix 192.168.122.0/24 Now, you can attach your virtual machines to the new switch and configure them with static IP addresses. If you used the commands above verbatim, then you can pick any addresses from 192.168.122.2 through 192.168.122.254.\nNo DHCP or DNS This virtual NAT network does not have DHCP or DNS services. Unless you set these up separately, you will need to configure static IP addresses and DNS servers on virtual machines attached to this virtual network.\n","date":"2019-09-18","externalUrl":null,"permalink":"/articles/hyper-v-virtual-nat-switch-setup/","section":"Articles","summary":"","title":"Hyper-V Virtual NAT Switch Setup","type":"articles"},{"content":"","date":"2019-09-18","externalUrl":null,"permalink":"/tags/networking/","section":"Tags","summary":"","title":"Networking","type":"tags"},{"content":"","date":"2019-09-18","externalUrl":null,"permalink":"/tags/virtualization/","section":"Tags","summary":"","title":"Virtualization","type":"tags"},{"content":" I've given up on the idea that I'll ever actually post new entries. ","date":"2019-01-23","externalUrl":null,"permalink":"/blog/","section":"Blog","summary":"","title":"Blog","type":"blog"},{"content":" I've finally completed a migration I felt was long overdue. I've been meaning to replace my Drupal-based website with something more modern for a while now. I wanted to replace it with a site built around serverless concepts, and I have finally done so. This new version of my website should be faster, more reliable, easier for me to maintain, and cheaper to host (not that the old one was expensive). I welcome feedback on the look and feel, especially if you see any problems. I hope to follow this shortly with some more substantial new posts. Now, I just need to try posting new stuff a bit more often. Six years is too long to go between posts. ","date":"2019-01-23","externalUrl":null,"permalink":"/blog/new-website/","section":"Blog","summary":"","title":"New Website","type":"blog"},{"content":"I recently received a report that attachments sent to Gmail from some servers were being corrupted. At first, I assumed that the reporter was mistaken, or that perhaps the problem was with the sender's mail client or server. 
One of my colleagues had already conducted some tests of his own and found that PDFs and TIFFs he tested with were indeed being corrupted. I had to investigate. Some quick tests proved that the reporter and my colleague were correct. Below is detailed information about the tests I conducted and my findings.\nThe Tests The Servers For my tests, there are three groups of servers involved: my personal mail server (we'll call this the \"PWB server\"), my employer's mail servers (the \"LT servers\") and Google's mail servers (the \"Gmail servers\"). The PWB server's MTA is Postfix. The LT servers include Postfix relays and Kerio Connect mail servers. Mail sent out from the LT servers is first handled by Kerio Connect, then relayed to the outside world by Postfix.\nThe Attachment I decided to limit my test to a single attachment - a TIFF file I picked out of convenience. This file is named eyes_color.tif and it is 196926 bytes in size.\nThe Emails I conducted several tests, but limited this analysis to a representative batch:\nTest 1 - An email sent from LT to PWB. The attachment arrived intact. The result, as saved from PWB, is in the file test-good-lt_2_pwb.mbox. Test 2 - An email sent from PWB to Gmail. The attachment arrived intact. The result, as saved from Gmail's web interface, is in the file test-good-pwb_2_gmail.mbox. Test 3 - An email sent from LT to Gmail. The attachment was corrupted. The result, as saved from Gmail's web interface, is in the file test-bad-lt_2_gmail-1.mbox. Test 4 - Another email sent from LT to Gmail. The attachment was corrupted, but not in the same way as Test 3. The result, as saved from Gmail's web interface, is in the file test-bad-lt_2_gmail-2.mbox. The Results From all four tests, I extracted the base64-encoded attachment. The results from Test 1 and Test 2 matched, and decoding those gave back the original TIFF. The SHA1 hashes verified this. This correct base64 content is in good.base64. Both Test 3 and Test 4 included corrupted bytes - just a few each. The extracted base64 content was the correct size, but each had a few bytes replaced with non-ASCII characters. The corruption was different between the two and seemingly random. Test 3's extracted base64 content is in bad-1.base64, while Test 4's is in bad-2.base64. Running diffs between Test 3's attachment and the correct base64 content and between Test 4's attachment and the correct base64 content yielded the patch files bad-1_v_good.patch and bad-2_v_good.patch, respectively. When viewed in Gmail's web interface, the attachments from Test 3 and Test 4 fail to show previews and, when downloaded, are not viewable in an image viewer. The attachments downloaded from the web interface do not match the original file sent in those emails, verified by SHA1 hashes.\nThe Findings These tests are representative of all of the tests I conducted. Gmail seems to corrupt attachments sent from some, but not all, servers. I do not know why, and I do not see a pattern in how the attachments are corrupted. When the same sending servers deliver mail to other servers, the attachments arrive in perfect condition. The corruption seems to be conditional upon the sending server, as I get consistent results with repeated tests from any given account. I have only used a few accounts on each server to conduct my tests, so it may be conditional upon the sending mail account, but this seems less probable. In each corrupted attachment, a different handful of seemingly random bytes have been replaced with non-ASCII characters. I know that this corruption affects multiple file types, but I do not know if all file types are affected.
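For reference, the extraction checks amounted to something like the following; this is a sketch of the session, using the file names above, not a verbatim transcript:\nsha1sum eyes_color.tif\nbase64 -d good.base64 \u0026gt; decoded.tif\nsha1sum decoded.tif\ndiff -u good.base64 bad-1.base64 \u0026gt; bad-1_v_good.patch\ndiff -u good.base64 bad-2.base64 \u0026gt; bad-2_v_good.patch\n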
\nUpdate 1 With assistance from folks at Google, I have identified a probable source of the corruption in the network path between the affected sending servers and the Gmail servers. I do not yet know why other receiving servers are unaffected, but it may be a difference in error detection and correction behavior (like TCP checksum behavior) or a performance difference that affects the chances of corruption. If the affected network provider gets their problem fixed, I will conduct further testing.\nComments from this post were discarded during a website migration.\nReferenced files have been removed.\n","date":"2013-01-09","externalUrl":null,"permalink":"/blog/gmail-corrupting-attachments/","section":"Blog","summary":"","title":"Gmail Corrupting Attachments?","type":"blog"},{"content":" There are many different ways that organizations can manage customer lists and deliver email to all of their customers at once. Some mailers will generate a unique email to each customer, possibly replacing fields in a form letter, while others will basically use the \"BCC\" field to send one email to many recipients. An important characteristic of these methods is that the recipients will not be able to see each other's email addresses. Today, Cardstore.com sent out an email to customers without using either of the above methods. This email contained thousands of Cardstore.com's customer email addresses in the email's \"To:\" field, meaning that every recipient of the email could see every other email address that the message was sent to. Database breaches have become extremely common. LinkedIn and Last.fm are both recent examples of popular websites to suffer database breaches that exposed customer details. These breaches have been the result of hackers, but Cardstore.com cut out the middle-man and just sent out their customer list themselves. This kind of breach is unacceptable and should never have been allowed to happen. Great care should always be taken in the handling of customer information, and checks should be in place to make sure that errors like this are avoided. Comments from this post were discarded during a website migration.\n","date":"2012-06-26","externalUrl":null,"permalink":"/blog/how-not-email-your-customers/","section":"Blog","summary":"","title":"How Not to Email Your Customers","type":"blog"},{"content":"","date":"2012-06-26","externalUrl":null,"permalink":"/tags/security/","section":"Tags","summary":"","title":"Security","type":"tags"},{"content":" Something amazing happened on Wednesday. It's doubtful you missed it, but you might not have recognized just how amazing it really was. The unprecedented Internet blackout showed us something incredible: Major websites demonstrated the ability to quickly sway political dialogue. It has been easy to see for many years that big media can influence political dialogue. Media slant, sometimes the result of unintentional bias and sometimes the result of direct influential efforts, has had an impact on many political discussions and legislative proceedings over the years. The impact of lobbyists employed by big media companies is even easier to identify. On Wednesday, something happened that has never happened before: a coordinated effort by several of the most heavily-trafficked websites changed the course of political dialogue. 
Following Wednesday's blackout protests, SOPA has been pulled from consideration and PIPA is effectively dead in the water. Politicians were quick to react when overwhelmed by feedback from their constituents. This is a strong message that shows just how much power new media now wields. It took a unique circumstance to bring these major Internet players together behind a common cause, but now we know that it can happen. As much as I would like to see the general public exercise greater everyday vigilance toward lawmaking, I do not think we will see much change there as a result of this event. What we might see is a stronger and more unified political voice from new media companies interested in protecting the greatest tool for information exchange ever created. I certainly hope that this will at least remind lawmakers that they should seek more input from major Internet organizations when crafting laws affecting the Internet. I do not doubt this event will impact legislative discourse in the future. It may prove difficult to trace the effects, but the world did change on the 18th of January, 2012, and I suspect for the better. Comments from this post were discarded during a website migration.\n","date":"2012-01-21","externalUrl":null,"permalink":"/blog/something-amazing-just-happened/","section":"Blog","summary":"","title":"Something Amazing Just Happened","type":"blog"},{"content":"","date":"2012-01-18","externalUrl":null,"permalink":"/tags/freedom/","section":"Tags","summary":"","title":"Freedom","type":"tags"},{"content":" Today, many websites are participating in a blackout inspired by the Stop Online Piracy Act (\"SOPA\", H.R.3261) and PROTECT IP Act (\"PIPA\", S.968) bills currently being considered by Congress. These two bills, which are very similar to one another, are intended to extend copyright protections and enable better defenses against copyright infringement by international websites. These bills have caused significant uproar among Internet companies and technologists, as they raise a number of concerns with regard to Internet freedom, censorship and security. Major Internet players that have announced their opposition to these bills include AOL, eBay, Facebook, Google, imgur, LinkedIn, Mozilla, Reddit, Twitter, Wikipedia, Yahoo! and Zynga. Technology experts have also expressed concerns (PDF) over potential problems with the implementation of the bills' measures. SOPA and PIPA are intended to give copyright holders the tools they need to bring down and block access to websites hosting copyright-infringing materials. While it is easy to see how blocking copyright-infringing websites would be desirable, concerns include that these tools may be too broad, that they may be abused, and that the burden on websites to avoid infringing or linking to infringers could be too great. Today's blackout is intended to raise awareness about these two bills. Visitors to Google will see a large black box over the Google logo, accompanied by a link to information placed on the homepage. Wikipedia has blocked access to most of its English-language pages. Reddit, imgur and others have completely gone dark, replacing their homepages with messages about these bills. Today's blackout is unprecedented in the history of the Internet. Unanswered Questions Could Google be expected to find and remove all of the infringing websites among the over one trillion URLs it has indexed, and to continue monitoring each and every one of them for new infringement? 
Many infringers are going to try hard to avoid scrutiny, and there is a very large grey area where it might be hard to decide what constitutes infringement. What happens when they accidentally identify false positives? When they miss some infringement? These are the concerns that search engines face. What about sites like Facebook, Twitter and Wikipedia, which depend upon user-submitted content? They cannot possibly filter every single URL that passes through, and they could be deluged with takedown demands if they do not. The free flow of information and ideas that normally takes place on social networks would be stifled, and communities like Wikipedia that are driven by user contributions could be overburdened by the administrative demands. However legitimate websites might end up filtering their content, the result would be a form of censorship never before seen on the Internet. In spite of whatever good intentions might be behind SOPA and PIPA, the burden they place on legitimate websites and the threat they present to the freedom of the Internet cannot be ignored. The Danger to Small Businesses and Individuals Small businesses and individuals running websites would be most vulnerable to unintended consequences of SOPA and PIPA. Even small websites, blogs and forums could be forced to censor content or face being shut down. Funding for small websites that host user-generated content, whether in the form of comments and discussions, videos, articles or anything else, would be harder to come by when those websites could be held liable for infringing content. Operators of such websites would also face an increased risk of lawsuits, justified or not, which could prove too expensive to fight. Other tools used by small businesses could also be endangered. Mailing lists, code repositories, VPNs and more could pose liability concerns. The Need Copyright infringement is an expensive problem for American businesses. Content producers and publishers lose a great deal of money to piracy every year. Estimates on the actual losses vary greatly, but \"many billions\" is a good guess. Many websites that host the infringing material are outside of the United States, often in places that do not offer strong protection for intellectual property. This presents a challenge for American businesses, as it can be impossible to sue the infringers or their hosting providers, and the Internet as it is does not offer a way to shut down these sites. SOPA and PIPA are intended to answer this need by giving copyright holders a way to cut infringing sites off from the traffic that sustains them. Most of the opponents of SOPA and PIPA recognize that defending intellectual property is important - they often depend upon it themselves. The objection is over how these bills propose to cut infringing sites off. Google and other opponents of these bills do have an alternative in mind: the OPEN Act. The OPEN Act aims to cut off infringing sites by stopping the flow of money to them. Status SOPA has been temporarily halted in the House, with discussion expected to resume next month. PIPA is expected to go before the Senate on the 24th of January. 
More Information List of blackout participants Wikipedia article on SOPA Wikipedia article on PIPA List of supporters and opponents in Congress Senate contact list House contact list Google's petition The OPEN Act Electronic Frontier Foundation articles:\nHow PIPA and SOPA Violate White House Principles Supporting Free Speech and Innovation SOPA: Hollywood Finally Gets A Chance to Break the Internet January 18: Internet-Wide Protests Against the Blacklist Legislation Protest Letters:\nFrom human rights groups From law professors From Internet companies Comments from this post were discarded during a website migration.\n","date":"2012-01-18","externalUrl":null,"permalink":"/blog/sopa-pipa-and-internet-blackouts/","section":"Blog","summary":"","title":"On SOPA, PIPA and the Internet Blackouts","type":"blog"},{"content":"","date":"2011-06-07","externalUrl":null,"permalink":"/tags/cloud/","section":"Tags","summary":"","title":"Cloud","type":"tags"},{"content":" I recently found myself needing a more bleeding-edge cloud server than the Fedora 14 servers I have been running on Rackspace Cloud. Rackspace is not yet offering a Fedora 15 image for new servers, so I needed to start with a Fedora 14 system and upgrade it. I also needed the kernel to be more current than the 2.6.34.1 kernel Rackspace currently uses with Fedora images, and I am not sure the upgraded userspace would work with that kernel and init image pair anyway. This meant I needed to use PV-GRUB to use the stock Fedora kernel. What follows is a description of the process I used to get Fedora 15 running with the stock Fedora kernel on Rackspace Cloud. Rackspace Cloud, like Amazon AWS and many other VPS providers, uses the Xen hypervisor. Under a typical configuration, custom kernel and init images are used for each VPS, rather than images stored within the VPS. The kernel used with Rackspace's Fedora 14 image is a custom kernel built on Ubuntu. Thankfully, Rackspace does allow operators to use PV-GRUB to load other kernels. If you are considering trying this process on a production system, seek therapy. Start with a new server and migrate your services once you're done. If this doesn't go smoothly, you could be left with a server that will not boot and no recourse but to restore a backup. Quick Overview The general process goes something like this:\nSet up a new server with Rackspace's Fedora 14 image Configure the system to run as a Xen domU loaded by PV-GRUB Install the Fedora kernel Contact Rackspace to enable PV-GRUB Upgrade the system to Fedora 15 Before the Kernel The first step is to set up a server loaded from Rackspace's Fedora 14 image to run as a Xen domU loaded by PV-GRUB.\ncat \u0026gt;\u0026gt; /etc/modprobe.d/domu.conf \u0026lt;\u0026lt; EOF\nalias eth0 xennet\nalias eth1 xennet\nalias scsi_hostadapter xenblk\nEOF\nsed -i 's/sda/xvda/g' /etc/fstab\necho \"hvc0\" \u0026gt;\u0026gt; /etc/securetty\nInstall the Kernel The next step is to install the Fedora stock kernel.\nyum install kernel\nTake note of the kernel version. You'll need it in the next step. You can get the exact version string you need with this command:\nrpm -q kernel --qf '%{version}-%{release}.%{arch}\\n'\nTime to create the configuration file required by PV-GRUB. We'll create this as grub.conf and create a symlink at menu.lst, which is how the grub configuration is normally created on Fedora. 
In the following, replace \"$KERNELVERSION\" with the correct kernel version string.\nmkdir -p /boot/grub\ncat \u0026gt;\u0026gt; /boot/grub/grub.conf \u0026lt;\u0026lt; EOF\n#boot=/boot/grub/stage1\ndefault=0\ntimeout=1\ntitle Fedora ($KERNELVERSION)\nroot (hd0)\nkernel /boot/vmlinuz-$KERNELVERSION ro console=hvc0 root=/dev/xvda1 SYSFONT=latarcyrheb-sun16 LANG=en_US.UTF-8 KEYTABLE=us\ninitrd /boot/initramfs-$KERNELVERSION.img\nEOF\nln -s ./grub.conf /boot/grub/menu.lst\nSince we're going to use the stock Fedora kernel, we can set things up for yum to handle kernel updates and have the grub.conf file updated automatically. The commented-out boot directive in our grub.conf is part of this configuration. To finish this setup, we'll need to install grub and grubby and create a few more configuration items.\nyum install grub grubby\ncp /usr/share/grub/x86_64-redhat/stage1 /boot/grub/stage1\nln -s ../boot/grub/grub.conf /etc/grub.conf\ncat \u0026gt;\u0026gt; /etc/sysconfig/grub \u0026lt;\u0026lt; EOF\nboot=/boot/grub/stage1\nforcelba=0\nEOF\ncat \u0026gt;\u0026gt; /etc/sysconfig/kernel \u0026lt;\u0026lt; EOF\n# UPDATEDEFAULT specifies if new-kernel-pkg should make\n# new kernels the default\nUPDATEDEFAULT=yes\n# DEFAULTKERNEL specifies the default kernel package type\nDEFAULTKERNEL=kernel\nEOF\nTest that grubby detects grub.\ngrubby --bootloader-probe\nEnable PV-GRUB At this point, it's time to contact Rackspace Support to enable PV-GRUB. They will enable PV-GRUB and reboot your server. With a little luck, your server will start up with your stock kernel. Reconnect to your server and verify.\nuname -a\nAt this point, you should consider creating an on-demand backup.\nUpgrade to Fedora 15 Now that you are running on the stock Fedora 14 kernel, you are ready to upgrade your server to Fedora 15 (and its kernel). The general Fedora guidance is here: https://fedoraproject.org/wiki/Upgrading_Fedora_using_yum Because we're dealing with a cloud server with PV-GRUB, there will be a few differences. You won't be switching to a text console or changing runlevels. You will not want to install the other Base packages. You cannot write a new MBR with grub-install. Really, our process is much simpler. To be safe, we'll use screen to launch our upgrade to mitigate the risk of a disconnect. (I recommend always using screen when using yum or doing anything else critical over SSH.)\nyum install screen\nscreen -h 10000 -S yum\nThis screen command will increase the default history size and name the session for easy access. If you become disconnected during the upgrade, connect to the server again and run the following command to attach to your screen session:\nscreen -rx yum\nTime to perform the upgrade.\nrpm --import https://fedoraproject.org/static/069C8460.txt\nyum update yum\nyum clean all\nyum --releasever=15 --disableplugin=presto distro-sync\ncd /etc/rc.d/init.d; for f in *; do /sbin/chkconfig $f resetpriorities; done\nln -sf /lib/systemd/system/multi-user.target /etc/systemd/system/default.target\nMake sure that the new Fedora 15 kernel is configured and set as default in /boot/grub/grub.conf.\nProfit Reboot. A little more luck and your server will come back up running the new Fedora 15 kernel and userspace. You may want to create an on-demand backup at this point. You can reuse this image to create more Fedora 15 servers, but do not forget to contact Rackspace Support to enable PV-GRUB for each instance. If you created an on-demand backup after getting Fedora 14 running with PV-GRUB, you can now delete that image. Have I missed anything? 
Let me know.\nComments from this post were discarded during a website migration.\n","date":"2011-06-07","externalUrl":null,"permalink":"/blog/fedora-15-stock-kernel-rackspace-cloud/","section":"Blog","summary":"","title":"Fedora 15 with Stock Kernel on Rackspace Cloud","type":"blog"},{"content":" I have really enjoyed my role as an administrator for Fedora's participation in Google's Summer of Code. It has been a very rewarding experience and I have been both grateful for and proud of all of the mentors and students that have participated. Karsten Wade has been extremely helpful as a fellow administrator and has really helped Fedora's participation succeed. Last week, he wrote that he would like to pass the reins. His work on Fedora's own student programs is extremely important, and I am pleased to see that he is focusing on that work. As much as I enjoy working as an administrator for Fedora's GSoC participation, I also have too much on my plate to give the time and attention to the program that it really needs and deserves. Therefore, I too must step aside and allow others to take a turn at the wheel. Both the administrator and mentor roles in the Summer of Code are wonderful opportunities. Karsten lists the benefits and caveats of the administrator role in his post, and I think he sums them up perfectly, so I will not repeat them. There are some important points I would like to add about being an administrator with Fedora for the Summer of Code: All projects chiefly sponsored by Red Hat are required to participate together, so administrators can come from the communities of Fedora, JBoss.org or other Red Hat-sponsored projects and must be prepared to coordinate efforts across communities. This is not a full-time job. If you can spare a few hours a week and respond in a timely manner to communications, you can handle the workload. There are a few deadlines to be met, and you might have to make an IRC meeting or two. Karsten and I will both be \"around\" to help by answering questions and offering guidance. I do not have the time to handle all of the administrative duties, but I will be happy to help as much as I can. One of the biggest challenges in the past has been recruitment - gathering administrators, mentors, ideas and students. In some ways, this has gotten easier, but know that this is an important part of the job. Fedora's participation in GSoC 2011 is not guaranteed. The new administrators will need to ready the prerequisites and prepare the application when the application window opens. If you might like to serve as an administrator for Fedora's participation in Google's Summer of Code, please contact me right away. Comments from this post were discarded during a website migration.\n","date":"2010-12-28","externalUrl":null,"permalink":"/blog/seeking-new-admins-fedora-and-gsoc/","section":"Blog","summary":"","title":"Seeking New Admins for Fedora and GSoC","type":"blog"},{"content":"","date":"2010-12-28","externalUrl":null,"permalink":"/tags/summer-coding/","section":"Tags","summary":"","title":"Summer Coding","type":"tags"},{"content":" The efforts of Fedora's Summer Coding SIG and our umbrella \"Red Hat Summer\" effort got a small setback today. After 5 years of successful participation in the Google Summer of Code program, we were not accepted into this year's Summer of Code. While this was unexpected and a little disappointing, it does not stop our summer coding work. 
In 2008, Google required the Fedora Project and JBoss.org teams to apply as a single organization to the Summer of Code, since both are Red Hat-sponsored open source projects. While that created some hurdles for us, since we are two very distinct communities with almost no overlap, we met the challenge head-on and created our \"Red Hat Summer\" group to coordinate our efforts and bring the teams together for our common goal. \"Give a man a fish, and you have fed him for a day. Teach a man to fish, and you have fed him for a lifetime.\" As soon as Fedora and JBoss.org began working cooperatively, we also began to focus on making our efforts and resources more generic, such that we could take \"Google Summer of Code\" and replace \"Google\" and \"Code\" with any sponsor and any deliverable of interest. We began slowly working on an architecture that would allow us to support other seasonal development for students without a strict dependency on Google's program. We have greatly appreciated what Google has done with the Summer of Code program, and we are disappointed that we will not be participating this year. I have enjoyed my role as a mentor since 2005 and organization administrator since 2006, and my summer just won't be the same, but we are going to take the opportunity to quickly advance our non-GSoC summer coding initiatives. We will still welcome students to work under our guidance this year, and we are hard at work to find sponsors to offer some kind of stipend to make it easier for students to participate. We are even looking at how Red Hat can itself be one of these sponsoring organizations, outside of the internship program Red Hat continues to run. Thank you, Google, for getting us started. We will take it from here. Comments from this post were discarded during a website migration.\n","date":"2010-03-18","externalUrl":null,"permalink":"/blog/so-long-and-thanks-all-fish/","section":"Blog","summary":"","title":"So Long, and Thanks for All the Fish","type":"blog"},{"content":"","date":"2010-02-11","externalUrl":null,"permalink":"/tags/malware/","section":"Tags","summary":"","title":"Malware","type":"tags"},{"content":"One of Microsoft's \"Patch Tuesday\" security fixes is triggering a widespread \"Blue Screen of Death\" problem. The cause is not the update itself, but an existing infection. So far, reports suggest that this problem affects Windows XP and Windows Vista. Once the update is applied and the system rebooted, Windows will bluescreen at boot. When booted to Safe Mode, the system will freeze. Removing the update from the Windows Recovery Console or using live media will get the system booting again, at least until the update is reapplied.\nI have found that the root cause is an infection of %System32%\\drivers\\atapi.sys, and that replacing this file with a clean version will get the system booting normally. 
This is not the first time that an infection hitting atapi.sys has caused updates to trigger bluescreens. If you are running Windows and have not yet applied this update, make sure you scan your computer thoroughly for infections before applying this update. If you are experiencing this problem, get your computer to a professional that can replace the infected atapi.sys and clean any other malware from your computer.\nReferences:\nhttp://isc.sans.org/diary.html?storyid=8209\nhttp://social.answers.microsoft.com/Forums/en-US/vistawu/thread/73cea559-ebbd-4274-96bc-e292b69f2fd1\nDetailed Repair Instructions Using the Windows XP Recovery Console 1. Boot from your Windows installation CD Insert your Windows installation CD and boot your computer. If your computer is not set to boot from CD first, you may need to reconfigure your BIOS or press a boot menu key (often F12, F8 or Esc). If you are unsure of how to do this, consult your favorite geek. As soon as the boot starts, you should see a message like \"Press any key to boot from CD...\" - press a key.\n2. Start the Recovery Console After the CD loads (it may take a minute), you will be presented with a few choices. One of these options is to start a recovery by pressing \"R\". Press \"R\" to launch the Recovery Console.\n* You may be asked to choose a Windows installation. If so, choose the damaged installation (probably \"1\").\n* You may be prompted for the Administrator password. If you do not have one, press \"Enter\".\n3. Identify your CD drive letter You should now be at the command prompt. Enter the following command:\nmap\nLook for the drive letter for your CD drive. It may look something like this:\nD:\\Device\\CdRom0\nIn this case, your CD drive is \"D:\".\n4. Replace ATAPI.SYS Enter the following, replacing \"D:\" with your CD drive:\ncd system32\\drivers\nren atapi.sys atapi.old\nexpand D:\\i386\\atapi.sy_\nYou should see the message \"1 file(s) expanded.\" - this indicates you have succeeded.\n5. Reboot and scan for malware Reboot your computer. With a little luck, your computer will now boot normally. Because this problem is caused by malware, you should immediately scan your computer with up-to-date antivirus software.\nUPDATE:\nAn atapi.sys infection may not be the only cause of this blue screen. While it does seem to be the most common cause, other infected drivers or drivers that make incorrect references to the updated kernel bits may also cause blue screens after this update is applied. Make sure you scan any computer with up-to-date antivirus software that can detect rootkits and check for updated drivers for your computer before applying this update.\nUPDATE 2:\nI have placed these instructions on my wiki. Any further changes will be posted there.\nComments from this post were discarded during a website migration.\n","date":"2010-02-11","externalUrl":null,"permalink":"/blog/microsoft-update-kb977165-triggering-widespread-bsod/","section":"Blog","summary":"","title":"Microsoft Update KB977165 triggering widespread BSOD","type":"blog"},{"content":" \"Ransomware\" is a type of malware that holds files or computer operations for ransom. In the most common scenario, ransomware will encrypt files on an infected computer and demand that the user pay for the decryption key. Ransomware presents an unusual threat in that simply removing it from the computer does not solve the problem. When files have been encrypted, removing the ransomware does not make them available again. The files must be decrypted. 
We have been extremely lucky so far. Most ransomware uses vulnerable encryption, like a simple XOR cipher, or a common key that need only be compromised once and then distributed to affected users. The distribution of ransomware has also fallen short of that of other threats like scareware. The number of people affected by ransomware has so far been small, and security researchers have been able to distribute unlocking tools capable of defeating the ransomware, but how long will we be so lucky? It may only be a matter of time until a more sophisticated, widespread ransomware assault hits the ill-prepared. When ransomware uses strong encryption and uses unique keys for each victim, security researchers may be unable to offer unlocking tools and a victim's only recourse would be to pay the ransom. When a widespread attack hits, the damage could be devastating, and the returns for the attackers would certainly provide inspiration and funding for further attacks. Some ransomware uses SMS short codes to take payments, which may allow attackers to hide the final billing amount or apply recurring charges and may allow panicked and unsuspecting minors to unknowingly make the payment without first alerting their parents. Introducing mobile providers into the mix may also affect the ability of the victim to recover the charges. Scareware distributors have already figured out successful models, their threats are already close to the behavior of ransomware, and they certainly have the resources to develop more advanced ransomware. Perhaps scareware can serve as a preview into what ransomware may do in the future. The protection is the same simple, long-standing advice: backups. If the important stuff is backed up, then a computer infected with ransomware can be cleaned and returned to service without concern for encrypted files. If you find yourself infected with ransomware and do not have backups, find a computer service company with security expertise that may be able to recover your locked data. If you have paid the ransom, notify your credit card company or mobile carrier and get your computer cleaned by a professional as quickly as possible. If you do not have a backup routine, now is the time to create one. You may be on borrowed time. Comments from this post were discarded during a website migration.\n","date":"2009-11-30","externalUrl":null,"permalink":"/blog/borrowed-time-threat-ransomware/","section":"Blog","summary":"","title":"On Borrowed Time: The Threat of Ransomware","type":"blog"},{"content":"The SANS Internet Storm Center recently featured a post about the increasingly stubborn fake anti-malware \"scareware\" that has been remarkably successful at infecting machines and convincing people to purchase the fake software. http://isc.sans.org/diary.html?storyid=7066 Among the comments on that post, others who have encountered this kind of malware questioned what protection measures might be effective. Of course, the common ideas of fingerprinting the malware by filename or checksum came up, as did potential ideas for hardening the operating system in small ways. Unfortunately, this malware has already proven that those measures are not enough. This malware changes too fast for fingerprinting, heuristics are unreliable, and improvements in privilege separation in Windows have been ineffective at blocking this malware. Removal of this malware can be a tedious and time-consuming task. 
I have used Fedora Live media or host systems to perform manual cleaning, then spent hours repairing the damage. If a system is infected, I recommend wiping and reloading the system (as I recommend with any infection on any operating system), but it is much easier to prevent infection in the first place. So, if traditional methods are ineffective, what do you do? An alternative approach that does work against this and most other malware is application whitelisting. In this post, I will run through a quick-start guide to configuring the application whitelisting that is already available in Windows XP and newer releases. There are also more powerful third-party options, and similar capabilities are available for other operating systems. Application whitelisting is not for everyone, but if you manage a large number of computers that need to be defended, it is definitely worth considering.\nApplication Whitelisting Application whitelisting allows an administrator to restrict what programs may run on a computer to a trusted list, instead of a normal configuration where all programs are allowed unless explicitly blocked. This is a highly effective way to block malware and unwanted programs from being installed or used on the system. The configuration outlined below is ideal for office setups where most users run as restricted users and only a few people may have administrative access. When set up on a domain, this can dramatically reduce the threat of malware, but it does take a lot of freedom of choice away from the users and increases the administrative burden. Running as an administrator bypasses the protection of this configuration. Windows XP and later have built-in support for application whitelisting. This support is not as robust as that provided by third-party application whitelisting products, but can still be used effectively. Prior to Windows 7, this feature is available as \"Software Restriction Policies\". In Windows 7 and later, it is available as \"AppLocker\".\nSoftware Restriction Policies Software Restriction Policies is a system policy feature that can be configured through local system policies or through Group Policy. To configure Software Restriction Policies as part of Group Policy:\nLaunch gpedit.msc (on servers, use the Group Policy management tools; to edit local policy instead, use secpol.msc) Select \"Computer Configuration \u0026gt; Windows Settings \u0026gt; Security Settings \u0026gt; Software Restriction Policies\" In the \"Actions\" menu, choose \"New Software Restriction Policies\" With \"Software Restriction Policies\" selected, double-click \"Enforcement\" in the right pane Enforce the policy for \"All software files\" and \"All users except local administrators\" and hit \"OK\" Double-click on \"Designated File Types\" Select the \"LNK\" extension and hit \"Delete\" - this allows the Start Menu and Desktop shortcuts to work normally Hit \"OK\" Under \"Additional Rules\", you can edit the whitelist with paths, hashes, certificates and Internet zones By default, the Windows installation and system folders will be unrestricted, as will \"Program Files\" Select \"Security Levels\" Right-click on \"Disallowed\" and select \"Set as default\", then click \"Yes\" when prompted to confirm the change With these settings, administrators will continue to be unrestricted, while regular users will only be able to run programs that have been installed under the Windows installation folder (typically \"C:\\WINDOWS\") or under \"Program Files\". 
They will be able to launch those programs using shortcuts. Administrators must handle program installation on behalf of the system's users.\nNetwork Shares and Mapped Drives When adding rules for network shares and mapped drives, you must use the UNC path in the rule. Using the mapped drive letter will not work.\nMore Information More information on Software Restriction Policies is available from Microsoft TechNet at: http://technet.microsoft.com/en-us/library/bb457006.aspx\nComments from this post were discarded during a website migration.\n","date":"2009-09-06","externalUrl":null,"permalink":"/blog/defending-windows-with-application-whitelisting/","section":"Blog","summary":"","title":"Defending Windows with Application Whitelisting","type":"blog"},{"content":"","date":"2009-07-19","externalUrl":null,"permalink":"/tags/microblogging/","section":"Tags","summary":"","title":"Microblogging","type":"tags"},{"content":" With the rapid growth of Twitter and other microblogging services has come the rise of numerous URL shortening services. Some, like TinyURL, existed long before Twitter, but they all share a common problem that has been exacerbated by the increasing use of microblogging. They are a perfect mask for spammers. With these URL shortening services, anyone, including a spammer, can push any URL into the service and get back a shortened version under the domain of that particular shortening service. When the new, shorter URL is posted on a microblogging service or anywhere else, viewers cannot easily determine where the link might take them. Since almost everyone on microblogging services is using the shortening services, it is impractical to avoid them or to blacklist the associated domains. There are ways to safely extract the end URL from the short version, but those methods are not currently readily available to the majority of users. Bit.ly, one of the shortening services, recently introduced warnings when they detected that a URL target might be malicious or intended for unsolicited use (spam). This is a start, but it is still retroactive. TinyURL offers, through a setting that users can enable (stored in a browser cookie), the option for users to see a preview of the target URL before being redirected. Bit.ly offers a Firefox extension for the same purpose. These options are better, but they still require that the end-user take some action for the extra protection. For administrators of sites like Twitter, there are very few options for screening shortened URLs. Because of the variety of shortening services available, and the ability for users to pick from any of them (and post any arbitrary URL), the only way to dereference the URLs in a widely-supported manner is to actually attempt a simple HTTP request to each posted URL and, if the response code is 30x, then look at the \"Location\" response header. Thus far, I have seen no evidence of a major microblogging service doing any filtering on shortened URLs, but I would not expect them to disclose their anti-spam measures if they did. Since malware and spam attacks are targeting microblogging services with increasing frequency, filtering and blacklisting of URLs may soon become a necessity. I can imagine two things that would go a long way toward protecting users from the increased threat and return the balance. 
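For what it is worth, that dereferencing check is trivial to sketch with curl; %{http_code} and %{redirect_url} are standard curl write-out variables, and the URL is a stand-in:\ncurl -s -o /dev/null -I -w '%{http_code} %{redirect_url}\\n' 'http://tinyurl.com/example'\nNothing I am proposing below requires anything more sophisticated than that.\n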
First, browsers should include support for dereferencing links without visiting the targets, actively notifying users of the target URL when they are being redirected, and have that feature enabled by default only where the target is on a different domain. Second, microblogging hosts should introduce filtering of shortened URLs by checking all links posted on their services for redirects and then filtering those redirection targets, and they should coordinate blocking efforts to increase the effectiveness of the filtering. Spammers are already taking advantage of shortened URLs, and the problem is only going to get worse unless we take action to destroy the advantage that shortening services currently give them. Update: The suggested browser feature from above has been proposed for Firefox at: https://bugzilla.mozilla.org/show_bug.cgi?id=453077\nComments from this post were discarded during a website migration.\n","date":"2009-07-19","externalUrl":null,"permalink":"/blog/url-shortener-design-flaw/","section":"Blog","summary":"","title":"URL Shortener Design Flaw","type":"blog"},{"content":"","date":"2009-06-23","externalUrl":null,"permalink":"/tags/android/","section":"Tags","summary":"","title":"Android","type":"tags"},{"content":" I was convinced long before purchasing my G1 from T-Mobile that it would be a worthwhile investment, and I have not been disappointed at all. In Amarillo, T-Mobile does not yet have 3G service, so data speeds leave a bit to be desired, but it still works. I learned earlier this evening that I may be taking a trip to San Antonio in the near future, unfortunately not on pleasant business. I know that I will get 3G service there, and I do intend to take my notebook, but I do not have a cellular card for my notebook, and it is an old model that does not have built-in wireless. Getting Internet to my laptop on the go has always been a hassle - but what if I could use my G1 to get Internet to my laptop? It did not take much searching to find Tetherbot, which provides some basic tunnelling for a computer connected via USB to an Android-powered device. Using Tetherbot and the Android SDK, I was able to establish a tunnel for Internet browsing within a couple of minutes. Now I just need to finish upgrading my notebook to Fedora 11 (a hassle of its own) and I will be ready to go! Comments from this post were discarded during a website migration.\n","date":"2009-06-23","externalUrl":null,"permalink":"/blog/android-powered-mobile-internet/","section":"Blog","summary":"","title":"Android-powered Mobile Internet","type":"blog"},{"content":" If you are familiar with security issues for Internet servers, you know what a Denial of Service (DoS) attack is, and that there is no absolute defense against DoS attacks. There are plenty of ways to mitigate the risks. With just a few mitigating tactics, the biggest threat that remains is usually from Distributed Denial of Service (DDoS) attacks, where it is a game of sheer numbers. Wednesday, a tool was released that changes that for servers running Apache, Squid or any one of several other HTTP servers and proxies. This tool is able to bring down these servers by forcing them to open a large number of processes and keep the connections open using only a minimal amount of bandwidth. This means that an attacker with a low-bandwidth connection may be able to bring down a server on a much higher-bandwidth connection. A distributed attack using this technique could be absolutely devastating, even to larger server farms. 
The attack is very simple, much like many other DoS attacks. Unfortunately, there are very few mitigation techniques known at this time, and like other DoS defenses, many of these techniques require balancing security and accessibility. Looking at this attack, I can imagine small variations that could potentially affect any HTTP server. I do not mean to cause alarm, but I strongly suspect we will see many more attacks like this one targeted at the major HTTP servers and able to bring down those servers with far fewer resources than attackers currently need to pull off a successful assault. Similar attacks may also be possible against other types of servers, such as mail servers or remote administration servers. If you are interested in testing against your own servers, you can find the tool at: http://ha.ckers.org/slowloris/ More discussion can be found at the SANS Internet Storm Center: http://isc.sans.org/diary.html?storyid=6601 http://isc.sans.org/diary.html?storyid=6613 Comments from this post were discarded during a website migration.\n","date":"2009-06-21","externalUrl":null,"permalink":"/blog/apache-dos-tool-first-new-wave/","section":"Blog","summary":"","title":"Apache DoS Tool - First of a New Wave?","type":"blog"},{"content":" A couple of days ago, I decided to consolidate my online identity around my real name. I had been using the same nickname on the Internet for over a decade, and had built a significant identity around it, but its original meaning was no longer relevant, and having both the nickname and my real name for online identities was reducing the impact of both. I have already changed \"nman64\" and \"n-man\" in many places to \"patrickwbarnes\". I have registered patrickwbarnes.com and redirected n-man.com to it. It will take some time to get everything changed over, and I intend to keep old contact information working for the foreseeable future. I will continue updating the links on this site and the information on the \"Contact\" page as the changes continue. With the change in my domain name, I decided there was no better time to replace my old, outdated website with something new. I have been thinking about what to replace it with for some time, and I settled on a WordPress blog. I have tried blogging several times before and have always let it fail. Lately, there have been several times that I have thought about writing something, but have not had a blog to write it on. Since I am consolidating my website into a WordPress blog, perhaps that will give me more incentive to maintain it this time. Only time will tell. Comments from this post were discarded during a website migration.\n","date":"2009-06-13","externalUrl":null,"permalink":"/blog/reboot/","section":"Blog","summary":"","title":"Reboot","type":"blog"},{"content":"Work I do product management and DevOps engineering for Def-Logix, Inc., a company that provides software research and development services.\nConsulting I work with a team of talented professionals to provide advanced levels of consulting and support for a wide array of technology solutions. My team and I support clients in the San Antonio and Amarillo areas in particular, but we also provide remote support to clients around the nation. Open Source I am a proud supporter of open source technology. Over the years, I have contributed to open source in a variety of ways, including code contributions, education, project administration and implementation. More of my professional background is available on my LinkedIn profile. 
XKCD: Devotion to Duty\n","externalUrl":null,"permalink":"/about/","section":"Welcome!","summary":"","title":"About","type":"page"},{"content":"","externalUrl":null,"permalink":"/authors/","section":"Authors","summary":"","title":"Authors","type":"authors"},{"content":"","externalUrl":null,"permalink":"/categories/","section":"Categories","summary":"","title":"Categories","type":"categories"},{"content":" Send a Message Signal: patrickwbarnes.83 Mastodon/Fediverse LinkedIn ","externalUrl":null,"permalink":"/contact/","section":"Welcome!","summary":"","title":"Contact","type":"page"},{"content":"","externalUrl":null,"permalink":"/series/","section":"Series","summary":"","title":"Series","type":"series"},{"content":"Your message has been sent.\n","externalUrl":null,"permalink":"/thanks-contact/","section":"Welcome!","summary":"","title":"Thank You!","type":"page"}]