
Is There a Limit on the Size of Created Hard Disk Images?

Posted: Fri Mar 06, 2015 5:39 pm
by Old-School-BBSer
Because my SheepShaver hard disk file was getting very low on available space -- I run a Hotline file server on it, among other things -- yesterday evening I was forced to create a new hard disk file, reinstall Mac OS 9.0.4, and then move everything over from the original hard disk file to the new one.

Initially, I wanted to make a 20 GB hard disk file. However, even though I made several attempts to do so in SheepShaver's preference pane, nothing happened.

Finally, I settled for just an 8 GB hard disk file, which it accepted and created.

So I am now wondering: is there a limit on the size of the disk that can be created? Is the emulator limited in this way?

Also, is there an easy way to make a .dsk hard drive file with Yosemite?

I tried using Disk Utility, but I found no option to make a .dsk image, and there doesn't appear to be any way to do so with the conversion button either.

I actually made a 20 GB DMG file, and then just changed the file extension to .dsk instead of .dmg, but that didn't work, even though some folks online claimed that it would work. SheepShaver did not recognize the disk when I loaded it into its volumes list.

Nor could I find any utilities online for OS X that would allow me to create .dsk files.

My thinking was that if there is a limitation on how big a hard drive file the emulator can create -- and I don't know that there is -- then maybe I could make one some other way, outside of SheepShaver, and then load the disk into its volumes list.

Re: Is There a Limit on the Size of Created Hard Disk Images

Posted: Fri Mar 06, 2015 6:22 pm
by kikyoulinux
I created a 4GB image using bximage (the raw disk image creator from Bochs), and it seems larger images won't be recognized and initialized by Mac OS...

Re: Is There a Limit on the Size of Created Hard Disk Images

Posted: Fri Mar 06, 2015 8:01 pm
by adespoton
There's an older discussion thread on here about this topic. The general rule of thumb is that System 7 and lower shouldn't have images greater than 2GB, and SS should stay at 8GB or lower, IIRC. I remember 4GB fitting in there somewhere too.

But you can have as many of them as you like.

Also, I believe most classic Mac OSes have a max file size of 2GB (meaning any given file in the filesystem has to be 2GB or less). I can't recall when that size limit was increased, but I remember writing larger DVI files to MacOS 9.0.4 from my digital video camera back in the day, so this limitation was fixed by then.

Re: Is There a Limit on the Size of Created Hard Disk Images

Posted: Fri Mar 06, 2015 8:43 pm
by Ronald P. Regensburg
The file name extension of the disk image file is irrelevant to SheepShaver. The image file will also work without any extension. If the volume is initialized HFS+ (in MacOS 8.1 or later) and the image file is given a .dmg extension, it can be mounted and used in both MacOS and OSX (though not at the same time). A read/write .dmg file created by Disk Utility will also work fine with SheepShaver and MacOS 8.1 or later.

Re: Is There a Limit on the Size of Created Hard Disk Images

Posted: Fri Mar 06, 2015 11:20 pm
by Old-School-BBSer
From everything that everybody is saying here, are you agreeing, Ronald, that the disk image will be recognized -- regardless of the extension -- as long as it is below 8 GB in size and formatted correctly?

I made the 20 GB disk image a read/write disk, Mac OS Extended, but not journaled, no encryption. I can't remember what I selected next to "Partitions". I think it was "Single Partition - Apple Partition Map".

Is that why SheepShaver did not recognize my 20 GB disk image? That is, because it was too big? That seems to be the consensus here.

Re: Is There a Limit on the Size of Created Hard Disk Images

Posted: Sat Mar 07, 2015 12:04 am
by Ronald P. Regensburg
You can use a 20 GB disk image with SheepShaver.

Create the disk image with Disk Utility as you described, but do not choose a specific partitioning scheme. The option "Hard disk" will do fine.
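
For anyone who prefers Terminal, an hdiutil command along these lines should give the same result as those Disk Utility steps (a rough sketch; the size, volume name, and path are just examples):

Code: Select all

# rough equivalent of the Disk Utility recipe above (size, name, and path are examples)
# -layout NONE means no partition map, i.e. no specific partitioning scheme
hdiutil create -size 20g -fs HFS+ -volname "MacOS9" -layout NONE ~/MacOS9.dmg

The resulting read/write image can then be added in SheepShaver's Volumes tab like any other disk file.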

Re: Is There a Limit on the Size of Created Hard Disk Images

Posted: Sat Mar 07, 2015 6:13 am
by Old-School-BBSer
Okay, I just realized that the way that I worded my previous comments did not accurately describe what happened.

What I should have really said is that after I made the 20 GB disk image with Disk Utility, I was able to select it and add it to SheepShaver's "Volumes" tab.

However, what I meant by it not being recognized by SheepShaver is that when I launched SheepShaver -- either directly or by using the Terminal script -- it did not mount on SheepShaver's desktop, and I was not presented with a dialog window asking me if I wanted to initialize it.

I tried several times, but to no avail. It was only when I randomly chose to make an 8 GB disk image that everything worked as expected, and I was able to install Mac OS 9.0.4 on it from Apple's Legacy Recovery Library disk image.

Anyway, I just tried again -- this time using "hard drive" as the partition option -- and after several attempts, I was finally able to install MacOS 9.0.4 on my 20 GB disk image, and it is booting properly.

Right now I am doing what I did on previous installations. That is, I am removing all of the unnecessary crap that Apple installs, and which I don't need, including all of that Microsoft junk, other unnecessary apps, extensions, control panels, Apple Menu items, print drivers, etc.

With this 20 GB hard drive, 512 MB of RAM, and a cleaned out system, Mac OS 9.0.4 is very snappy under SheepShaver. :)

Re: Is There a Limit on the Size of Created Hard Disk Images

Posted: Sat Mar 07, 2015 6:21 am
by Old-School-BBSer
BTW, for those of you who are wondering what is safe to remove from your OS in order to speed up your emulator, but still keep SheepShaver and Basilisk running properly, here is a very interesting page I just found:

http://www.yale.edu/acsca/macguide/Syst ... older.html

This page is for Mac OS 7.5, but I bet if you Google a bit, you will find similar pages for whatever version of Classic Mac you are using.

I didn't even find this page until after I removed all of the excess junk from my installation. I just read the descriptions for each item found in the Extensions Manager control panel, and then made a personal determination from that. :)

Re: Is There a Limit on the Size of Created Hard Disk Images

Posted: Wed Jun 22, 2016 1:59 pm
by sparcdr
Hi there.

Although it has been suggested that you can use dd to create larger volumes, this has not been the case for me. Instead, I was forced to create seven 4000MB (3.8GB) HDD volumes, using dd for the first and then simply copying and pasting the rest, adding each to the disk tree in the GUI.
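
(For reference, the dd invocation is along these lines on an OS X or Linux host -- a rough sketch with a placeholder output name; a Windows dd port takes similar arguments:)

Code: Select all

# rough sketch: create one blank 4000 MB raw image for SheepShaver
# bs=1m with OS X/BSD dd; use bs=1M with GNU dd on Linux
dd if=/dev/zero of=hd1.dsk bs=1m count=4000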

Creating a disk over 4GB (20GB, for example) had odd results, such as showing the option to format it as 2.7GB, which was not its physical size on the host. Anything north of 60GB (which can be found for, and is supported by, real PowerBook G4 units via aftermarket 2.5" 5400RPM drives) results in a "Select a volume to format" prompt with a size of 0 bytes.

As indicated, IDE ATA/66 (LBA32 vs. LBA48) is limited to 137GB due to 28-bit sector addressing; despite being called LBA32, it never truly used 32 bits. 120GB SSDs with the proper adapter can work with ATA/66, but it wouldn't be very easy to find an affordable solution these days. Since most Apple machines from around 1996 onward stopped using proper SCSI and shipped with IDE only in most (all?) models, there are really three things I consider about the limit.

Most ROM dumps are likely missing SCSI support for the PowerPC architecture (most SCSI use was during the M68K transition); although aftermarket drives and PCI cards were made, such a setup would likely be unbootable even if it were emulated in SheepShaver. SCSI back then only really got as big as 9GB (Cheetah / Ultra I on DEC tech) and later 73GB (Cheetah / Ultra II LVD on SGI tech; 146GB exists, but was uncommon and was/is quite expensive), while 80GB was the biggest mass-produced IDE disk that could be used in a practical sense. There are/were 250GB and 500GB variants out there, but the specs call for LBA48 or multiple partitions, depending on how the OS/firmware sees/maps the disk as a whole. I had my own experience with Pentium III era machines not being able to partition a 250GB IDE disk due to sector-bounds issues with the BIOS, so YMMV.

LBA48 is absent on the majority of systems we'd be emulating against in any practical case. My PowerBook G4 Titanium 667 is ATA/66, and it was made in 2002. The last PowerBook model capable of natively booting OS 9 without the FW800-to-FW400 hack was the 867MHz/1.0GHz G4 Titanium PowerBook, also from 2002 (M8859LL/A), and that too only had ATA/66, as it used the same chipset. The common denominator is that no Apple-supported machine on either the M68K or PowerPC architecture ever supported ATA-6 (LBA48), so implementing it would require low-level work on the OS, a custom boot loader, and associated firmware for an OS that never had source code available.

UATA specifications from Western Digital (ATA-6 / LBA48) were written in 2002 and likely took about two years after that to be implemented/adopted on PC platforms. Apple killed off Classic OS support in hardware in 2003, as those 2002 units were the last to run it, so that was never going to become a reality for us.
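
(For reference, the 137GB ceiling mentioned above is just the 28-bit sector math, assuming the standard 512-byte sector size:)

Code: Select all

# 28-bit LBA: 2^28 addressable sectors x 512 bytes per sector
echo "2^28 * 512" | bc
137438953472
# about 137 GB decimal, i.e. 128 GiB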

I have seven individual hard disk volumes because the CD-ROM also counts as one IDE device. SheepShaver, like the firmware (in accordance with the spec, partly due to the master/slave architecture), is limited to 8 volumes, and SheepShaver itself doesn't yet allow multiple IDE/SCSI controllers, so there really isn't a workaround except mapping host volumes using extfs or using Thursby's DAVE to mount network shares. So basically I'm limited to 26.9GB of space for the VM.
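
(For anyone going the extfs route: the volumes and the shared folder are just lines in SheepShaver's prefs file. A rough sketch -- the paths are placeholders; on OS X/Linux the file is usually ~/.sheepshaver_prefs, while the Windows build keeps an equivalent SheepShaver_prefs text file next to the executable:)

Code: Select all

disk /emu/hd1.dsk
disk /emu/hd2.dsk
disk /emu/hd3.dsk
extfs /emu/shared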

UPDATE: SheepShaver shows the volumes as floppy diskettes, regardless of size, when using the Macintosh New World ROM dump. It may be different for specific ROM images; can anyone confirm or refute that theory?

If I've made any omissions or mistakes, feel free to tell me so I can edit this post. Thanks.

Re: Is There a Limit on the Size of Created Hard Disk Images

Posted: Thu Jun 23, 2016 2:44 pm
by 24bit
Presumably there is a limit for SS virtual volumes, but it's way beyond 60GB.
(There is a small collection of fairly large images over at Macintoshgarden but the link is down atm due to migration of the server to a new more powerful home.)
Maybe the limit is 2TB, as set by the file system; I have no host HDD kicking around for testing right now.
Here is a 120GB beast for example:

[screenshot]

If you wish, I'll try to make a bz2 from it and post it somewhere.
I doubt such a big image will be of any use in daily work though. ;)

Edit: The bz2 is only 90 KB, so I will mail it to anybody in need of such a thing.
Note that it may take a very long time to inflate. :)

Edit2: The images for emulators are here, including the 120GB one: http://macintoshgarden.org/apps/disk-images-emulators

Re: Is There a Limit on the Size of Created Hard Disk Images

Posted: Fri Jun 24, 2016 3:15 pm
by sparcdr
This makes things much easier for me, as I have to test/repack many abandonware titles to verify their integrity, and 4GB of space per volume is really not enough. I hit up Macintosh Garden the day before they went into maintenance and also found some more cool things at Macintosh Repository for my nostalgia needs. My host uses an M.2 SSD (Micron M600, 512GB) and a conventional Evo 830 (256GB), as well as a 7200RPM SATA drive (~186MB/s), on a 4GHz 4790K CPU with 32GB of RAM running Windows 7 x64 SP1.

UPDATE: It didn't take too long, though it uses about 4GB of RAM (host cache, not bzip2 itself) during expansion. Due to the linear nature of this task, it held 128MB/s on average when I watched it in Resource Monitor. This sort of activity is not great for the health of an SSD, but the MTBF is decent these days, so I'm not too worried; the benefits outweigh the negatives in the case of emulation and other intensive operations. Thanks for pointing me to the files.

Code: Select all

powershell -Command {Measure-Command {start-process bunzip2 -argumentlist "--fast --keep 120GB.img_.bz2" -wait}}

Days              : 0
Hours             : 0
Minutes           : 12
Seconds           : 50
Milliseconds      : 638
Ticks             : 7706383863
TotalDays         : 0.00891942576736111
TotalHours        : 0.214066218416667
TotalMinutes      : 12.843973105
TotalSeconds      : 770.6383863
TotalMilliseconds : 770638.3863

Code: Select all

PS D:\> ls .\120GB_.img
    Directory: D:\
Mode                LastWriteTime     Length Name
-a----        6/24/2016  10:16 AM 1228800204 120GB_.img

Code: Select all

PS D:\> bc
bc 1.06
Copyright 1991-1994, 1997, 1998, 2000 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
1228800204 / 1024 / 1000 / 10
120

Code: Select all

PS D:\> bc
bc 1.06
Copyright 1991-1994, 1997, 1998, 2000 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
1228800204 / 1024 / ((12 * 60) + 50) / 10
155
Sustained rate is around 155mb/s.

Mac OS Standard (HFS) uses 1MB blocks according to http://macintoshgarden.org/apps/disk-images-emulators
[screenshot]

It was recommended to use HFS+ to get around this, but in actuality you cannot simply reformat the pre-created images. The following will happen:

Reformat as Mac OS Extended (HFS+)
[screenshot]

The same content now requires 194MB instead of 3179MB (16x less space than with plain HFS, due to block sizes), but you lose 114683MB, as the partition map presumably gets mangled.
[screenshot]

Reformatting as Mac OS Standard again will not restore the missing 114683MB -- nor will halting, rebooting, or another ROM -- and the image will still occupy 120GB of space on the host filesystem. The only point of using the 120GB image is for excessively large games, such as point-and-click adventures with cinematics, or titles with large single files; otherwise an 8KiB file will occupy 1MB apiece anyway. In theory you could mount the image as a loopback device under Linux, manually re-create the partition table with fdisk, and format it with hfsutils to make it HFS+, but that would just be a waste of effort.
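
(The block-size behavior is inherent to plain HFS: Mac OS Standard can address at most 65,535 allocation blocks per volume, so the minimum block size grows with the volume size. Some rough arithmetic with bc -- the volume sizes are just the ones discussed above, and the results are in the same ballpark as the 1MB figure quoted from the Garden page:)

Code: Select all

# HFS (Mac OS Standard) tops out at 65,535 allocation blocks per volume,
# so the minimum allocation block size scales with volume size (approximate)
echo "120 * 1024 * 1024 / 65535" | bc
1920
echo "30 * 1024 * 1024 / 65535" | bc
480
# roughly 1920 KB and 480 KB per allocation block, respectively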

From the disk images page, that-ben wrote:
In other words, if you're planning on using this 30GB drive in Mac OS 9, you should consider using the other file (HFS+) instead, it will contain MANY MORE files even if it's the same drive size as this one.
I will be using four 30GB volumes due to these limitations.

PS: I'm using the 2MB New World ROM now, and the disk shows up as "Internal Drive" with that image.

Re: Is There a Limit on the Size of Created Hard Disk Images

Posted: Fri Jun 24, 2016 4:37 pm
by adespoton
The benefit of using HFS instead of HFS+ is that all the pre-8.1 software can still be extracted to be used on older systems.

I had a 128GB volume I was using, but since I've got an OS X host, I switched to just putting it all in the shared folder for sorting/cleaning/verifying/repacking. MUCH less resource intensive that way.

Oh yes, make sure to clean all the WDEF-A and nVIR-B infections from the stuff you find in the sites you mentioned :) Otherwise, the problem will just get worse. A quick run of the latest version of Disinfectant should do the job nicely.

Re: Is There a Limit on the Size of Created Hard Disk Images

Posted: Fri Jun 24, 2016 4:40 pm
by sparcdr
adespoton wrote:The benefit of using HFS instead of HFS+ is that all the pre-8.1 software can still be extracted to be used on older systems.
I found that out last night and have one volume without Extended for that reason. The main thing that indicated an issue was errors on extraction and weird file names. SMIs can be problematic when transferring between different hosts, and sometimes it's wise to repack them in the guest but leave them archived, though fully extracted dsk and img files (floppy dumps) for the most part don't seem to have this problem. I'm using DiskCopy, Toast 5.2.1, and a mix of StuffIt 5.5 and 7.x to handle the many formats we encounter.

After spending many hours, I found it was better to extract things such as doubly wrapped Toast images, at the cost of more disk space, because the time it takes to decompress them is not worth it or efficient on a VM or an older G3/G4 unit. I may have to just get creative with my G4 Mini, set up AppleShare, and mount larger files over the LAN, but the smaller games/apps aren't as troublesome to juggle around using extfs. Some files are questionably saved as bin/cue, some as straight cdr (iso), some as mds/mdf (hybrids, which would require re-burning, as that is a proprietary Alcohol format). I'd like to bring more order to the chaos and have completed my first round of organizing so far.
adespoton wrote:Oh yes, make sure to clean all the WDEF-A and nVIR-B infections from the stuff you find in the sites you mentioned :) Otherwise, the problem will just get worse. A quick run of the latest version of Disinfectant should do the job nicely.
Good tip, thanks.

Re: Is There a Limit on the Size of Created Hard Disk Images

Posted: Fri Jun 24, 2016 6:19 pm
by adespoton
As someone who's spent over 20 years grooming classic Mac file sets ("classic" keeps getting newer), I feel your pain regarding format issues. It's even worse when you throw Windows into the mix; on OS X, the only issue you have is reading MFS images, which I get around by using a dedicated Mini vMac with the file transfer apps loaded.

You might want to install the HFS+ driver for Windows and create a partition that can handle resource forks; this is where the issues you're having arise from. The type/creator codes for Mac files are in the resource fork, so when you save them to a FAT, EXTFS or NTFS partition, they lose all that data. Further, some variants of SMI and SIT store actual data in the resource fork as well -- I believe Compact Pro might also do this. For these, unless you can re-create the fork by hand (the data lost is about the layout of the data fork, not the actual data that's compressed), the data inside is basically inaccessible. Of course, any executable code is stored exclusively in the resource fork, so letting uncompressed software touch a non-HFS/HFS+ partition is a bad idea (even via HFV Explorer, which usually, but not always, splits the forks out and recombines them when written back to an image).

You've also come across an alternate method that works: store the data on mixed-mode CD/DVD images. This can get a tad unwieldy though, which is why I eventually switched to storing everything on an HFS+ volume and setting the shared folder path to that location.

Re: Is There a Limit on the Size of Created Hard Disk Images

Posted: Fri Jun 24, 2016 11:32 pm
by sparcdr
With the exception of MFS, I'm not too worried about my data not working with pre-8.1 environments. The majority of the hybrids are, as I indicated, click-through adventures and cinematically heavy games/education software. For the most part, if you leave the bottom-most level of archive (.sit.bin -> .sit, or similar) alone as it was, or repackage it properly using StuffIt Deluxe 5.5 after cleaning things up (given the mention of fork bombs/trojans, though most of my collection is post-1995, pre-2000 so far, which postdates the two viruses specifically mentioned), it generally isn't an issue to move things around, even with Windows hosts.

For the sake of that possibility, and given the risks, I have set up an HFS+ volume on an external USB HDD (despite the lackluster 1.1 or 2.0 speeds), with the exception of the larger images, and have shared it from my Mac mini G4 1.42 using AppleShare over TCP/IP. I was having random problems with DiskCopy mounting .img/.dsk files, and sometimes simply had to do a trivial reboot; those formats, while at first glance practical, have so far caused me far more problems than StuffIt archives, due to mounting and checksum errors alone. As I get further into testing everything that has been archived, maybe I'll learn something more about the reasons as I identify other common causes which have not yet been mentioned.

Adespoton's observation about the integrity of resource forks is the most important point, and that is the most common cause of many of these problems. Though I know better than to extract the archives down to the level where the binaries themselves are, even that rule has not held up to the expectations I had initially. I learned that it's better not to assume the outcome, regardless of the original format, or to assume it will be simple to transfer the files without worry.

I am conventionally a UNIX power user and hobbyist developer, but do use Windows in a professional sense as well as OS X. I own DEC, SGI and Sun gear, and only experienced classic in my grade school years, so some habits I have developed from any of those can be outright wrong when working with these vintage systems. Part of reminiscing and resurrecting for the nostalgia value about these systems involves throwing away a lot of modern mentality about the want/need for things to "Just work". It's all worth it to me in the end to get a better understanding of how things work, and that's why many of us still bother, even when others think we're simply just crazy for it.


Re: Is There a Limit on the Size of Created Hard Disk Images

Posted: Mon Jun 27, 2016 3:34 am
by sparcdr
I'm adding some things I have learned while going through the collection -- precautions and results based on the information already provided and on actual effort.

Many IMG and DSK files use resource forks and should not be unpacked, even if you feel the need to, unless you repack them in a classic or OS X environment on HFS+; otherwise they could, and mostly will, fail to mount again. IMG and DSK files don't consistently mount or verify their integrity, if at all, when the host is using NTFS via extfs. In this case they again must be copied over, and again kept archived as they were. Some IMG/DSK files that otherwise throw issues when you extract them (against the warning I ignored) may mount without issue on a native OS X platform using HFS+ (10.4.11/PPC), but most will not after you have gone this far.

Toast files don't appear to mount correctly using extfs until they are copied to the machine, regardless of whether they are over 4GB or not. Basilisk II/SheepShaver on Windows must use SLIRP and as a result can't cross network boundaries, since it is hardcoded to use the 10.0.2.0/16 net range, which rules out SMB/CIFS via DAVE 4.0 and AFP as well.

A crossover cable on a second dedicated ethernet adapter on the host will not necessarily work either, depending on the priority of the network devices and which one the emulator typically picks. TUNTAP is supported on OS X, though reports suggest some issues above 10.7 (Lion); its selling point is the flexibility to get around many of these issues, since it virtualizes one interface, bridges another, and permits setting specific network information that would otherwise be fixed.

TUNTAP on FreeBSD and Linux platforms is unaffected by the reported issues. AFP support was removed in 10.6, when Apple made OS X Intel-only, but other FOSS systems still bundle or provide Netatalk, such as Slackware Linux as recently as 14.2. Using *NIX hosts with Netatalk behind the same switch, despite different netmasks/IPs, will generally work even if the host itself is serving the OS as a VM, so long as the guest network is in promiscuous mode (firewall permitting) to allow the flow of packets with varied source/destination routes and other details that are typically permitted -- though not so much in a datacenter setting. The underlying filesystem where the share lives must be HFS+ because of the resource forks, and there are no promises that Samba or native NT file mappings will not destroy them, just as on other platforms, owing to the transitive nature of the CIFS/SMB protocols, which are designed to maintain compatibility with a range of systems unable to use HFS+ characters such as the colon.
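
(For the Netatalk route, a minimal share definition is only a few lines. A rough sketch for Netatalk 3.x -- the share name and path are placeholders, the afp.conf location varies by distro, and older 2.x installs use AppleVolumes.default/afpd.conf instead. The path should point at storage that preserves resource forks, per the note above:)

Code: Select all

; rough sketch of afp.conf (Netatalk 3.x); share name and path are placeholders
[Global]
; defaults are fine for a small LAN

[ClassicShare]
path = /srv/classic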

In regard to the third-party AFP options formerly available on Windows, many vendors ceased development around the time Microsoft began encouraging secure mode via (U)EFI with OEMs, just before the release of Windows 7, and also around the time Steve Jobs declared FireWire dead in 2008. The key-signing requirements, and the inability to load unsigned drivers at the same time, accelerated the decision of many OEMs writing drivers to give up on the effort. Other systems which implemented AFP over TCP/IP as a userland process still work natively today, even though largely unchanged since that time.

Workarounds for Windows 7 and Windows Server 2008 users attempting to use older options that were not designed for their system, or that had expired signing certificates, are similar to those needed to load other unsigned or experimental drivers; emulators such as Charon AXP+ and the former PS3 Sixaxis controller driver have also had to use insecure boot mode. Just as Microsoft has Services for UNIX, it previously had Services for Macintosh back in the NT 4.0 days. Microsoft gave up on Apple Filing Protocol support in 2000, just after the release of 2000 Professional/Server, but continued supporting it in Windows Server 2003. Those systems, while using version 2.2 of the protocol, can still provide this support under virtualization today, without Kerberos/Active Directory support. [1] [2]

For Windows users using older non-virtualized solutions, this has meant modifying the BCD (using bcdedit itself, or a tool such as EasyBCD, which abstracts the MBR-vs-EFI details and focuses on the needed changes to the flags) to permanently leave the system in development/unsigned mode, and then patching the shell32 DLLs using the universal watermark removal tool (UWD) to make it less apparent that the host system is in unsigned/developer mode.
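
(The bcdedit side of that is just a couple of flags run from an elevated prompt; a rough sketch for Windows 7 -- and note the anti-cheat caveat in the next paragraph:)

Code: Select all

rem rough sketch, elevated prompt on Windows 7: allow test-signed/unsigned drivers to load
bcdedit /set testsigning on
bcdedit /set nointegritychecks on
rem reboot afterwards; setting both flags back to "off" reverts the change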

Hosts booted into unsigned mode cannot play online with some modern games, such as those employing Valve Anti-Cheat (VAC), BattlEye (ARMA 2/3), or EasyAntiCheat (Rust), because kernel debuggers are not detected at the level typically expected by the detection engine. As this is a concern for the maintainers, the anti-cheat engines will refuse to initialize once they know the system isn't in secure mode, and as a user you end up simply unable to join any server protected by these measures. So if you need to do this, assume the machine is dedicated to running emulators for legacy operating systems or to purely computational tasks.

AFP shares hosted from a real machine (my G4 / 10.4.11), even when connected via the same switch on the same LAN, did not solve the issue, due to the network topology/broadcast problems. Connecting directly in either direction, even though the host is technically aware of the VM, did not work either, and I didn't expect it to, since subnets were created to allow more nodes to connect to a given network/range/domain/topology at the cost of isolating the others.

Most Toast images can be converted to ISO and then mounted by Basilisk II / SheepShaver directly, requiring a halt each time so they appear physically attached to the VM, since the software can't make real-time changes to mounted devices. Archives in .bin, .bin.hqx, .sit.bin, etc. should be copied over and extracted on the VM itself every single time. Converting a Toast image to an iso/cdr will not magically fix the issue, mentioned above, where it can't be mounted using extfs. The remaining bugs trigger issues with the archivers, which then throw "An unexpected error prevented mounting this volume" or similar.
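
(On an OS X host, the Toast-to-ISO conversion is a one-liner with hdiutil; a rough sketch with placeholder file names -- the UDTO output gets a .cdr extension, which can simply be renamed:)

Code: Select all

# rough sketch on an OS X host (file names are placeholders)
hdiutil convert game.toast -format UDTO -o game
mv game.cdr game.iso   # .cdr is a raw disc image; renaming it to .iso is enough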

This reply is mainly for those using Basilisk II or SheepShaver, but it may also apply to those running actual hardware in an environment with similar constraints. While most of these issues can be worked around, the most aggravating of them all is that Toast images cannot be mounted directly when using extfs to expose host mounts where the host format is NTFS or FAT32, by derivative limitation. The result is a worn-out disk ruined before its time. Amusingly, you will need a new mouse in any case after you get done organizing, as the sheer volume of files requires thousands of clicks at minimum.

I hope this thread helps someone.

[1] http://www.acronis.com/sites/default/pu ... 072011.pdf
[2] https://technet.microsoft.com/en-us/lib ... s.10).aspx