Sun Solaris 10 Download X86 Dvd Iso

Solaris 10 runs on Oracle and Sun SPARC and x86 systems and on Fujitsu SPARC64 systems; see the Hardware Compatibility List for non-Oracle x86 systems. Download the .iso of Solaris 10 and write it to a DVD or to the 5-CD pack. For example, on a SPARC Solaris 10 Update 9 system you will see an s10s_u9wos entry, while an x86 Solaris 10 Update 8 system shows s10x_u8wos. Download one of the files - say sol-10-u11-companion-ga.iso.bz2 - to a directory such as /tmp, cd to /tmp, and run bunzip2.
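For instance, unpacking the companion ISO and mounting it loopback with lofiadm (the /dev/lofi/1 device and the /mnt mount point are just examples):

# cd /tmp
# bunzip2 sol-10-u11-companion-ga.iso.bz2
# lofiadm -a /tmp/sol-10-u11-companion-ga.iso
/dev/lofi/1
# mount -F hsfs -o ro /dev/lofi/1 /mnt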

• Per directory: 2^48 entries • Per file system: unlimited • Max. filename length: 255 characters (fewer for some multibyte character encodings) • Forks: yes (called 'extended attributes', but they are full-fledged streams) • Attributes: POSIX, NFSv4 ACLs • Transparent compression: yes • Supported on Solaris, illumos distributions, FreeBSD, Mac OS X Server 10.5 (read-only support only), and Linux (via a third-party kernel module or ZFS-FUSE). ZFS is a combined file system and logical volume manager designed by Sun Microsystems. The features of ZFS include protection against data corruption, support for high storage capacities, efficient data compression, integration of the concepts of filesystem and volume management, snapshots and copy-on-write clones, continuous integrity checking and automatic repair, RAID-Z, and native NFSv4 ACLs. The ZFS name is registered as a trademark of Oracle; although it was briefly given the expanded name 'Zettabyte File System', it is no longer considered an initialism.

Originally, ZFS was proprietary, developed internally by Sun as part of Solaris, with a team led by the CTO of Sun's storage business unit and Sun Fellow, Jeff Bonwick. In 2005, the bulk of Solaris, including ZFS, was licensed as open-source software under the Common Development and Distribution License (CDDL), as part of the OpenSolaris project. ZFS became a standard feature of Solaris 10 in June 2006. In 2010, Oracle stopped releasing source code for new OpenSolaris and ZFS development, effectively forking its closed-source development from the open-source branch. In response, OpenZFS was created as a new open-source development umbrella project, aiming at bringing together individuals and companies that use the ZFS filesystem in an open-source manner.

ZFS compared to most other file systems [ ] Historically, the management of stored data has involved two aspects — the physical management of block devices such as hard disks and flash storage, and of devices such as RAID controllers that present a logical single device based upon multiple physical devices (often undertaken by a RAID controller, array manager, or suitable device driver), and the management of files stored as logical units on these logical block devices (a file system). Example: A RAID array of 2 hard drives and an SSD caching disk is controlled by Intel's RST system, part of the chipset and firmware built into a desktop computer. The user sees this as a single volume, containing an NTFS-formatted drive of their data, and NTFS is not necessarily aware of the manipulations that may be required (such as reading from or writing to the cache drive, or rebuilding the array if a disk fails). The management of the individual devices and their presentation as a single device is distinct from the management of the files held on that apparent device. ZFS is unusual because, unlike most other storage systems, it unifies both of these roles and acts as both the volume manager and the file system. Therefore, it has complete knowledge of both the physical disks and volumes (including their condition and status, their logical arrangement into volumes, and also of all the files stored on them). ZFS is designed to ensure (subject to suitable hardware) that data stored on disks cannot be lost due to physical errors or misprocessing by the hardware or operating system, or bit rot and data corruption events which may happen over time, and its complete control of the storage system is used to ensure that every step, whether related to file management or disk management, is verified, confirmed, corrected if needed, and optimized, in a way that storage controller cards and separate volume and file managers cannot achieve.

ZFS also includes a mechanism for snapshots and replication, including snapshot cloning; the former is described in the documentation as one of its 'most powerful features', having features that 'even other file systems with snapshot functionality lack'. Very large numbers of snapshots can be taken, without degrading performance, allowing snapshots to be used prior to risky system operations and software changes, or an entire production ('live') file system to be fully snapshotted several times an hour, in order to mitigate data loss due to user error or malicious activity. Snapshots can be rolled back 'live', or the file system viewed as it was at previous points in time, even on very large file systems, leading to 'tremendous' savings in comparison to formal backup and restore processes, or cloned 'on the spot' to form new independent file systems. Summary of key differentiating features [ ] Examples of features specific to ZFS which facilitate its objective include: • Designed for long term storage of data, and indefinitely scaled datastore sizes with zero data loss, and high configurability.

• Hierarchical checksumming of all data and metadata, ensuring that the entire storage system can be verified on use, and confirmed to be correctly stored, or remedied if corrupt. Checksums are stored with a block's parent block, rather than with the block itself. This contrasts with many file systems where checksums (if held) are stored with the data, so that if the data is lost or corrupt, the checksum is also likely to be lost or incorrect. • Can store a user-specified number of copies of data or metadata, or selected types of data, to improve the ability to recover from data corruption of important files and structures. • Automatic rollback of recent changes to the file system and data, in some circumstances, in the event of an error or inconsistency. • Automated and (usually) silent self-healing of data inconsistencies and write failures when detected, for all errors where the data is capable of reconstruction.

Data can be reconstructed using all of the following: error detection and correction checksums stored in each block's parent block; multiple copies of data (including checksums) held on the disk; write intentions logged on the SLOG (ZIL) for writes that should have occurred but did not occur (after a power failure); parity data from RAID/RAID-Z disks and volumes; copies of data from mirrored disks and volumes. • Native handling of standard RAID levels and additional ZFS RAID layouts ('RAID-Z'). The RAID-Z levels stripe data across only the disks required, for efficiency (many RAID systems stripe indiscriminately across all devices), and checksumming allows rebuilding of inconsistent or corrupted data to be minimised to those blocks with defects; • Native handling of tiered storage and caching devices, which is usually a volume-related task. Because ZFS also understands the file system, it can use file-related knowledge to inform, integrate and optimize its tiered storage handling, which a separate device cannot; • Native handling of snapshots and backup/replication, which can be made efficient by integrating the volume and file handling.

ZFS can routinely take snapshots several times an hour of the data system, efficiently and quickly. (Relevant tools are provided at a low level and require external scripts and software for utilization.) • Native data compression and deduplication, although the latter is largely handled in RAM and is memory hungry. • Efficient rebuilding of RAID arrays — a RAID controller often has to rebuild an entire disk, but ZFS can combine disk and file knowledge to limit any rebuilding to data which is actually missing or corrupt, greatly speeding up rebuilding; • Ability to identify data that would have been found in a cache but has been discarded recently instead; this allows ZFS to reassess its caching decisions in light of later use and facilitates very high cache-hit levels; • Alternative caching strategies can be used for data that would otherwise cause delays in data handling.

For example, synchronous writes which are capable of slowing down the storage system can be converted to asynchronous writes by being written to a fast separate caching device, known as the SLOG (sometimes called the ZIL – ZFS Intent Log). • Highly tunable – many internal parameters can be configured for optimal functionality. • Can be used for clusters and computing, although not fully designed for this use. Inappropriately specified systems [ ] Unlike many file systems, ZFS is intended to work in a specific way and towards specific ends. It expects or is designed with the assumption of a specific kind of hardware environment. If the system is not suitable for ZFS, then ZFS may underperform significantly.

ZFS developers Calomel stated in their 2017 ZFS benchmarks that: 'On mailing lists and forums there are posts which state ZFS is slow and unresponsive. We have shown in the previous section you can get incredible speeds out of the file system if you understand the limitations of your hardware and how to properly setup your raid. We suspect that many of the objectors of ZFS have setup their ZFS system using slow or otherwise substandard I/O subsystems.' Common system design failures: • Inadequate RAM — ZFS may use a large amount of memory in many scenarios; • Inadequate disk free space — ZFS uses copy-on-write for data storage; its performance may suffer if the disk pool gets too close to full. Around 70% is a recommended limit for good performance. Above a certain percentage, typically set to around 80%, ZFS switches to a space-conserving rather than speed-oriented approach, and performance plummets as it focuses on preserving working space on the volume; • No efficient dedicated SLOG device when synchronous writing is prominent — this is notably the case for NFS shares and databases; even SSD-based systems may need a separate SLOG device for the expected performance. The SLOG device is only used for writing, apart from when recovering from a system error.

It can often be small (for example, the SLOG device only needs to store the largest amount of data likely to be written in about 10 seconds (or the size of two 'transaction groups'), although it can be made larger to allow a longer lifetime of the device). SLOG is therefore unusual in that its main criteria are pure write functionality, low latency, and loss protection – usually little else matters. • Lack of suitable caches, or misdesigned caches — for example, ZFS can cache read data in RAM ('ARC') or on a separate device ('L2ARC'); in some cases adding extra ARC is needed, in other cases adding extra L2ARC is needed, and in some situations adding extra L2ARC can even degrade performance, by forcing RAM to be used for the L2ARC's index entries, at the cost of less room for data in the ARC. • Use of hardware RAID cards, perhaps in the mistaken belief that these will 'help' ZFS. While routine for other file systems, ZFS handles RAID natively, and is designed to work with a raw and unmodified view of storage devices, so it can fully use its functionality.

A separate RAID card may leave ZFS less efficient and reliable. For example, ZFS checksums all data, but most RAID cards will not do this as effectively, or for cached data. Separate cards can also mislead ZFS about the state of data, for example after a crash or power loss, or by mis-signalling exactly when data has safely been written, and in some cases this can lead to issues and data loss. Separate cards can also slow down the system, sometimes greatly, by adding overhead to every data read/write operation, or by undertaking full rebuilds of damaged arrays where ZFS would have only needed to do minor repairs of a few seconds. • Use of poor quality components – Calomel identify poor quality RAID and network cards as common culprits for low performance. • Poor configuration/tuning – ZFS options allow for a wide range of tuning, and mis-tuning can affect performance.

For example, suitable memory caching parameters for file shares over NFS or SMB are likely to be different from those required for block access shares using iSCSI or Fibre Channel. A memory cache that would be appropriate for the former can cause timeout errors and start-stop issues as data caches are flushed - because the time permitted for a response is likely to be much shorter on these kinds of connections, the client may believe the connection has failed if there is a delay due to 'writing out' a large cache. Similarly, an inappropriately large in-memory write cache can cause 'freezing' (without timeouts) on file share protocols, even when the connection does not time out. One major feature that distinguishes ZFS from other file systems is that it is designed with a focus on data integrity by protecting the user's data on disk against silent data corruption caused by data degradation, current spikes, bugs in disk firmware, phantom writes (the previous write did not make it to disk), misdirected reads/writes (the disk accesses the wrong block), DMA parity errors between the array and server memory or from the driver (since the checksum validates data inside the array), driver errors (data winds up in the wrong buffer inside the kernel), accidental overwrites (such as swapping to a live file system), etc. A 2012 research study showed that neither any of the then-major and widespread filesystems nor hardware RAID (which has its own data integrity issues) provided sufficient protection against data corruption problems. Initial research indicates that ZFS protects data better than earlier efforts.

It is also faster than UFS and can be seen as its replacement. ZFS data integrity [ ] For ZFS, data integrity is achieved by using a checksum or a hash throughout the file system tree. Each block of data is checksummed and the checksum value is then saved in the pointer to that block—rather than at the actual block itself. Next, the block pointer is checksummed, with the value being saved at its pointer. This checksumming continues all the way up the file system's data hierarchy to the root node, which is also checksummed, thus creating a Merkle tree.

In-flight data corruption or phantom reads/writes (the data written/read checksums correctly but is actually wrong) are undetectable by most filesystems as they store the checksum with the data. ZFS stores the checksum of each block in its parent block pointer so the entire pool self-validates. When a block is accessed, regardless of whether it is data or meta-data, its checksum is calculated and compared with the stored checksum value of what it 'should' be.

If the checksums match, the data are passed up the programming stack to the process that asked for them; if the values do not match, then ZFS can heal the data if the storage pool provides data redundancy (such as with internal mirroring), assuming that the copy of data is undamaged and with matching checksums. It is optionally possible to provide additional in-pool redundancy by specifying copies=2 (or copies=3 or more), which means that data will be stored twice (or three times) on the disk, effectively halving (or, for copies=3, reducing to one third) the storage capacity of the disk. Additionally, some kinds of data used by ZFS to manage the pool are stored multiple times by default for safety, even with the default copies=1 setting. If other copies of the damaged data exist or can be reconstructed from checksums and data, ZFS will use a copy of the data (or recreate it via a RAID recovery mechanism), and recalculate the checksum—ideally resulting in the reproduction of the originally expected value.
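The copies property mentioned above is an ordinary per-dataset setting; for example (the dataset name is illustrative):

# zfs create rpool/important
# zfs set copies=2 rpool/important    # keep two copies of every block in this dataset
# zfs get copies rpool/important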

If the data passes this integrity check, the system can then update all faulty copies with known-good data and redundancy will be restored. RAID [ ] ZFS and hardware RAID [ ] If the disks are connected to a RAID controller, it is most efficient to configure it as a JBOD in pass-through mode (i.e. turn off the RAID function). If a hardware RAID card is used, ZFS always detects all data corruption but cannot always repair data corruption because the hardware RAID card will interfere. Therefore, the recommendation is to not use a hardware RAID card, or to flash a hardware RAID card into JBOD/IT mode. For ZFS to be able to guarantee data integrity, it needs to either have access to a RAID set (so all data is copied to at least two disks), or, if one single disk is used, ZFS needs to enable redundancy (copies) which duplicates the data on the same logical drive. Using ZFS copies is a good feature to use on notebooks and desktop computers, since the disks are large and it at least provides some limited redundancy with just a single drive.

There are several reasons why it is better to rely solely on ZFS, using several independent disks and RAID-Z or mirroring. When using hardware RAID, the controller usually adds controller-dependent data to the drives which prevents software RAID from accessing the user data. While it is possible to read the data with a compatible hardware RAID controller, this inconveniences consumers as a compatible controller usually isn't readily available. Using the JBOD/RAID-Z combination, any disk controller can be used to resume operation after a controller failure. Note that hardware RAID configured as JBOD may still detach drives that do not respond in time (as has been seen with many energy-efficient consumer-grade hard drives), and as such, may require TLER/CCTL/ERC-enabled drives to prevent drive dropouts. Software RAID using ZFS [ ] ZFS offers software RAID through its RAID-Z and mirroring organization schemes. RAID-Z is a data/parity distribution scheme like RAID 5, but uses dynamic stripe width: every block is its own RAID stripe, regardless of blocksize, resulting in every RAID-Z write being a full-stripe write.

This, when combined with the copy-on-write transactional semantics of ZFS, eliminates the write hole error. RAID-Z is also faster than traditional RAID 5 because it does not need to perform the usual read-modify-write sequence. As all stripes are of different sizes, RAID-Z reconstruction has to traverse the filesystem metadata to determine the actual RAID-Z geometry.

This would be impossible if the filesystem and the RAID array were separate products, whereas it becomes feasible when there is an integrated view of the logical and physical structure of the data. Going through the metadata means that ZFS can validate every block against its 256-bit checksum as it goes, whereas traditional RAID products usually cannot do this. In addition to handling whole-disk failures, RAID-Z can also detect and correct silent data corruption, offering 'self-healing data': when reading a RAID-Z block, ZFS compares it against its checksum, and if the data disks did not return the right answer, ZFS reads the parity and then figures out which disk returned bad data. Then, it repairs the damaged data and returns good data to the requestor.

RAID-Z does not require any special hardware: it does not need NVRAM for reliability, and it does not need write buffering for good performance. With RAID-Z, ZFS provides fast, reliable storage using cheap, commodity disks. There are three different RAID-Z modes: RAID-Z1 (similar to RAID 5, allows one disk to fail), RAID-Z2 (similar to RAID 6, allows two disks to fail), and RAID-Z3 (Also referred to as RAID 7 allows three disks to fail). The need for RAID-Z3 arose recently because RAID configurations with future disks (say, 6–10 TB) may take a long time to repair, the worst case being weeks. During those weeks, the rest of the disks in the RAID are stressed more because of the additional intensive repair process and might subsequently fail, too. By using RAID-Z3, the risk involved with disk replacement is reduced. Mirroring, the other ZFS RAID option, is essentially the same as RAID 1, allowing any number of disks to be mirrored.
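Creating these layouts is done at pool-creation time; a minimal sketch, with made-up Solaris device names:

# zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0   # double-parity RAID-Z2
# zpool create mpool mirror c2t0d0 c2t1d0                              # two-way mirror
# zpool status tank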

Like RAID 1, it also allows faster read and resilver/rebuild speeds, since all drives can be used simultaneously and data is not calculated separately, and mirrored vdevs can be split to create identical copies of the pool. Resilvering and scrub [ ] ZFS has no tool equivalent to fsck (the standard Unix and Linux data checking and repair tool for file systems).

Instead, ZFS has a built-in 'scrub' function which regularly examines all data and repairs silent corruption and other problems. Some differences are: • fsck must be run on an offline filesystem, which means the filesystem must be unmounted and is not usable while being repaired, while scrub is designed to be used on a mounted, live filesystem, and does not need the ZFS filesystem to be taken offline. • fsck usually only checks metadata (such as the journal log) but never checks the data itself. This means, after an fsck, the data might still not match the original data as stored.

• fsck cannot always validate and repair data when checksums are stored with data (often the case in many file systems), because the checksums may also be corrupted or unreadable. ZFS always stores checksums separately from the data they verify, improving reliability and the ability of scrub to repair the volume. ZFS also stores multiple copies of data – metadata in particular may have upwards of 4 or 6 copies (multiple copies per disk and multiple disk mirrors per volume), greatly improving the ability of scrub to detect and repair extensive damage to the volume, compared to fsck. • scrub checks everything, including metadata and the data.
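A scrub is started and monitored with the ordinary pool commands; assuming a pool named tank:

# zpool scrub tank          # start a background scrub of every block in the pool
# zpool status -v tank      # shows scrub progress and any checksum errors found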

The effect can be observed by comparing fsck to scrub times – sometimes a fsck on a large RAID completes in a few minutes, which means only the metadata was checked. Traversing all metadata and data on a large RAID takes many hours, which is exactly what scrub does. The official recommendation from Sun/Oracle is to scrub enterprise-level disks once a month, and cheaper commodity disks once a week. Capacity [ ] ZFS is a 128-bit file system, so it can address 1.84 × 10^19 times more data than 64-bit systems. The maximum limits of ZFS are designed to be so large that they should never be encountered in practice. For instance, fully populating a single zpool with 2^128 bits of data would require 10^24 3 TB hard disk drives. Some theoretical limits in ZFS are: • 2^48: number of entries in any individual directory • 16 exbibytes (2^64 bytes): maximum size of a single file • 16 exbibytes: maximum size of any attribute • 256 quadrillion zebibytes (2^128 bytes): maximum size of any zpool • 2^56: number of attributes of a file (actually constrained to 2^48 for the number of files in a directory) • 2^64: number of devices in any zpool • 2^64: number of zpools in a system • 2^64: number of file systems in a zpool Encryption [ ] With Oracle Solaris, the encryption capability in ZFS is embedded into the I/O pipeline.

During writes, a block may be compressed, encrypted, checksummed and then deduplicated, in that order. The policy for encryption is set at the dataset level when datasets (file systems or ZVOLs) are created. The wrapping keys provided by the user/administrator can be changed at any time without taking the file system offline.
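On Oracle Solaris this is exposed through ordinary dataset properties; a sketch assuming a passphrase-protected dataset (the dataset name is illustrative, and the OpenZFS ports use different property names such as keyformat and keylocation):

# zfs create -o encryption=on -o keysource=passphrase,prompt rpool/secret   # prompts for a passphrase
# zfs key -c rpool/secret     # change (rewrap) the wrapping key without re-encrypting existing data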

The default behaviour is for the wrapping key to be inherited by any child data sets. The data encryption keys are randomly generated at dataset creation time. Only descendant datasets (snapshots and clones) share data encryption keys. A command to switch to a new data encryption key for the clone or at any time is provided—this does not re-encrypt already existing data, instead utilising an encrypted master-key mechanism. Other features [ ] Storage devices, spares, and quotas [ ] Pools can have hot spares to compensate for failing disks. When mirroring, block devices can be grouped according to physical chassis, so that the filesystem can continue in the case of the failure of an entire chassis. Storage pool composition is not limited to similar devices, but can consist of ad-hoc, heterogeneous collections of devices, which ZFS seamlessly pools together, subsequently doling out space to diverse filesystems [ ] as needed.

Arbitrary storage device types can be added to existing pools to expand their size. The storage capacity of all vdevs is available to all of the file system instances in the zpool. A quota can be set to limit the amount of space a file system instance can occupy, and a reservation can be set to guarantee that space will be available to a file system instance. Caching mechanisms: ARC (L1), L2ARC, Transaction groups, SLOG (ZIL) [ ] ZFS uses different layers of disk cache to speed up read and write operations. Ideally, all data should be stored in RAM, but that is usually too expensive. Therefore, data is automatically cached in a hierarchy to optimize performance versus cost; these are often called 'hybrid storage pools'. Frequently accessed data will be stored in RAM, and less frequently accessed data can be stored on slower media, such as solid-state drives (SSDs).
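The quota and reservation described above are per-dataset properties; for example (dataset names are illustrative):

# zfs set quota=50G tank/home/alice          # this file system may never grow beyond 50 GB
# zfs set reservation=10G tank/home/alice    # and is always guaranteed at least 10 GB of the pool
# zfs get quota,reservation tank/home/alice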

Data that is not often accessed is not cached and left on the slow hard drives. If old data is suddenly read a lot, ZFS will automatically move it to SSDs or to RAM. ZFS caching mechanisms include one each for reads and writes, and in each case, two levels of caching can exist, one in computer memory (RAM) and one on fast storage (usually (SSDs)), for a total of four caches.

First-level read cache, stored in RAM: known as the ARC, due to its use of a variant of the adaptive replacement cache (ARC) algorithm. RAM will always be used for caching, thus this level is always present.

The efficiency of the ARC means that disks will often not need to be accessed, provided the ARC size is sufficiently large. If RAM is too small there will hardly be any ARC at all; in this case, ZFS always needs to access the underlying disks which impacts performance considerably.

First-level write cache, held in RAM: handled by means of 'transaction groups' – writes are collated over a short period (typically 5-30 seconds) up to a given limit, with each group being written to disk ideally while the next group is being collated. This allows writes to be organized more efficiently for the underlying disks, at the risk of minor data loss of the most recent transactions upon power interruption or hardware fault. In practice the power loss risk is avoided by ZFS write handling and by the SLOG/ZIL second-tier write cache pool (see below), so writes will only be lost if a write failure happens at the same time as a total loss of the second-tier SLOG pool, and then only when settings related to synchronous writing and SLOG use are set in a way that would allow such a situation to arise. If data is received faster than it can be written, data receipt is paused until the disks can catch up. Second-level read cache, on fast storage devices (which can be added or removed from a 'live' system without disruption in current versions of ZFS, although not always in older versions): known as L2ARC ('Level 2 ARC'), optional. ZFS will cache as much data in L2ARC as it can, which can be tens or hundreds of gigabytes in many cases. L2ARC will also considerably speed up deduplication if the entire deduplication table can be cached in L2ARC.

It can take several hours to fully populate the L2ARC from empty (before ZFS has decided which data are 'hot' and should be cached). If the L2ARC device is lost, all reads will go out to the disks, which slows down performance, but nothing else will happen (no data will be lost). Second-level write cache, on fast storage devices: known as the SLOG or ZIL ('ZFS Intent Log'); optional, but an SLOG will be created on the main storage devices if no cache device is provided. This is the second-tier write cache, and is often misunderstood. Strictly speaking, ZFS does not use the SLOG device to cache its disk writes. Rather, it uses the SLOG to ensure writes are captured to a permanent storage medium as quickly as possible, so that in the event of power loss or write failure, no data which was acknowledged as written will be lost. The SLOG device allows ZFS to speedily store writes and quickly report them as written, even for storage devices such as hard disk drives (HDDs) that are much slower.

In the normal course of activity, the SLOG is never referred to or read, and it does not act as a cache; its purpose is to safeguard during the few seconds taken for collation and 'writing out', in case the eventual write were to fail. If all goes well, then the storage pool will be updated at some point within the next 5 to 60 seconds, when the current transaction group is written out to disk (see above), at which point the saved writes on the SLOG will simply be ignored and overwritten. If the write eventually fails, or the system suffers a crash or fault preventing its writing, then ZFS can identify all the writes that it has confirmed were written, by reading back the SLOG (the only time it is read from), and use this to completely repair the data loss. This becomes crucial if a large number of synchronous writes take place (such as with, and some ), where the client requires confirmation of successful writing before continuing its activity; the SLOG allows ZFS to confirm writing is successful much more quickly than if it had to write to the main store every time, without the risk involved in misleading the client as to the state of data storage. If there is no SLOG device then part of the main data pool will be used for the same purpose, although this is slower. If the log device itself is lost, it is possible to lose the latest writes, therefore the log device should be mirrored. In earlier versions of ZFS, loss of the log device could result in loss of the entire zpool, although this is no longer the case.
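Dedicated log and cache devices are attached to an existing pool with zpool; a sketch with made-up device names (the log is mirrored, per the advice above):

# zpool add tank log mirror c3t0d0 c3t1d0   # mirrored SLOG for synchronous writes
# zpool add tank cache c3t2d0               # L2ARC read-cache device
# zpool iostat -v tank                      # per-vdev view, including log and cache devices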

Therefore, one should upgrade ZFS if planning to use a separate log device. Copy-on-write transactional model [ ] ZFS uses a copy-on-write transactional object model.

All block pointers within the filesystem contain a 256-bit checksum or 256-bit hash (currently a choice between Fletcher-2, Fletcher-4, or SHA-256) of the target block, which is verified when the block is read. Blocks containing active data are never overwritten in place; instead, a new block is allocated, modified data is written to it, then any blocks referencing it are similarly read, reallocated, and written. To reduce the overhead of this process, multiple updates are grouped into transaction groups, and the ZIL (intent log) write cache is used when synchronous write semantics are required. The blocks are arranged in a tree, as are their checksums (see above). Snapshots and clones [ ]

An advantage of copy-on-write is that, when ZFS writes new data, the blocks containing the old data can be retained, allowing a snapshot version of the file system to be maintained. ZFS snapshots are consistent (they reflect the entire data as it existed at a single point in time), and can be created extremely quickly, since all the data composing the snapshot is already stored, with the entire storage pool often snapshotted several times per hour. They are also space efficient, since any unchanged data is shared among the file system and its snapshots. Snapshots are inherently read-only, ensuring they will not be modified after creation, although they should not be relied on as a sole means of backup. Entire snapshots can be restored, as can files and directories within snapshots.
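Snapshots are created, listed and rolled back with a handful of zfs subcommands; the dataset and snapshot names here are illustrative:

# zfs snapshot tank/home@monday     # instantaneous, space-efficient snapshot
# zfs list -t snapshot
# zfs rollback tank/home@monday     # roll the live file system back to that point in time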

Writeable snapshots ('clones') can also be created, resulting in two independent file systems that share a set of blocks. As changes are made to any of the clone file systems, new data blocks are created to reflect those changes, but any unchanged blocks continue to be shared, no matter how many clones exist. This is an implementation of the copy-on-write principle. Sending and receiving snapshots [ ]

ZFS file systems can be moved to other pools, including on remote hosts over the network, as the send command creates a stream representation of the file system's state. This stream can either describe the complete contents of the file system at a given snapshot, or it can be a delta between snapshots.
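In practice this is just zfs send piped into zfs receive, locally or over ssh; the pool, dataset and host names are illustrative:

# zfs send tank/home@monday | ssh backuphost zfs receive backup/home
# zfs send -i tank/home@monday tank/home@tuesday | ssh backuphost zfs receive backup/home   # incremental delta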

Computing the delta stream is very efficient, and its size depends on the number of blocks changed between the snapshots. This provides an efficient strategy, e.g., for synchronizing offsite backups or high availability mirrors of a pool. Dynamic striping [ ] Dynamic striping across all devices to maximize throughput means that as additional devices are added to the zpool, the stripe width automatically expands to include them; thus, all disks in a pool are used, which balances the write load across them. [ ] Variable block sizes [ ] ZFS uses variable-sized blocks, with 128 KB as the default size. Available features allow the administrator to tune the maximum block size which is used, as certain workloads do not perform well with large blocks. If data compression is enabled, variable block sizes are used. If a block can be compressed to fit into a smaller block size, the smaller size is used on the disk to use less storage and improve IO throughput (though at the cost of increased CPU use for the compression and decompression operations).
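Both knobs are ordinary per-dataset properties; a sketch (lzjb and gzip are the algorithms available on Solaris 10, while newer OpenZFS releases add lz4):

# zfs set compression=on tank/data    # transparent compression, lzjb by default
# zfs set recordsize=16K tank/db      # cap the block size for a database-style workload
# zfs get compression,compressratio,recordsize tank/data tank/db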

Lightweight filesystem creation [ ] In ZFS, filesystem manipulation within a storage pool is easier than volume manipulation within a traditional filesystem; the time and effort required to create or expand a ZFS filesystem is closer to that of making a new directory than it is to volume manipulation in some other systems. [ ] Adaptive endianness [ ] MidnightBSD, a desktop operating system derived from FreeBSD, supports ZFS storage pool version 6 as of 0.3-RELEASE. This was derived from code included in FreeBSD 7.0-RELEASE. An update to storage pool 28 is in progress in 0.4-CURRENT, based on 9-STABLE sources around FreeBSD 9.1-RELEASE code.

TrueOS [ ] TrueOS (formerly known as PC-BSD) is a desktop-oriented distribution of FreeBSD, which inherits its ZFS support. [ ] FreeNAS [ ] FreeNAS, an embedded open source network-attached storage (NAS) distribution based on FreeBSD, has the same ZFS support as FreeBSD and PC-BSD.

[ ] ZFS Guru [ ] ZFS Guru, an embedded open source network-attached storage (NAS) distribution based on FreeBSD. PfSense and PCBSD [ ] pfSense, an open source BSD-based router/firewall, and PC-BSD, a BSD-based desktop, both support ZFS (pfSense in its upcoming 2.4 release). NAS4Free [ ] NAS4Free, an embedded open source network-attached storage (NAS) distribution based on FreeBSD, has the same ZFS support as FreeBSD, with ZFS storage pool version 5000. This project is a continuation of the FreeNAS 7 series project.

Debian GNU/kFreeBSD [ ] Being based on the FreeBSD kernel, Debian GNU/kFreeBSD has ZFS support from the kernel. However, additional userland tools are required, while it is possible to have ZFS as the root or /boot file system, in which case the required configuration is performed by the Debian installer since the Wheezy release. As of 31 January 2013, the ZPool version available is 14 for the Squeeze release, and 28 for the Wheezy-9 release.

Linux [ ] Although the ZFS filesystem supports Linux-based operating systems, difficulties arise for maintainers wishing to provide native support for ZFS in their products due to the legal incompatibility between the CDDL license used by the ZFS code and the GPL license used by the Linux kernel. To enable ZFS support within Linux, a loadable kernel module containing the CDDL-licensed ZFS code must be compiled and loaded into the kernel. According to the Free Software Foundation, the wording of the GPL license legally prohibits redistribution of the resulting product as a derivative work, though this viewpoint has caused some controversy. ZFS on FUSE [ ] One potential workaround to licensing incompatibility was trialed in 2006, with an experimental port of the ZFS code to Linux's FUSE system. The filesystem ran entirely in userspace instead of being integrated into the Linux kernel, and was therefore not considered a derivative work of the kernel.

This approach was functional, but suffered from significant performance penalties when compared with integrating the filesystem as a native kernel module running in kernel space. As of 2016, the ZFS on FUSE project appears to be defunct.

Native ZFS on Linux [ ] A native port of ZFS for Linux produced by the Lawrence Livermore National Laboratory (LLNL) was released in March 2013, following these key events: • 2008: prototype to determine viability • 2009: initial ZVOL and Lustre support • 2010: development moved to GitHub • 2011: POSIX layer added • 2011: community of early adopters • 2012: production usage of ZFS • 2013: stable release As of August 2014, ZFS on Linux uses the pool version number 5000, which indicates that the features it supports are defined via feature flags. This pool version is an unchanging number that is expected to never conflict with version numbers given by Oracle.

KQ InfoTech [ ] Another native port for Linux was developed by KQ InfoTech in 2010. This port used the zvol implementation from the Lawrence Livermore National Laboratory as a starting point.

A release supporting zpool v28 was announced in January 2011. In April 2011, KQ Infotech was acquired by sTec, Inc., and their work on ZFS ceased. Source code of this port can be found on GitHub. The work of KQ InfoTech was ultimately integrated into LLNL's native port of ZFS for Linux. Source code distribution [ ] While the license incompatibility may arise with the distribution of compiled binaries containing ZFS code, it is generally agreed that distribution of the source code itself is not affected by this. In Gentoo, configuring a ZFS root filesystem is well documented and the required packages can be installed from its package repository. Slackware also provides documentation on supporting ZFS, both as a kernel module and when built into the kernel.

Ubuntu integration [ ] The question of the CDDL license's compatibility with the GPL license resurfaced in 2015, when the Ubuntu Linux distribution announced that it intended to make precompiled OpenZFS binary kernel modules available to end-users directly from the distribution's official package repositories. In 2016, Ubuntu announced that a legal review had resulted in the conclusion that providing support for ZFS via a binary kernel module was not in violation of the provisions of the GPL license. Others followed Ubuntu's conclusion, while the FSF and SFC reiterated their opposing view. Ubuntu 16.04 LTS ('Xenial Xerus'), released on April 21, 2016, allows the user to install the OpenZFS binary packages directly from the Ubuntu software repositories. As of April 2017, no legal challenge had been brought against Canonical regarding the distribution of these packages. Microsoft Windows [ ] A port of open source ZFS to Windows was attempted in 2010, but after a hiatus of over one year, development ceased in 2012.

In October 2017 a new port of OpenZFS to Windows was announced at the OpenZFS Developer Summit. • 2008: Sun shipped a line of ZFS-based 7000-series storage appliances. • 2013: Oracle shipped the ZS3 series of ZFS-based filers and seized first place in the SPC-2 benchmark with one of them. • 2013: iXsystems ships ZFS-based NAS devices called FreeNAS Mini for SOHO use and TrueNAS for the enterprise. • 2014: Netgear ships a line of ZFS-based NAS devices called ReadyDATA, designed to be used in the enterprise.

• 2015: rsync.net announces a cloud storage platform that allows customers to provision their own zpool and import and export data using zfs send and zfs receive. Detailed release history [ ] With ZFS in Oracle Solaris: as new features are introduced, the version numbers of the pool and file system are incremented to designate the format and features available. Features that are available in specific file system versions require a specific pool version. Distributed development of OpenZFS involves feature flags and pool version 5000, an unchanging number that is expected to never conflict with version numbers given by Oracle. Legacy version numbers still exist for pool versions 1–28, implied by the version 5000. Illumos uses pool version 5000 for this purpose. Future on-disk format changes are enabled / disabled independently via feature flags.

ZFS filesystem version number, release, and significant changes: • 1 — OpenSolaris Nevada build 36: first release. • 2 — OpenSolaris Nevada b69: enhanced directory entries; in particular, directory entries now store the object type (for example file, directory, named pipe, and so on) in addition to the object number. • 3 — OpenSolaris Nevada b77: support for sharing ZFS file systems over SMB.

Case insensitivity support. System attribute support. Integrated anti-virus support. Mac OS X [ ] The first indication of Apple's interest in ZFS was an April 2006 post on the opensolaris.org zfs-discuss mailing list where an Apple employee mentioned being interested in porting ZFS to their operating system.

In the release version of Mac OS X 10.5, ZFS was available in read-only mode from the command line, which lacks the possibility to create zpools or write to them. Before the 10.5 release, Apple released the 'ZFS Beta Seed v1.1', which allowed read-write access and the creation of zpools,; however, the installer for the 'ZFS Beta Seed v1.1' has been reported to only work on version 10.5.0, and has not been updated for version 10.5.1 and above.

In August 2007, Apple opened a ZFS project on their Mac OS Forge web site. On that site, Apple provided the source code and binaries of their port of ZFS which includes read-write access, but there was no installer available until a third-party developer created one. In October 2009, Apple announced a shutdown of the ZFS project on Mac OS Forge. That is to say that their own hosting and involvement in ZFS was summarily discontinued.

No explanation was given, just the following statement: 'The ZFS project has been discontinued. The mailing list and repository will also be removed shortly.' Apple would eventually release the legally required, CDDL-derived, portion of the source code of their final public beta of ZFS, code named '10a286'. Complete ZFS support was once advertised as a feature of Snow Leopard Server ( 10.6). However, by the time the operating system was released, all references to this feature had been silently removed from its features page. Apple has not commented regarding the omission. Apple's '10a286' source code release, and versions of the previously released source and binaries, have been preserved and new development has been adopted by a group of enthusiasts.

The MacZFS project acted quickly to mirror the public archives of Apple's project before the materials would have disappeared from the internet, and then to resume its development elsewhere. The MacZFS community has curated and matured the project, supporting ZFS for all Mac OS releases since 10.5. The project has an active mailing list. As of July 2012, MacZFS implements zpool version 8 and ZFS version 2, from the October 2008 release. Additional historical information and commentary can be found on the MacZFS web site and FAQ. The 17th September 2013 launch of OpenZFS included ZFS-OSX, which will become a new version of MacZFS, as the distribution for Darwin. See also [ ].

I decided to document the process of configuring a Solaris 10 server or workstation over the course of the many times I've done it, and this document has become my standard HOWTO for the task. A significant amount of inspiration for this page stemmed from a wonderful guide on Solaris configuration written by Dr. Charles Hedrick, the CTO at Rutgers University. His guide was one of my starting points for configuring Solaris 10 when I first started collecting Suns and is a very good resource in general. However, this guide diverges from it in detail and preference.

My server setup is a bit different from the one outlined by Dr. Hedrick, and I chose to start documenting the installation from an earlier stage due to the fact that I always find myself having to install Solaris myself on secondhand machines. These notes were taken during the various reinstalls of chrysalis, which I installed over a private network using another Solaris 10-based install server (herring). I'll start with the Solaris 10 (u9 in this case) DVD downloaded to herring and chrysalis having a pair of blank disks.

Setting up the install server
On the machine that will become the install server, the Solaris iso has to be mounted, and the install server must be set up. As a bit of a side note, I've heard that the JumpStart Enterprise Toolkit (JET) is the preferred way of doing network installations for many serious systems people, but I chose not to use it in this case for simplicity.

Anyway,

# lofiadm -a /home/glock/sol-10-u9-ga-sparc-dvd.iso
/dev/lofi/1
# mount -F hsfs /dev/lofi/1 /mnt
# mkdir -p /export/install

Sun provides a very handy and simple program to set up the install server. It takes a while though, as the contents of the iso have to be copied to disk. Grab a cup of coffee for this one.

# cd /mnt/Solaris_10/Tools
# ./setup_install_server /export/install/media
Verifying target directory.
Calculating the required disk space for the Solaris_10 product
Calculating space required for the installation boot image
Copying the CD image to disk.
Copying Install Boot Image hierarchy.

Copying /boot netboot hierarchy.
Install Server setup complete

Of course, this install target has to be accessible via NFS, and NFS must be enabled. Also note the Solaris 10-specific svcadm:

# share -o ro,anon=0 /export/install/media
# svcadm enable network/nfs/server

Setting up the install client can be trickier because of how the networking may be set up between your install server and the target client. Both of my machines were on the same subnet, so I did not have to specify much witchcraft.

# cd /export/install/media/Solaris_10/Tools
# ./add_install_client -e 0:3:ba:6c:ad:28 -s 192.168.2.91:/export/install/media chrysalis sun4u

In the above case, 0:3:ba:6c:ad:28 is the install target's (chrysalis's) MAC and 192.168.2.91 is the install server's (herring's) IP address. Specifying the IP is important, as I would not rely on the installation being able to resolve hostnames when it needs to mount the NFS share.

The add_install_client command automatically adds the specified MAC to /etc/ethers, but it has also proven important to add the client hostname to /etc/hosts so that the install client can be identified by the install server.

Initiating the install
On the client machine, access the console via serial, LOM, or keyboard and mouse and get to the OpenBoot prompt. Drop to the ok prompt, cross your fingers, then attempt to boot over the network.

ok> boot net

If the install client won't boot off the network, first ensure that ipfilter (or whatever firewall you are using on the install server) is not blocking tftpd and NFS. NFS is particularly difficult to pass through a firewall, so I usually toss in a rule to allow all traffic from the install client (both TCP and UDP for NFS) through. Alternatively, you can just svcadm disable network/ipfilter temporarily on the install server; just don't forget to turn it back on once the installation is complete! There shouldn't be anything tricky about the install; I generally opt to use ZFS and mirror two disks for redundancy.

I’ve encountered two problems with this that may be worth mentioning. • I had a problem with disks not appearing in the installer despite being recognized and mountable from a shell opened during the install process. This wound up being because the installer only displays disks with SMI labels as install targets. One of my disks was recycled from another zpool, and as a result it had an EFI label. To rectify this, I had to format -e to get into expert mode, then issue label and choose the SMI label over the EFI. • For whatever reason, if you want to install with ZFS as your root partition’s filesystem, you MUST use the text-mode installer.

Since chrysalis was a net install, this was not an issue; however, if you are installing via keyboard/mouse/monitor, you've got to issue boot cdrom - text to force the text-mode install (the X installer is still loaded though), or just unplug the mouse and boot cdrom to prevent X from loading at all. After the system was completely installed, I configured the system to get its IP via DHCP as well, but it appears that the installer does not completely configure the system properly for DHCP. Thus, the first thing I had to do after the installation completed was

# touch /etc/dhcp.dmfe0
# init 6

The presence of this file will instruct the system to automatically launch the DHCP client at boot and configure dmfe0 with it.

Switching from serial to SSH
I prefer to ditch the serial console as soon as I can because of how slow it is and vi's slight incompatibilities with it. The alternative to the serial console at this point is to use SSH, but there are no user accounts on the new system yet. Thus, I have to be able to ssh into the new machine as root. For security reasons, Solaris disables this by default, so what I do is enable ipfilter to cover for ssh while login-as-root is enabled.

The first step is to establish a very simple ipfilter ruleset while still logged in through the serial line. Edit /etc/ipf/ipf.conf and add these rules:

pass out quick from any to any keep state
pass in quick proto tcp from 192.168.2.2/32 to any port = 22 keep state keep frags
block in quick all

This will block everything incoming except ssh traffic coming from 192.168.2.2, which is the address of my workstation from which I will ssh. Now enable ipfilter by issuing

# svcadm enable network/ipfilter

and confirm that ipfilter is working correctly by then issuing

# svcs -a | grep ipfilter

Then, to allow the root user to log in over SSH, edit /etc/ssh/sshd_config and replace the line which reads PermitRootLogin no to read PermitRootLogin yes, and reload this configuration file so that root can SSH in by issuing svcadm refresh ssh. Now the serial console can be abandoned and the rest of this configuration procedure can be carried out over SSH. Of course, this is optional, and you can often stick to the serial console just as easily if you would like. Anyway, I ssh root@chrysalis from my workstation and I'm back to where I was but with a better terminal.

Using stronger cryptography
A high priority for me is to change the default password-hashing algorithm Solaris uses. Although this probably does not affect enterprise deployments which use LDAP or Kerberos, Solaris's choice of the standard unix crypt algorithm means all user passwords (including root!) are strictly limited to only eight characters. Eight-character passwords are too short for my preference, so I edit /etc/security/policy.conf and change

CRYPT_DEFAULT=__unix__

to

CRYPT_DEFAULT=md5

For reference:

Identifier  Algorithm   Max pass length  Compatibility  Man page
__unix__    Unix crypt  8                All            crypt_unix(5)
1           BSD MD5     255              BSD, Linux     crypt_bsdmd5(5)
2a          Blowfish    255              BSD            crypt_bsdbf(5)
md5         Sun MD5     255              -              crypt_sunmd5(5)
5           SHA-256     255              -              crypt_sha256(5)
6           SHA-512     255              -              crypt_sha512(5)

The man pages for these algorithms are also very informative, and I chose to use Sun's MD5 implementation simply because man crypt_unix suggests it. After changing this, existing passwords need to be rehashed using the new algorithm. Since only the root account exists right now (this is why I do this before making new user accounts!), this just means issuing

# passwd root

Adding new users
Many first-time Solaris users find it confusing that new user home directories cannot be made in /home. This is due to autofs, which is enabled by default in Solaris.

The premise is that new home directories can physically be scattered all over (i.e., in /export/home, on separate expansion volumes, on other machines on the network, et cetera) but all be mounted under the unified /home directory. Disabling autofs (as in other operating systems) or keeping it enabled (as in Solaris) are both options.

No autofs
To simply disable autofs, issue

# svcadm disable autofs

and the /home directory should be released and modifiable. In the case of ZFS root, it may be a good idea to make /home its own ZFS dataset for ease of management:

# rm -r /home
# zfs destroy rpool/export/home
# zfs create -o mountpoint=/home rpool/home

Of course, ZFS datasets occupy kernel memory, and this may not be advisable on low-memory systems.

With autofs
If the native autofs setup for Solaris is desired, setting it up is pretty easy. The default /etc/auto_master should contain the necessary lines already out of the box:

+auto_master
/net   -hosts     -nosuid,nobrowse
/home  auto_home  -nobrowse

The +auto_master entry defers to NIS maps if one exists; it can be removed if this server will not be using NIS.

The /home line is the one to keep, as it places the /home directory under autofs control and defers configuration options to the /etc/auto_home file. That file is also pretty simple, and needs the following line added:

* chrysalis:/export/home/&

The first column (a *) indicates that any directory under the /home directory, when queried, should be mapped under autofs. The second column (which is blank in this case) would be where the NFS flags (e.g., -intr,nosuid,hard) would be, and the third column is the device to mount. The & symbol corresponds to the * in the first line and essentially gets replaced with whatever value that * takes. For example, you could more explicitly rewrite the above line as a bunch of lines, each for an individual user:

glock chrysalis:/export/home/glock
frank chrysalis:/export/home/frank
mary  chrysalis:/export/home/mary

Using the wildcard spares you the hassle of having to edit this file every time a new user is added; however, this also puts the entire /home/* under the control of this server. If you want the ability to mount users whose home directories are on other devices or machines all into the same /home directory, you would (to the best of my knowledge) have to add each user individually, so at the very least your auto_home would look like:

glock chrysalis:/export/home/&
frank chrysalis:/export/home/&
mary  chrysalis:/export/home/&

Anyway, configuring autofs and /etc/auto_home correctly tells the automounter to NFS mount /export/home/whoever on the server chrysalis to the local /home/whoever directory whenever it is accessed.

Once the autofs config files are as they should be, reloading autofs lets us assign users' home directories to, say, /home/glock rather than /export/home/glock:

# svcadm restart autofs

Although this refresh isn't strictly necessary, it doesn't hurt to make sure you haven't entered any syntax errors by ensuring that autofs will start up correctly. As a side note, if you are using autofs to mount homes from another Solaris host, don't forget to share the home directories with this new machine! The way of doing this in ZFS is something like

# zfs set sharenfs=rw=@128.6.18.165/26:@192.168.2.0/24 rpool/export

Or, if you want to do it the old-fashioned way, edit /etc/dfs/dfstab and add a similar line. For mounting local volumes via autofs though, enabling NFS isn't necessary.

Adding a non-root user
Now to add the first user:

# zfs create rpool/home/glock
# useradd -d /home/glock -s /bin/bash -P 'Primary Administrator' glock
# passwd glock
# chown -R glock:other /home/glock

The above procedure can probably be wrapped into a script (a la adduser in Linux) for ease of use, as sketched below.
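A rough sketch of such a wrapper, assuming the same layout as above (one ZFS dataset per user under rpool/home and the default 'other' group):

#!/bin/sh
# newuser.sh -- create a ZFS-backed home directory and a Solaris account for one user
# usage: ./newuser.sh <username>
NEWUSER="$1"
if [ -z "$NEWUSER" ]; then
        echo "usage: $0 username" >&2
        exit 1
fi
zfs create rpool/home/"$NEWUSER" || exit 1
useradd -d /home/"$NEWUSER" -s /bin/bash -P 'Primary Administrator' "$NEWUSER" || exit 1
passwd "$NEWUSER"
chown -R "$NEWUSER":other /home/"$NEWUSER"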

In recent versions of Solaris 10 (update 8 or 9), I’ve found that the “Primary Administrator” profile doesn’t always exist. I’m not sure if this is due to a missing package or what, but adding the profile manually isn’t very hard.

First edit /etc/security/exec_attr and add this line:

Primary Administrator:suser:cmd:::*:uid=0;gid=0

Then add this line to /etc/security/prof_attr:

Primary Administrator:::Can perform all administrative tasks:auths=solaris.*,solaris.grant;help=RtPriAdmin.html

For completeness, install the RtPriAdmin.html file (I used to link this file here, but no longer have it) specified in /usr/lib/help/profiles/locale/C. Then either add users as specified above, or add this profile to existing users with usermod(1M):

# usermod -P 'Primary Administrator' glock

Now that a non-root user exists, root logins over ssh can be disabled again. In /etc/ssh/sshd_config, change the PermitRootLogin parameter back to no and reload the sshd configuration using svcadm refresh ssh.

At this point I logged out and logged back in under my newly created user account, then did pfexec su - to get back to where I was. Configuring some core system services Now that SSH has been locked up again, it’s time to configure a proper ruleset for IPFilter, set up system logging, and get everything up and running. Configuring IPfilter There are a lot of good guides on general network security and configuring ipfilter, the firewall provided in Solaris 10, so I will not get into the gory details here.

However, there are a few important things to note which are unique to Solaris: • If you didn't select the “minimal network daemons” group during installation, a bunch of daemons (notably telnet) will be running by default. To disable most of the unnecessary ones post-install, issue /usr/sbin/netservices limited as root. This will have the same effect as if you had chosen the minimal network daemons install option. • To quickly see what is still listening, issue netstat -an | grep LISTEN • Cross-referencing the open ports with /etc/services should give you a quick idea of what you've got running. If there are some mysterious services listening on ports of questionable necessity, you can usually find a more human-readable description of SMF services by issuing svcs -o FMRI,DESC. Knowing this, you can craft a reasonably effective ruleset such as the one sketched below for /etc/ipf/ipf.conf.
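A minimal sketch of such a ruleset (the 192.168.2.0/24 subnet is just the one used earlier in this guide, so adjust to taste):

# ipf.conf
#
# IP Filter rules to be loaded during startup
#
# See ipf(4) manpage for more information on
# IP Filter rules syntax.

# allow all outbound traffic and keep state on it
pass out quick from any to any keep state

# allow ssh from the local subnet only
pass in quick proto tcp from 192.168.2.0/24 to any port = 22 keep state keep frags

# allow pings
pass in quick proto icmp from any to any icmp-type echo keep state

# drop everything else
block in quick all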

As of December 2010, I've been having trouble using the command-line setup for sconadm. According to Oracle, the following process should work with a valid, supported Sun hardware serial number and an Oracle Single Sign-On. Even with my entitlement to download patches, I could not get sconadm to register correctly. Strangely enough, my old Sun contract number and SunSolve sign-on still work when registering through the updatemanager Java-based GUI, so I don't know what's supposed to work here. If you've got a valid Sun support contract and a production machine is being configured here, it'd probably be a good idea to register the system for easy patching.

Although this method may change in the months or years following the Oracle acquisition, as of August 2010, establishing patch authentication involves first creating a temporary configuration file (like /root/regprof) and putting the following text in it:

userName=jim@company.com
password=somepassword
hostName=chrysalis
subscriptionKey=ABC123
portalEnabled=false

Of course, you've got to already have a valid SunSolve account (jim@company.com) and its password (somepassword), and you have to have a Sun support contract number (ABC123). Since this file contains a plaintext password and contract number, the sconadm command requires that it have strict permissions before it'll accept it. Thus, to register the system for updates, do

# chmod 400 /root/regprof
# sconadm register -a -r regprof
sconadm is running
Authenticating user.
Finish registration!

# rm /root/regprof

At this point you can issue

# smpatch analyze
You have new messages. To retrieve: smpatch messages [-a]
121118-17 SunOS 5.10: Update Connection System Client 1.0.17
125555-07 SunOS 5.10: patch behavior patch
141588-04 SunOS 5.10: ksh,sh,pfksh,rksh,xargs patch
119254-75 SunOS 5.10: Install and Patch Utilities Patch
119788-10 SunOS 5.10: Sun Update Connection Proxy 1.0.9

At some point you should then actually download and apply these patches using smpatch update, but this process can take a very long time and I tend to patch overnight.

Setting up system software
Solaris now comes with a large wealth of software, but it lacks a good compiler out of the box.

GCC comes on the DVD now, but I prefer to keep stock GCC off of my system and support only Sun Studio's compilers. Should I ever come across code that requires GCC to compile correctly (e.g., the wealth of GNU garbage that, despite claiming portability, only compiles nicely on Linux+GCC), I install GCC at that point. Anyway, at the time of writing, Sun Studio 12.2 was the most recent version of Sun Studio (or Oracle Solaris Studio now), so I opted to download the package installer version for Solaris SPARC. Unfortunately it relies on a GUI installer, which is a bit silly considering most servers run headless. It also requires a huge amount of RAM to install since it decompresses to /tmp, so I had to specify a few extra parameters during installation. Provided I downloaded the installer to /root:

# mkdir /root/tmp
# bunzip2 SolarisStudio12.2-solaris-sparc-pkg-ML.tar.bz2
# tar -xvf SolarisStudio12.2-solaris-sparc-pkg-ML.tar
# cd SolarisStudio12.2-solaris-sparc-pkg-ML
# ./SolarisStudio12.2-solaris-sparc-pkg-ML.sh --non-interactive --create-symlinks --tempdir /root/tmp

The --non-interactive flag skips the GUI (and in fact all user input) and just does what it needs to do to install, including adding the optional language packs. It is worth mentioning that, in the x86 version of Solaris 10 9/10 with Solaris Studio 12.2, I had additional issues where the installer would abort, saying I needed to install patch 119961-07, but using the included ./install_patches.sh would abort because, as it turns out, I didn't have the SUNWsprot package installed (because it is not included in the End-User Distribution install).

Upon installing SUNWsprot and trying to then install 119961-07, it gave me another error about not being able to find the check-install script. A cursory glance made it appear that the install-patches.sh script included with Solaris Studio 12.2 is broken; I simply got fed up and installed 119961-07 by hand. In addition to SUNWsprot being required pre-install (on x86 at least), there are a few important packages that may or may not be necessary to install after the system is up to establish a suitable build environment.

These are a few that I've found myself needing:

• SUNWhea - headers necessary to compile anything
• SUNWtoo (and SUNWtoox in Solaris 9)
• SUNWarc (and SUNWarcx in Solaris 9)
• SUNWbtool (and SUNWbtoox in Solaris 9) - includes the non-GNU version of make
• SUNWsprot (and SUNWsprox in Solaris 9) - otherwise you get errors about libmakestate.so.1 not being found

SUNWlibm and SUNWlibmr were missing in a Solaris 10 x86 install I did, which produced compile errors like '/opt/solstudio12.2/prod/include/CC/Cstd/rw/math.h', line 60: Error: Could not open include file. I'm not sure why I've never had that problem in the SPARC installs I've done. In my notes I also have SUNWastdev (new to Solaris 10), but I don't recall why I needed it. Also, installing Sun Studio from a core install requires the SUNWadm* packages.

User PATHs
Solaris has a lot of toolchains included with it due to its long history as a standards-compliant POSIX and UNIX OS. Because of this, I suspect that it leaves the task of deciding appropriate PATHs to the user and provides a very minimal PATH by default. To address this issue and provide users with a more useful default PATH, I use a file I create called /etc/defaultpath which contains something like the following:
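A plausible sketch, using the toolchain directories mentioned elsewhere in this guide and described in filesystem(5) (adjust the Studio path to your version):

# /etc/defaultpath -- a more useful default PATH, sourced from /etc/profile
PATH=/usr/bin:/usr/sbin:/usr/ccs/bin:/opt/solstudio12.2/bin:/usr/sfw/bin:/opt/sfw/bin:/usr/openwin/bin:/usr/dt/bin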

Then I add the following line to /etc/profile, right above the export PATH:

. /etc/defaultpath

This gives all Bourne shell users a pretty useful path as soon as they log in.

A description of the various paths and the toolchains within them can be found via man -s5 filesystem.

User rc dotfiles
.bashrc
Setting up a proper bash login environment is one of the last necessary steps for me. My .bashrc looks something like the sketch below. Then, as per the Bash manual, 'typically, your ~/.bash_profile contains the line' that sources ~/.bashrc.
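A minimal sketch in the same spirit (the prompt and aliases are just examples):

# ~/.bashrc -- sourced by interactive bash shells
. /etc/defaultpath              # pick up the system-wide PATH described above
export PATH
PS1='\u@\h:\w\$ '               # user@host:cwd prompt
alias ls='ls -F'

and ~/.bash_profile simply sources it:

if [ -f ~/.bashrc ]; then . ~/.bashrc; fi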
