I am glad to hear that my last email helped you. Thank you very much for
sharing your results with us :-).
different media types, even having specific devices for each media type. I
hope this does not cause confusion in the future :-).
writing to the Volume. At some point, when the Media record is updated, the
"Volume already exists" error can appear.
I was able to see this behavior a few times. It happens very often if too
many concurrent jobs are run.
The first thing a job does is to reserve a volume. Suppose you have several
concurrent jobs.
Also, suppose that MaximumVolumeJobs is set to 4, for example.
because it will not go for another volume.
set a value for "MaximumVolumeJobs" greater than 1.
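For illustration, a minimal sketch of how that could look in the Pool resource
(the value 4 below is only an example, not taken from your configuration):

Pool {
  Name = Pool_Daily2Disk        # your existing pool
  Pool Type = Backup
  Maximum Volume Jobs = 4       # let several jobs share one volume before it is marked Used
}

If I remember correctly, existing volumes keep their old value, so you may also
need to refresh them with "update volume" in bconsole.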
Hope this helps, again.
Post by Jim Richardson
Ana,
Thank you very much for your response. I understand what you are saying
and have adjusted my configuration. I have tested the jobs and I now
seem to get the behavior I am looking for. Just a couple of notes.
It seems Bacula has unpredictable behavior when using disk-based
autochangers, a single file device, and a combination of Maximum
Concurrent Jobs, UseVolumeOnce, and Maximum Volume Jobs. This behavior
manifests itself in two ways. One is the "Waiting on Storage" message; the
other is that jobs that start at the same time get confused and begin to use
the same volumes, producing errors such as:
sql_create.c:387 Volume "D-994-D2D-HRMS-App.2017-07-17_19.00.00_02.bak"
already exists.
From my changes/testing, thanks to your guidance, the solution to both
seems to be to have the same number of file-based devices as you want to have
jobs running concurrently. This seems to avoid the "Waiting on Storage"
message and allows for the expected concurrency without all the unpredictable
behavior.
I will continue to monitor the behavior over the next weekly cycle and let
you know if the configuration proves to produce the expected results.
# /etc/bacula/bacula-sd.conf
Autochanger {
Name = FileChgr
Device = DailyDevice1, DailyDevice2, DailyDevice3, WeeklyDevice1,
WeeklyDevice2, WeeklyDevice3, MonthlyDevice1, MonthlyDevice2, MonthlyDevice3
Changer Command = ""
Changer Device = /dev/null
}
Device {
Name = DailyDevice1
Media Type = DailyDisk
Archive Device = /backup/bacula/daily
Autochanger = yes;
LabelMedia = yes; # lets Bacula label unlabeled media
Random Access = Yes;
AutomaticMount = yes; # when device opened, read it
RemovableMedia = no;
AlwaysOpen = no;
Maximum Concurrent Jobs = 1
}
Device {
Name = DailyDevice2
Media Type = DailyDisk
Archive Device = /backup/bacula/daily
Autochanger = yes;
LabelMedia = yes; # lets Bacula label unlabeled media
Random Access = Yes;
AutomaticMount = yes; # when device opened, read it
RemovableMedia = no;
AlwaysOpen = no;
Maximum Concurrent Jobs = 1
}
Device {
Name = DailyDevice3
Media Type = DailyDisk
Archive Device = /backup/bacula/daily
Autochanger = yes;
LabelMedia = yes; # lets Bacula label unlabeled media
Random Access = Yes;
AutomaticMount = yes; # when device opened, read it
RemovableMedia = no;
AlwaysOpen = no;
Maximum Concurrent Jobs = 1
}
Device {
Name = MonthlyDevice1
Media Type = MonthlyDisk
Archive Device = /backup/bacula/monthly
Autochanger = yes;
LabelMedia = yes; # lets Bacula label unlabeled media
Random Access = Yes;
AutomaticMount = yes; # when device opened, read it
RemovableMedia = no;
AlwaysOpen = no;
Maximum Concurrent Jobs = 1
}
Device {
Name = MonthlyDevice2
Media Type = MonthlyDisk
Archive Device = /backup/bacula/monthly
Autochanger = yes;
LabelMedia = yes; # lets Bacula label unlabeled media
Random Access = Yes;
AutomaticMount = yes; # when device opened, read it
RemovableMedia = no;
AlwaysOpen = no;
Maximum Concurrent Jobs = 1
}
Device {
Name = MonthlyDevice3
Media Type = MonthlyDisk
Archive Device = /backup/bacula/monthly
Autochanger = yes;
LabelMedia = yes; # lets Bacula label unlabeled media
Random Access = Yes;
AutomaticMount = yes; # when device opened, read it
RemovableMedia = no;
AlwaysOpen = no;
Maximum Concurrent Jobs = 1
}
*Jim Richardson*
CISSP CISA
Secur*IT*360
*Sent:* Sunday, July 16, 2017 8:40 PM
*To:* bacula-users@lists.sourceforge.net
*Subject:* Re: [Bacula-users] Job is waiting on Storage
Hi Jim,
I will try to help here.
It seems to me your C2T-Data backup job is reading from disk and writing to tape.
The disk autochanger used by this job for reading is "FileChgr" and it has
three devices, each with a different media type (DailyDisk, WeeklyDisk and
MonthlyDisk).
In this case, only one drive will be able to use "DailyDisk" media.
Since jobid=934 is using the DailyDevice for reading, you do not have any
other device to use for writing DailyDisk media, and this is why
jobids=936-939 are waiting.
Please note this kind of disk autochanger configuration is not
recommended. Instead, all drives configured for one disk autochanger should
use the same media type.
I would recommend you review your current settings so that each
autochanger deals with only one specific media type.
In your case, you will need at least one drive to be used by the C2T-Data
backup job for reading and another drive to be used by any other backup job
for writing.
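As a rough sketch of what I mean (the changer and device names below are only
placeholders), a per-media-type disk autochanger with two drives could look
like this in bacula-sd.conf:

Autochanger {
  Name = DailyChgr                      # placeholder name
  Device = DailyDevice1, DailyDevice2   # one drive can read while another writes
  Changer Command = ""
  Changer Device = /dev/null
}
Device {
  Name = DailyDevice1
  Media Type = DailyDisk                # every drive in this changer uses the same media type
  Archive Device = /backup/bacula/daily
  Autochanger = yes;
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 1
}
# DailyDevice2 would be identical apart from its Name.

The corresponding Storage resource in bacula-dir.conf would then point its
Device directive at the changer (DailyChgr) with Media Type = DailyDisk.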
Hope this helps.
Best,
Ana
Bill, thank you for your response. The C2T "Cycle to Tape" jobs are
actually functioning properly. The first job takes longer, and I have one
tape drive. I am using Priority to ensure that the C2T-Data job completes
before the C2T-Archive job. The D2D "Daily to Disk" jobs use a different
set of devices. But, if this could be the root of my problem I will
investigate. To complete the picture, the priority of the C2T-Data job is
10, the C2T-Archive is 11, and the D2D jobs are 9, except for the D2D-Bacula
post-backup job, which is 99 because I want a clean backup after all other
jobs are complete.
This is the behavior I am looking for: *from the 7.4.6 manual*: "Note
that only higher priority jobs will start early. Suppose the director will
allow two concurrent jobs, and that two jobs with priority 10 are running,
with two more in the queue. If a job with priority 5 is added to the queue,
it will be run as soon as one of the running jobs finishes. However, new
priority 10 jobs will not be run until the priority 5 job has finished."
It seems I am limited to only 2 connections to my Storage, but I can't see
where that is configured improperly.
As a quick rationale:
My DIR allows for up to 20 concurrent
My SD allows for up to 20 concurrent
My FD allows for up to 20 concurrent
My Clients allow for up to 2 concurrent (by schedule, this will only happen on Sundays)
My Bacula Client allows for up to 10 concurrent (just in case)
My Storage allows for up to 10 concurrent for each of two types Daily2Disk
& Weekly2Disk and 1 concurrent for Cycle2Tape
TapeChanger (Dell TL1000)
- ULT3580 - /dev/nst0 (IBM LTO-7)
FileChanger
- Daily2Disk - Media-Type: Daily
- Weekly2Disk - Media-Type: Weekly
- Monthly2Disk - Media-Type: Monthly
Cycle2Tape begins daily at 6PM #-- Jobs will start first
Daily2Disk begins daily at 7PM #-- Jobs will start second, except for Sundays
Daily2Disk-After Backup begins daily at 11:10 PM #-- Jobs will start last
Weekly2Disk begins Sunday at 12PM #-- Jobs will start first
-- Run down
934 Back Diff 106,028 537.4 G C2T-Data is running <- starts at 6PM with
priority 10; no other jobs running
935 Back Diff 0 0 C2T-Archive is waiting for higher priority jobs to
finish <- starts at 6PM with priority 11; job 934 is running, so job 935 waits
936 Back Full 19,943 13.58 G D2D-DC02-Application is running <- starts at
7PM with priority 9 and starts immediately, just what we want
937 Back Full 0 0 D2D-HRMS-Application is waiting on Storage
"Storage_Daily2Disk" <- starts at 7PM with priority 9, but hangs when it
should start, given the concurrency settings and the same priority as 936
938 Back Full 0 0 D2D-Fish-Application is waiting on Storage
"Storage_Daily2Disk" <- starts at 7PM with priority 9, but hangs despite
having the same priority as 936 and 937
939 Back Full 0 0 D2D-SPR01-Application is waiting on Storage
"Storage_Daily2Disk" <- starts at 7PM with priority 9, but hangs despite
having the same priority as 936, 937, and 938
/etc/bacula/bacula-dir.conf
Director {
Name = bacula-dir
DIRport = 9101
QueryFile = "/etc/bacula/query.sql"
WorkingDirectory = "/backup/bacula/spool"
PidDirectory = "/var/run"
Maximum Concurrent Jobs = 20
Password = "*"
Messages = Daemon
}
###############################################################################
#--- SCHEDULES
Schedule {
Name = "Daily2DiskCycle"
Run = Pool=Pool_Monthly2Disk 1st sun at 19:00
Run = Pool=Pool_Daily2Disk mon-sat at 19:00
Run = Pool=Pool_Weekly2Disk 2nd-5th sun at 19:00
}
Schedule {
Name = "Weekly2DiskCycle"
Run = Pool=Pool_Monthly2Disk 1st sun at 12:00
Run = Pool=Pool_Weekly2Disk sun at 12:00
}
Schedule {
Name = "Days-Diff-MTWHFSU"
Run = Full 1st sat at 19:00
Run = Differential mon-sun at 19:00
}
Schedule {
Name = "LogIT360_Cycle"
Run = Level=Full sat at 6:00
Run = Level=Differential sun at 18:00
Run = Level=Incremental mon at 18:00
Run = Level=Differential tue at 18:00
Run = Level=Incremental wed at 18:00
Run = Level=Differential thu at 18:00
Run = Level=Incremental fri at 18:00
}
# This schedule does the catalog. It starts after the other backup jobs.
Schedule {
Name = "Daily2DiskCycle-AfterBackup"
Run = Pool=Pool_Monthly2Disk 1st sun at 23:10
Run = Pool=Pool_Daily2Disk mon-sat at 23:10
Run = Pool=Pool_Weekly2Disk 2nd-5th sun at 23:10
}
###############################################################################
#--- DISK STORAGE OPTIONS
Storage {
Name = Storage_Daily2Disk
Address = backup.us.domain.com
SDPort = 9103
Password = "*"
Device = FileChgr
Media Type = DailyDisk
Maximum Concurrent Jobs = 10
}
Storage {
Name = Storage_Weekly2Disk
Address = backup.us.domain.com
SDPort = 9103
Password = "*"
Device = FileChgr
Media Type = WeeklyDisk
Maximum Concurrent Jobs = 10
}
Storage {
Name = Storage_Monthly2Disk
Address = backup.us.domain.com
SDPort = 9103
Password = "*"
Device = FileChgr
Media Type = MonthlyDisk
Maximum Concurrent Jobs = 10
}
###############################################################################
#--- TAPE STORAGE OPTIONS
Storage {
Name = Tape
Address = backup.us.domain.com
SDPort = 9103
Password = "*"
Device = "ULT3580"
Media Type = LTO-7
Maximum Concurrent Jobs = 10
Autochanger = yes
}
###############################################################################
#--- DEFAULT JOB DEFINITIONS
JobDefs {
Name = "Daily2Disk Jobs"
Type = Backup
Level = Full
Schedule = "Daily2DiskCycle"
Messages = Standard
SpoolAttributes = yes
Write Bootstrap = "/backup/bacula/spool/%c-%n-%j.bsr"
Priority = 9
Allow Mixed Priority = yes
}
JobDefs {
Name = "Weekly2Disk Jobs"
Type = Backup
Level = Full
Schedule = "Weekly2DiskCycle"
Messages = Standard
SpoolAttributes = yes
Write Bootstrap = "/backup/bacula/spool/%c-%n-%j.bsr"
Priority = 9
Allow Mixed Priority = yes
}
JobDefs {
Name = "Daily2Disk Bacula Catalog"
Type = Backup
Level = Full
Schedule = "Daily2DiskCycle-AfterBackup"
Messages = Standard
SpoolAttributes = no
Write Bootstrap = "/backup/bacula/spool/%c-%n-%j.bsr"
Priority = 99
}
JobDefs {
Name = "TapeJobs"
Type = Backup
Level = Full
Client = bacula-fd
Storage = Tape
Messages = Standard
SpoolAttributes = yes
Write Bootstrap = "/backup/bacula/spool/%c-%n-%j.bsr"
Priority = 10
Allow Mixed Priority = yes
}
###############################################################################
##--- SAMPLE CLIENT
Client {
Name = sample-fd
Address = 10.1.X.X
FDPort = 9102
Catalog = MyCatalog
Password = "*"
File Retention = 60 days
Job Retention = 6 months
AutoPrune = yes
Maximum Concurrent Jobs = 2
}
#-- SERVER JOBS
Job {
Name = "W2D-Sample-System"
Client = sample-fd
JobDefs = "Weekly2Disk Jobs"
FileSet = "Windows-Sample-System"
Pool = Pool_Weekly2Disk
RunScript {
Command = "WBADMIN START SYSTEMSTATEBACKUP -backupTarget:E: -quiet"
RunsWhen = Before
RunsOnClient = yes
}
}
Job {
Name = "D2D-Sample-Application"
Client = sample-fd
JobDefs = "Daily2Disk Jobs"
FileSet = "Windows-Sample-Application"
Pool = Pool_Daily2Disk
}
FileSet {
Name = "Windows-Sample-Application"
Include {
Options {
signature = MD5
compression = GZIP
}
File = "E:/ShareFiles"
File = "E:/Shares/Share"
File = "C:/Shares/Scans"
}
}
FileSet {
Name = "Windows-Sample-System"
Include {
Options {
signature = MD5
compression = GZIP
}
File = "E:/WindowsImageBackup"
}
}
/etc/bacula/bacula-sd.conf
Storage {
Name = bacula-sd
SDPort = 9103
WorkingDirectory = "/backup/bacula/spool"
Pid Directory = "/var/run"
Maximum Concurrent Jobs = 20
}
Autochanger {
Name = FileChgr
Device = DailyDevice, WeeklyDevice, MonthlyDevice
Changer Command = ""
Changer Device = /dev/null
}
Device {
Name = DailyDevice
Media Type = DailyDisk
Archive Device = /backup/bacula/daily
Autochanger = yes;
LabelMedia = yes;
Random Access = Yes;
AutomaticMount = yes;
RemovableMedia = no;
AlwaysOpen = no;
Maximum Concurrent Jobs = 10
}
Device {
Name = WeeklyDevice
Media Type = WeeklyDisk
Archive Device = /backup/bacula/weekly
Autochanger = yes;
LabelMedia = yes;
Random Access = Yes;
AutomaticMount = yes;
RemovableMedia = no;
AlwaysOpen = no;
Maximum Concurrent Jobs = 10
}
Device {
Name = MonthlyDevice
Media Type = MonthlyDisk
Archive Device = /backup/bacula/monthly
Autochanger = yes;
LabelMedia = yes;
Random Access = Yes;
AutomaticMount = yes;
RemovableMedia = no;
AlwaysOpen = no;
Maximum Concurrent Jobs = 10
}
Autochanger {
Name = "Dell-TL1000"
Device = ULT3580
Description = "Dell TL1000 (model IBM 3572-TL)"
Changer Device = /dev/sg5
Changer Command = "/usr/local/sbin/mtx-changer %c %o %S %a %d"
}
Device {
Name = ULT3580
Description = "IBM ULT3580-HH7"
Media Type = LTO-7
Archive Device = /dev/nst0
Label Media = yes
# Label Type = IBM;
AutomaticMount = yes;
AlwaysOpen = yes;
RemovableMedia = yes;
RandomAccess = no;
AutoChanger = yes;
Changer Device = /dev/sg5
Drive Index = 0
Spool Directory = /backup/bacula/spool
Changer Command = "/usr/libexec/bacula/mtx-changer %c %o %S %a %d"
# Enable the Alert command only if you have the mtx package loaded
Alert Command = "sh -c 'tapeinfo -f %c |grep TapeAlert|cat'"
Maximum Concurrent Jobs = 1
}
/etc/bacula/bacula-fd.conf
FileDaemon {
Name = bacula-fd
FDport = 9102
WorkingDirectory = /var/spool/bacula
Pid Directory = /var/run
Maximum Concurrent Jobs = 20
Plugin Directory = /usr/lib64/bacula
}
Thank you again and I hope we can find a resolution :-)
Jim Richardson
-----Original Message-----
Sent: Friday, July 14, 2017 12:08 AM
Subject: Re: [Bacula-users] Job is waiting on Storage
I can't seem to get Bacula to run simultaneous jobs when using the
same storage device. Can anyone offer advice?
Console connected at 13-Jul-17 19:37
JobId Type Level Files Bytes Name Status
======================================================================
934 Back Diff 106,028 537.4 G C2T-Data is running
935 Back Diff 0 0 C2T-Archive is waiting for higher priority jobs to finish
936 Back Full 19,943 13.58 G D2D-DC02-Application is running
937 Back Full 0 0 D2D-HRMS-Application is waiting on Storage "Storage_Daily2Disk"
938 Back Full 0 0 D2D-Fish-Application is waiting on Storage "Storage_Daily2Disk"
939 Back Full 0 0 D2D-SPR01-Application is waiting on Storage "Storage_Daily2Disk"
Hi Jim,
To me, it looks like your settings are correct regarding
MaximumConcurrentjobs (MCJ)...
What I think is going on here is that jobid 935 is holding everything else
up due to it having a different priority.
Notice that its status is: "waiting for higher priority jobs to finish"
Unless you have set "AllowMixedPriority" in your Job resources, the
other jobs will wait until this one is finished. Personally, I do not
recommend that this be set, as it causes more confusion than clarity in my
opinion.
Just an FYI: The status "is waiting for higher priority jobs to finish",
in my humble opinion is not really 100% correct. It could be that it "is
waiting on LOWER priority jobs to finish", but the same message is printed
in both cases.
I think this message could be more specific to the actual case, or made
more generic to say "waiting on jobs of different priorities to finish, and
'AllowMixedPriority' not enabled..." (something like this)
I wonder, though, why jobid 936 (after 935) is listed as running...
Perhaps check its priority to see if it is the same as jobid 934 "C2T-Data"
If you set the "C2T-Archive" job's priority to the same priority as the
other backup jobs, then it will not be held up, and it will not hold up any
other queued jobs.
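For example (I am assuming here that your C2T-Archive job uses the "TapeJobs"
JobDefs; the actual Job resource is not shown in your post):

Job {
  Name = "C2T-Archive"
  JobDefs = "TapeJobs"       # assumed; keep whatever you have now
  # ... your existing Client, FileSet, Pool, etc. directives unchanged ...
  Priority = 10              # match C2T-Data so this job no longer blocks the queue
}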
You can investigate the "AllowMixedPriority" option, but I think it may
not do what you want (exactly).
Another option is to set up a schedule to try to make sure this "Archive"
job is run when no other normal backup jobs are running.
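Something along these lines, for instance (the schedule name and the 04:00
slot are only examples; pick a window when nothing else runs):

Schedule {
  Name = "ArchiveCycle"                        # hypothetical schedule name
  Run = Level=Differential mon-sun at 04:00    # example: a quiet window after the nightly jobs
}

and then reference it with Schedule = "ArchiveCycle" in the C2T-Archive Job
resource.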
Best regards,
Bill
--
Bill Arlofski
http://www.revpol.com/bacula
-- Not responsible for anything below this line --