Commits on Source (15)
  • Automatic date update in version.in · f4b228ee
    GDB Administrator authored
  • [gdb/testsuite] Factor out proc get_portnum · b6dfea24
    Tom de Vries authored
    
    
    In gdbserver_start, we have some code that determines what port number to use:
    ...
        # Port id -- either specified in baseboard file, or managed here.
        if [target_info exists gdb,socketport] {
           set portnum [target_info gdb,socketport]
        } else {
           # Bump the port number to avoid conflicts with hung ports.
           incr portnum
        }
    ...
    
    Factor this out into a new proc get_portnum.
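
    For illustration, a minimal sketch of the factoring, assuming the quoted
    logic simply moves into the new proc (the committed version, shown in the
    diff further down, differs in details):
    ...
    proc get_portnum {} {
        if { [target_info exists gdb,socketport] } {
            # Port hard-coded in the baseboard file.
            return [target_info gdb,socketport]
        }
        # Otherwise bump a global counter to avoid conflicts with hung ports.
        global portnum
        incr portnum
        return $portnum
    }

    # The call site in gdbserver_start then reduces to:
    set portnum [get_portnum]
    ...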
    
    Tested on aarch64-linux.
    
    Approved-By: Tom Tromey <tom@tromey.com>
  • [gdb/testsuite] Make portnum a persistent global · c42c12f9
    Tom de Vries authored
    
    
    When instrumenting get_portnum using:
    ...
        puts "PORTNUM: $res"
    ...
    and running:
    ...
    $ cd build/gdb
    $ make check TESTS=gdb.server/*.exp
    ...
    we get:
    ...
    Running gdb.server/target-exec-file.exp ...
    PORTNUM: 2345
    Running gdb.server/stop-reply-no-thread-multi.exp ...
    PORTNUM: 2345
    PORTNUM: 2346
    PORTNUM: 2347
    PORTNUM: 2348
    PORTNUM: 2349
    PORTNUM: 2350
    ...
    
    So, while get_portnum does return increasing numbers within a single
    test-case, the numbering restarts for each test-case.
    
    This is a regression since the introduction of persistent globals.
    
    Fix this by using "gdb_persistent_global portnum", such that we get:
    ...
    Running gdb.server/target-exec-file.exp ...
    PORTNUM: 2345
    Running gdb.server/stop-reply-no-thread-multi.exp ...
    PORTNUM: 2346
    PORTNUM: 2347
    PORTNUM: 2348
    PORTNUM: 2349
    PORTNUM: 2350
    PORTNUM: 2351
    ...
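
    A minimal sketch of the fix (the gdb,socketport and parallel cases are
    omitted for brevity; gdb_persistent_global is assumed to behave like
    "global", except that the value is preserved from one test-case to the
    next within a runtest invocation):
    ...
    proc get_portnum {} {
        # Persistent across test-cases, unlike a plain "global" variable,
        # which is reset for each test-case.
        gdb_persistent_global portnum
        if { ![info exists portnum] } {
            # First use in this runtest invocation.
            set portnum 2345
        }
        # Return the currently available port number, and bump it.
        set res $portnum
        incr portnum
        return $res
    }
    ...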
    
    Tested on aarch64-linux.
    
    Approved-By: Tom Tromey <tom@tromey.com>
  • [gdb/testsuite] Factor out proc with_lock · fbb0edfe
    Tom de Vries authored
    
    
    Factor out proc with_lock from with_rocm_gpu_lock, and move required procs
    lock_file_acquire and lock_file_release to lib/gdb-utils.exp.
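
    For illustration, a sketch of how a caller uses the factored-out helper
    (the lock file name below is made up):
    ...
    # Run the body while holding the named lock file; locking only actually
    # happens when GDB_PARALLEL is set, otherwise the body just runs.
    with_lock my-resource.lock {
        # ... code that must not run concurrently with other runtest
        # invocations ...
    }
    ...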
    
    Tested on aarch64-linux.
    
    Approved-By: Tom Tromey <tom@tromey.com>
  • [gdb/testsuite] Factor out proc lock_dir · 007a7cb6
    Tom de Vries authored
    
    
    In lib/rocm.exp we have:
    ...
    set gpu_lock_filename $objdir/gpu-parallel.lock
    ...
    
    This decides both the lock file name and directory.
    
    Factor out a new proc lock_dir that decides on the directory, leaving just:
    ...
    set gpu_lock_filename gpu-parallel.lock
    ...
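
    The directory and the bare file name are then recombined by the lock
    helpers, roughly as follows (illustrative):
    ...
    # Full path of the lock file, rebuilt from the directory chosen by
    # lock_dir and the bare name set above.
    set lock_file [file join [lock_dir] $gpu_lock_filename]
    ...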
    
    Tested on aarch64-linux.
    
    Approved-By: Tom Tromey <tom@tromey.com>
  • [gdb/testsuite] Move gpu-parallel.lock to cache dir · a0a6e110
    Tom de Vries authored
    
    
    The lock directory returned by lock_dir is currently $objdir.
    
    It seems possible to leave behind a stale lock file that blocks progress in
    a subsequent run.
    
    Fix this by using a directory that is guaranteed to be initially empty when
    using GDB_PARALLEL, like temp or cache.
    
    In gdb/testsuite/README I found:
    ...
    cache in particular is used to share data across invocations of runtest
    ...
    which seems appropriate, so let's use cache for this.
    
    Tested on aarch64-linux.
    
    Approved-By: Tom Tromey <tom@tromey.com>
  • [gdb/testsuite] Use unique portnum in parallel testing · e82dca2a
    Tom de Vries authored
    
    
    When instrumenting get_portnum using:
    ...
    puts "PORTNUM: $res"
    ...
    and running:
    ...
    $ cd build/gdb
    $ make check-parallel -j2 TESTS=gdb.server/*.exp
    ...
    we run into:
    ...
    Running gdb.server/abspath.exp ...
    PORTNUM: 2345
    ...
    and:
    ...
    Running gdb.server/bkpt-other-inferior.exp ...
    PORTNUM: 2345
    ...
    
    This is because the test-cases are run in independent runtest invocations.
    
    Fix this by handling the parallel case in get_portnum (sketched below) using:
    - a file $objdir/cache/portnum to keep the portnum variable, and
    - a file $objdir/cache/portnum.lock to serialize access to it.
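
    A minimal sketch of that mechanism, using the with_lock and lock_dir
    helpers introduced earlier (port 2345 is the usual starting value):
    ...
    with_lock portnum.lock {
        # Keep the portnum file alongside the lock that guards it.
        set portnum_file [file join [lock_dir] portnum]
        if { [file exists $portnum_file] } {
            set fd [open $portnum_file r]
            set portnum [string trim [read $fd]]
            close $fd
        } else {
            # First runtest invocation to get here initializes the counter.
            set portnum 2345
        }
        # Hand out $portnum and record the next free port for the others.
        set fd [open $portnum_file w]
        puts $fd [expr { $portnum + 1 }]
        close $fd
    }
    ...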
    
    Tested on aarch64-linux.
    
    Approved-By: Tom Tromey <tom@tromey.com>
  • [gdb/testsuite] Use unique portnum in parallel testing (check//% case) · c479e964
    Tom de Vries authored
    
    
    The make target check//% is the gdb variant of a similar gcc make target [1].
    
    When running tests using check//%:
    ...
    $ cd build/gdb
    $ make check//unix/{-fPIE/-pie,-fno-PIE/-no-pie} -j2 TESTS=gdb.server/*.exp
    ...
    we get:
    ...
    $ cat build/gdb/testsuite.unix.-fPIE.-pie/cache/portnum
    2427
    $ cat build/gdb/testsuite.unix.-fno-PIE.-no-pie/cache/portnum
    2423
    ...
    
    The problem is that there are two portnum files used in parallel.
    
    Fix this by:
    - creating a common lockdir build/gdb/testsuite.lockdir for make target
      check//%,
    - passing this down to the runtest invocations using variable GDB_LOCK_DIR,
      and
    - using GDB_LOCK_DIR in lock_dir (see the sketch below).
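
    A rough sketch of that last step, assuming GDB_LOCK_DIR is passed through
    RUNTESTFLAGS and therefore visible as a global in the Tcl test harness:
    ...
    # Prefer the lock directory handed down by the check//% make target,
    # shared by all testsuite variants; otherwise fall back to the per-build
    # cache directory.
    proc lock_dir {} {
        if { [info exists ::GDB_LOCK_DIR] } {
            return $::GDB_LOCK_DIR
        }
        return [make_gdb_parallel_path cache]
    }
    ...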
    
    Tested on aarch64-linux.
    
    Approved-By: Tom Tromey <tom@tromey.com>
    
    PR testsuite/31632
    Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=31632
    
    [1] https://gcc.gnu.org/install/test.html
  • bus error with fuzzed archive element · c7a1fe22
    Alan Modra authored
    	* libbfd.c (bfd_mmap_local): Sanity check rsize against actual
    	file offset and size, not an archive element offset and size.
  • Remove call to dwarf2_per_objfile::adjust from ranges readers · 91fc201e
    Tom Tromey authored
    
    
    dwarf2_per_objfile::adjust applies gdbarch_adjust_dwarf2_addr to an
    address, leaving the result unrelocated.  However, this adjustment is
    only needed for text-section symbols -- it isn't needed for any sort
    of address mapping.  Therefore, these calls can be removed from
    read_addrmap_from_aranges and create_addrmap_from_gdb_index.
    
    Approved-By: Andrew Burgess <aburgess@redhat.com>
    
    
  • Remove more calls to dwarf2_per_objfile::adjust · a5a40101
    Tom Tromey authored
    As with the previous patch, this patch removes some calls to
    dwarf2_per_objfile::adjust.  These calls are not needed by the cooked
    indexer, as it does not create symbols or look up symbols by address.
    
    The call in dwarf2_ranges_read is similarly not needed, as it is only
    used to update an addrmap; and in any case I believe this particular
    call is only reached by the indexer.
    
    
  • Remove call to dwarf2_per_objfile::adjust from read_call_site_scope · 6142f7cd
    Tom Tromey authored
    read_call_site_scope does not need to call 'adjust', because in
    general the call site is not a symbol address, but rather just the
    address of some particular call.
    
    
  • Remove call to dwarf2_per_objfile::adjust from read_attribute_value · 12fddc10
    Tom Tromey authored
    Currently, read_attribute_value calls dwarf2_per_objfile::adjust on
    any address.  This seems wrong, because the address may not even be in
    the text section.
    
    Luckily, this call is also not needed, because read_func_scope calls
    'relocate', which does the same work.
    
    
  • Remove dwarf2_per_objfile::adjust · b42d6854
    Tom Tromey authored
    All the calls to dwarf2_per_objfile::adjust have been removed, so we
    can remove this function entirely.
    
    Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=31261
    
    
  • Fix heap-use-after-free in index-cached with --disable-threading · 5140d8e0
    Hannes Domani authored
    If threads are disabled, either explicitly by --disable-threading or because
    std::thread support is missing, you get the following ASAN error when
    loading symbols:
    
    ==7310==ERROR: AddressSanitizer: heap-use-after-free on address 0x614000002128 at pc 0x00000098794a bp 0x7ffe37e6af70 sp 0x7ffe37e6af68
    READ of size 1 at 0x614000002128 thread T0
        #0 0x987949 in index_cache_store_context::store() const ../../gdb/dwarf2/index-cache.c:163
        #1 0x943467 in cooked_index_worker::write_to_cache(cooked_index const*, deferred_warnings*) const ../../gdb/dwarf2/cooked-index.c:601
        #2 0x1705e39 in std::function<void ()>::operator()() const /gcc/9/include/c++/9.2.0/bits/std_function.h:690
        #3 0x1705e39 in gdb::task_group::impl::~impl() ../../gdbsupport/task-group.cc:38
    
    0x614000002128 is located 232 bytes inside of 408-byte region [0x614000002040,0x6140000021d8)
    freed by thread T0 here:
        #0 0x7fd75ccf8ea5 in operator delete(void*, unsigned long) ../../.././libsanitizer/asan/asan_new_delete.cc:177
        #1 0x9462e5 in cooked_index::index_for_writing() ../../gdb/dwarf2/cooked-index.h:689
        #2 0x9462e5 in operator() ../../gdb/dwarf2/cooked-index.c:657
        #3 0x9462e5 in _M_invoke /gcc/9/include/c++/9.2.0/bits/std_function.h:300
    
    It's happening because cooked_index_worker::wait always returns true in
    this case, which tells cooked_index::wait that it can delete its m_state
    member (the cooked_index_worker), but cooked_index_worker::write_to_cache
    tries to access it immediately afterwards.
    
    Fixed by making cooked_index_worker::wait only return true if desired_state
    is CACHE_DONE, the same as when threading is enabled, so m_state is not
    prematurely deleted.
    
    Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=31694
    
    
    Approved-By: Tom Tromey <tom@tromey.com>
@@ -1072,7 +1072,18 @@ static void *
bfd_mmap_local (bfd *abfd, size_t rsize, int prot, void **map_addr,
size_t *map_size)
{
ufile_ptr filesize = bfd_get_file_size (abfd);
/* We mmap on the underlying file. In an archive it might be nice
to limit RSIZE to the element size, but that can be fuzzed and
the offset returned by bfd_tell is relative to the start of the
element. Therefore to reliably stop access beyond the end of a
file (and resulting bus errors) we must work with the underlying
file offset and size, and trust that callers will limit access to
within an archive element. */
while (abfd->my_archive != NULL
&& !bfd_is_thin_archive (abfd->my_archive))
abfd = abfd->my_archive;
ufile_ptr filesize = bfd_get_size (abfd);
ufile_ptr offset = bfd_tell (abfd);
if (filesize < offset || filesize - offset < rsize)
{
@@ -16,7 +16,7 @@
In releases, the date is not included in either version strings or
sonames. */
#define BFD_VERSION_DATE 20240503
#define BFD_VERSION_DATE 20240504
#define BFD_VERSION @bfd_version@
#define BFD_VERSION_STRING @bfd_version_package@ @bfd_version_string@
#define REPORT_BUGS_TO @report_bugs_to@
@@ -2003,6 +2003,10 @@ check-all-boards: force
$(MAKE) $(TARGET_FLAGS_TO_PASS) check-all-boards; \
else true; fi
testsuite.lockdir: force
rm -rf $@
mkdir -p $@
# The idea is to parallelize testing of multilibs, for example:
# make -j3 check//sh-hms-sim/{-m1,-m2,-m3,-m3e,-m4}/{,-nofpu}
# will run 3 concurrent sessions of check, eventually testing all 10
@@ -2011,7 +2015,7 @@ check-all-boards: force
# used, this rule will harmlessly fail to match. Used FORCE_PARALLEL to
# prevent serialized checking due to the passed RUNTESTFLAGS.
# FIXME: use config.status --config not --version, when available.
check//%: force
check//%: force testsuite.lockdir
@if [ -f testsuite/config.status ]; then \
rootme=`pwd`; export rootme; \
rootsrc=`cd $(srcdir); pwd`; export rootsrc; \
@@ -2029,7 +2033,7 @@ check//%: force
); \
else :; fi && cd $$testdir && \
$(MAKE) $(TARGET_FLAGS_TO_PASS) \
RUNTESTFLAGS="--target_board=$$variant $(RUNTESTFLAGS)" \
RUNTESTFLAGS="GDB_LOCK_DIR=$$rootme/testsuite.lockdir --target_board=$$variant $(RUNTESTFLAGS)" \
FORCE_PARALLEL=$(if $(FORCE_PARALLEL),1,$(if $(RUNTESTFLAGS),,1)) \
"$$target"; \
else true; fi
@@ -190,8 +190,6 @@ read_addrmap_from_aranges (dwarf2_per_objfile *per_objfile,
continue;
}
ULONGEST end = start + length;
start = (ULONGEST) per_objfile->adjust ((unrelocated_addr) start);
end = (ULONGEST) per_objfile->adjust ((unrelocated_addr) end);
mutable_map->set_empty (start, end - 1, per_cu);
}
@@ -513,7 +513,7 @@ cooked_index_worker::wait (cooked_state desired_state, bool allow_quit)
#else
/* Without threads, all the work is done immediately on the main
thread, and there is never anything to wait for. */
done = true;
done = desired_state == cooked_state::CACHE_DONE;
#endif /* CXX_STD_THREAD */
/* Only the main thread is allowed to report complaints and the
@@ -567,8 +567,6 @@ create_addrmap_from_gdb_index (dwarf2_per_objfile *per_objfile,
continue;
}
lo = (ULONGEST) per_objfile->adjust ((unrelocated_addr) lo);
hi = (ULONGEST) per_objfile->adjust ((unrelocated_addr) hi);
mutable_map.set_empty (lo, hi - 1, per_bfd->get_cu (cu_index));
}
@@ -1210,17 +1210,6 @@ dwarf2_invalid_attrib_class_complaint (const char *arg1, const char *arg2)
 
/* See read.h. */
 
unrelocated_addr
dwarf2_per_objfile::adjust (unrelocated_addr addr)
{
CORE_ADDR baseaddr = objfile->text_section_offset ();
CORE_ADDR tem = (CORE_ADDR) addr + baseaddr;
tem = gdbarch_adjust_dwarf2_addr (objfile->arch (), tem);
return (unrelocated_addr) (tem - baseaddr);
}
/* See read.h. */
CORE_ADDR
dwarf2_per_objfile::relocate (unrelocated_addr addr)
{
@@ -10231,7 +10220,7 @@ read_call_site_scope (struct die_info *die, struct dwarf2_cu *cu)
sect_offset_str (die->sect_off), objfile_name (objfile));
return;
}
unrelocated_addr pc = per_objfile->adjust (attr->as_address ());
unrelocated_addr pc = attr->as_address ();
 
if (cu->call_site_htab == NULL)
cu->call_site_htab = htab_create_alloc_ex (16, call_site::hash,
@@ -10406,10 +10395,7 @@ read_call_site_scope (struct die_info *die, struct dwarf2_cu *cu)
"low pc, for referencing DIE %s [in module %s]"),
sect_offset_str (die->sect_off), objfile_name (objfile));
else
{
lowpc = per_objfile->adjust (lowpc);
call_site->target.set_loc_physaddr (lowpc);
}
call_site->target.set_loc_physaddr (lowpc);
}
}
else
@@ -10919,7 +10905,6 @@ dwarf2_ranges_read (unsigned offset, unrelocated_addr *low_return,
unrelocated_addr *high_return, struct dwarf2_cu *cu,
addrmap_mutable *map, void *datum, dwarf_tag tag)
{
dwarf2_per_objfile *per_objfile = cu->per_objfile;
int low_set = 0;
unrelocated_addr low = {};
unrelocated_addr high = {};
@@ -10930,13 +10915,10 @@ dwarf2_ranges_read (unsigned offset, unrelocated_addr *low_return,
{
if (map != nullptr)
{
unrelocated_addr lowpc;
unrelocated_addr highpc;
lowpc = per_objfile->adjust (range_beginning);
highpc = per_objfile->adjust (range_end);
/* addrmap only accepts CORE_ADDR, so we must cast here. */
map->set_empty ((CORE_ADDR) lowpc, (CORE_ADDR) highpc - 1, datum);
map->set_empty ((CORE_ADDR) range_beginning,
(CORE_ADDR) range_end - 1,
datum);
}
 
/* FIXME: This is recording everything as a low-high
@@ -15996,14 +15978,11 @@ cooked_indexer::check_bounds (cutu_reader *reader)
cu, m_index_storage->get_addrmap (), cu->per_cu);
if (cu_bounds_kind == PC_BOUNDS_HIGH_LOW && best_lowpc < best_highpc)
{
dwarf2_per_objfile *per_objfile = cu->per_objfile;
unrelocated_addr low = per_objfile->adjust (best_lowpc);
unrelocated_addr high = per_objfile->adjust (best_highpc);
/* Store the contiguous range if it is not empty; it can be
empty for CUs with no code. addrmap requires CORE_ADDR, so
we cast here. */
m_index_storage->get_addrmap ()->set_empty ((CORE_ADDR) low,
(CORE_ADDR) high - 1,
m_index_storage->get_addrmap ()->set_empty ((CORE_ADDR) best_lowpc,
(CORE_ADDR) best_highpc - 1,
cu->per_cu);
 
cu->per_cu->addresses_seen = true;
@@ -16309,13 +16288,10 @@ cooked_indexer::scan_attributes (dwarf2_per_cu_data *scanning_per_cu,
 
if (*high_pc > *low_pc)
{
dwarf2_per_objfile *per_objfile = reader->cu->per_objfile;
unrelocated_addr lo = per_objfile->adjust (*low_pc);
unrelocated_addr hi = per_objfile->adjust (*high_pc);
/* Need CORE_ADDR casts for addrmap. */
m_index_storage->get_addrmap ()->set_empty ((CORE_ADDR) lo,
(CORE_ADDR) hi - 1,
scanning_per_cu);
m_index_storage->get_addrmap ()->set_empty
((CORE_ADDR) *low_pc, (CORE_ADDR) *high_pc - 1,
scanning_per_cu);
}
}
 
@@ -17023,7 +16999,6 @@ read_attribute_value (const struct die_reader_specs *reader,
{
unrelocated_addr addr = cu_header->read_address (abfd, info_ptr,
&bytes_read);
addr = per_objfile->adjust (addr);
attr->set_address (addr);
info_ptr += bytes_read;
}
@@ -692,10 +692,6 @@ struct dwarf2_per_objfile
any that are too old. */
void age_comp_units ();
/* Apply any needed adjustments to ADDR, returning an adjusted but
still unrelocated address. */
unrelocated_addr adjust (unrelocated_addr addr);
/* Apply any needed adjustments to ADDR and then relocate the
address according to the objfile's section offsets, returning a
relocated address. */
@@ -138,3 +138,74 @@ proc version_compare { l1 op l2 } {
}
return 1
}
# Acquire lock file LOCKFILE. Tries forever until the lock file is
# successfully created.
proc lock_file_acquire {lockfile} {
verbose -log "acquiring lock file: $::subdir/${::gdb_test_file_name}.exp"
while {true} {
if {![catch {open $lockfile {WRONLY CREAT EXCL}} rc]} {
set msg "locked by $::subdir/${::gdb_test_file_name}.exp"
verbose -log "lock file: $msg"
# For debugging, put info in the lockfile about who owns
# it.
puts $rc $msg
flush $rc
return [list $rc $lockfile]
}
after 10
}
}
# Release a lock file.
proc lock_file_release {info} {
verbose -log "releasing lock file: $::subdir/${::gdb_test_file_name}.exp"
if {![catch {fconfigure [lindex $info 0]}]} {
if {![catch {
close [lindex $info 0]
file delete -force [lindex $info 1]
} rc]} {
return ""
} else {
return -code error "Error releasing lockfile: '$rc'"
}
} else {
error "invalid lock"
}
}
# Return directory where we keep lock files.
proc lock_dir {} {
if { [info exists ::GDB_LOCK_DIR] } {
# When using check//.
return $::GDB_LOCK_DIR
}
return [make_gdb_parallel_path cache]
}
# Run body under lock LOCK_FILE.
proc with_lock { lock_file body } {
if {[info exists ::GDB_PARALLEL]} {
set lock_file [file join [lock_dir] $lock_file]
set lock_rc [lock_file_acquire $lock_file]
}
set code [catch {uplevel 1 $body} result]
if {[info exists ::GDB_PARALLEL]} {
lock_file_release $lock_rc
}
if {$code == 1} {
global errorInfo errorCode
return -code $code -errorinfo $errorInfo -errorcode $errorCode $result
} else {
return -code $code $result
}
}
@@ -129,8 +129,60 @@ proc gdb_target_cmd { args } {
return [expr $res == 0 ? 0 : 1]
}
global portnum
set portnum "2345"
# Return a usable port number.
proc get_portnum {} {
if { [target_info exists gdb,socketport] } {
# Hard-coded in target board.
return [target_info gdb,socketport]
}
# Not hard-coded in target board. Return increasing port numbers,
# starting at $initial_portnum, to avoid conflicts with hung ports.
set initial_portnum 2345
if { ![info exists ::GDB_PARALLEL] } {
# Sequential case.
# Currently available port number.
gdb_persistent_global portnum
# Initialize, if necessary.
if { ![info exists portnum] } {
set portnum $initial_portnum
}
# Return currently available port number, and update it.
set res $portnum
incr portnum
return $res
}
# Parallel case.
with_lock portnum.lock {
# Keep portnum file alongside the lock that guards it.
set portnum_file [lock_dir]/portnum
if { [file exists $portnum_file] } {
set fd [open $portnum_file r]
set portnum [read $fd]
close $fd
set portnum [string trim $portnum]
} else {
# Initialize.
set portnum $initial_portnum
}
set next_portnum [expr $portnum + 1]
set fd [open $portnum_file w]
puts $fd $next_portnum
close $fd
}
return $portnum
}
# Locate the gdbserver binary. Returns "" if gdbserver could not be found.
@@ -247,16 +299,10 @@ proc gdbserver_default_get_comm_port { port } {
# Returns the target protocol and socket to connect to.
proc gdbserver_start { options arguments } {
global portnum
global GDB_TEST_SOCKETHOST
# Port id -- either specified in baseboard file, or managed here.
if [target_info exists gdb,socketport] {
set portnum [target_info gdb,socketport]
} else {
# Bump the port number to avoid conflicts with hung ports.
incr portnum
}
set portnum [get_portnum]
# Extract the local and remote host ids from the target board struct.
if { [info exists GDB_TEST_SOCKETHOST] } {
@@ -372,10 +418,11 @@ proc gdbserver_start { options arguments } {
-re "Listening on" { }
-re "Can't (bind address|listen on socket): Address already in use\\.\r\n" {
verbose -log "Port $portnum is already in use."
if ![target_info exists gdb,socketport] {
set other_portnum [get_portnum]
if { $other_portnum != $portnum } {
# Bump the port number to avoid the conflict.
wait -i $expect_out(spawn_id)
incr portnum
set portnum $other_portnum
continue
}
}
@@ -106,70 +106,17 @@ gdb_caching_proc allow_hipcc_tests {} {
# The lock file used to ensure that only one GDB has access to the GPU
# at a time.
set gpu_lock_filename $objdir/gpu-parallel.lock
# Acquire lock file LOCKFILE. Tries forever until the lock file is
# successfully created.
proc lock_file_acquire {lockfile} {
verbose -log "acquiring lock file: $::subdir/${::gdb_test_file_name}.exp"
while {true} {
if {![catch {open $lockfile {WRONLY CREAT EXCL}} rc]} {
set msg "locked by $::subdir/${::gdb_test_file_name}.exp"
verbose -log "lock file: $msg"
# For debugging, put info in the lockfile about who owns
# it.
puts $rc $msg
flush $rc
return [list $rc $lockfile]
}
after 10
}
}
# Release a lock file.
proc lock_file_release {info} {
verbose -log "releasing lock file: $::subdir/${::gdb_test_file_name}.exp"
if {![catch {fconfigure [lindex $info 0]}]} {
if {![catch {
close [lindex $info 0]
file delete -force [lindex $info 1]
} rc]} {
return ""
} else {
return -code error "Error releasing lockfile: '$rc'"
}
} else {
error "invalid lock"
}
}
set gpu_lock_filename gpu-parallel.lock
# Run body under the GPU lock. Also calls gdb_exit before releasing
# the GPU lock.
proc with_rocm_gpu_lock { body } {
if {[info exists ::GDB_PARALLEL]} {
set lock_rc [lock_file_acquire $::gpu_lock_filename]
}
set code [catch {uplevel 1 $body} result]
with_lock $::gpu_lock_filename $body
# In case BODY returned early due to some testcase failing, and
# left GDB running, debugging the GPU.
gdb_exit
if {[info exists ::GDB_PARALLEL]} {
lock_file_release $lock_rc
}
if {$code == 1} {
global errorInfo errorCode
return -code $code -errorinfo $errorInfo -errorcode $errorCode $result
} else {
return -code $code $result
}
}
# Return true if all the devices support debugging multiple processes
......